This document discusses various types of image operations and characteristics. It summarizes that image operations can be classified as point, local, or global based on whether the output value at a coordinate depends only on the input value at that coordinate, neighboring values, or all input values respectively. Local neighborhood operations are important and neighborhoods can be rectangular or hexagonal based on image sampling. The document also discusses parameters for video images, importance of phase and magnitude for image reconstruction, statistical descriptions of images using probability distributions and histograms, contour representations using chain codes and crack codes, and sources of noise in digital images.
IMAGE PROCESSING
CHARACTERISTICS OF IMAGE OPERATIONS:
There are a variety of ways to classify and characterize image operations. The reason for doing so is to understand what type of result we might expect from a given type of operation, and what computational burden is associated with it.

Types of operations: The operations that can be applied to digital images to transform an input image a[m,n] into an output image b[m,n] (or another representation) can be classified into three categories.

- Point: the output value at a specific coordinate depends only on the input value at that same coordinate.
- Local: the output value at a specific coordinate depends on the input values in a neighborhood of that same coordinate.
- Global: the output value at a specific coordinate depends on all the values in the input image.

Note: complexity is specified in operations per pixel.

[Figure: illustration of the various types of image operations.]

Types of neighborhoods: Neighborhood operations play a key role in modern digital image processing. It is therefore important to understand how images can be sampled and how that relates to the various neighborhoods that can be used to process an image.

- Rectangular sampling: in most cases, images are sampled by laying a rectangular grid over the image, as illustrated in the figure above. This results in the type of sampling shown in the figure below.
- Hexagonal sampling: an alternative sampling scheme, shown in the figure below, is termed hexagonal sampling.

Both sampling schemes have been studied extensively and both represent a possible periodic tiling of the continuous image space. We will restrict our attention to rectangular sampling, however, as it remains, for hardware and software reasons, the method of choice.
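To make the three categories concrete, here is a minimal NumPy sketch (the image and parameter values are illustrative, not from the source): a threshold as a point operation, a 3x3 mean as a local operation, and a Fourier transform as a global operation.

```python
import numpy as np

# Hypothetical 8-bit input image a[m, n].
a = np.random.randint(0, 256, size=(64, 64)).astype(np.uint8)

# Point operation: b[m, n] depends only on a[m, n] (here, a threshold).
b_point = np.where(a > 128, 255, 0).astype(np.uint8)

# Local operation: b[m, n] depends on a neighborhood of a[m, n]
# (here, a 3x3 mean, written with explicit loops for clarity).
padded = np.pad(a.astype(np.float64), 1, mode="edge")
b_local = np.empty(a.shape, dtype=np.float64)
for m in range(a.shape[0]):
    for n in range(a.shape[1]):
        b_local[m, n] = padded[m:m + 3, n:n + 3].mean()

# Global operation: b[m, n] depends on every input pixel
# (here, the 2D Fourier transform).
b_global = np.fft.fft2(a)
```

In terms of the operations-per-pixel measure mentioned above, the point operation costs a constant per pixel, the local operation roughly P*P operations per pixel for a PxP window, and the global operation on the order of the image size per pixel (or its logarithm, when an FFT is used).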
Some of the most common neighborhoods are the 4-connected neighborhood and the 8-connected neighborhood in the case of rectangular sampling, and the 6-connected neighborhood in the case of hexagonal sampling, illustrated in the figure.

[Figure (a): rectangular sampling, 4-connected. Figure (b): rectangular sampling, 8-connected. Figure (c): hexagonal sampling, 6-connected.]

VIDEO PARAMETERS:

We do not propose to describe the processing of dynamically changing images in this introduction. Given that many static images are derived from video cameras and frame grabbers, however, it is appropriate to mention the standards associated with the three video schemes currently in worldwide use: NTSC, PAL, and SECAM.

In an interlaced image the odd-numbered lines (1, 3, 5, ...) are scanned in half of the allotted time (e.g. 20 ms in PAL) and the even-numbered lines (2, 4, 6, ...) are scanned in the remaining half. The image display must be coordinated with this scanning format. The reason for interlacing the scan lines of a video image is to reduce the perception of flicker in the displayed image. If one plans to use images scanned from an interlaced video source, it is important to know whether the two half-images have been appropriately "shuffled" by the digitization hardware or whether that must be implemented in software. Further, the analysis of moving objects requires special care with interlaced video to avoid "zigzag" edges.

The number of rows (N) from a video source generally corresponds one-to-one with lines in the video image. The number of columns, however, depends on the electronics used to digitize the image. Different frame grabbers for the same video camera might produce M = 384, 512, or 768 columns (pixels) per line.

IMPORTANCE OF PHASE AND MAGNITUDE:

Both the magnitude and the phase functions are necessary for the complete reconstruction of an image from its Fourier transform.
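The 4- and 8-connected neighborhoods can be written down directly as coordinate offsets. The helper below is an illustrative sketch, not code from the source; it also shows that border pixels have fewer in-bounds neighbors.

```python
# Offsets of the 4-connected and 8-connected neighborhoods
# under rectangular sampling; the names N4/N8 are illustrative.
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def neighbors(m, n, shape, offsets):
    """Yield the in-bounds neighbor coordinates of pixel (m, n)."""
    rows, cols = shape
    for dm, dn in offsets:
        mm, nn = m + dm, n + dn
        if 0 <= mm < rows and 0 <= nn < cols:
            yield mm, nn

# An interior pixel has the full neighborhood; a corner pixel has fewer.
print(len(list(neighbors(5, 5, (10, 10), N8))))  # 8
print(len(list(neighbors(0, 0, (10, 10), N4))))  # 2
```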
Figure (a) shows what happens when Figure (1) is restored solely on the basis of the magnitude information, and Figure (b) shows what happens when Figure (1) is restored solely on the basis of the phase information.

[Figure (1): original test image.]
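This reconstruction experiment can be reproduced with NumPy's FFT; the random test image below is merely a stand-in for Figure (1), and real images behave the same way.

```python
import numpy as np

# Toy image standing in for Figure (1).
rng = np.random.default_rng(0)
img = rng.random((32, 32))

F = np.fft.fft2(img)
mag, phase = np.abs(F), np.angle(F)

# Magnitude-only reconstruction: keep |F|, discard the phase.
mag_only = np.fft.ifft2(mag).real

# Phase-only reconstruction: keep the phase, set the magnitude to 1.
phase_only = np.fft.ifft2(np.exp(1j * phase)).real

# Keeping both recovers the original image (up to rounding error).
both = np.fft.ifft2(mag * np.exp(1j * phase)).real
assert np.allclose(both, img)
```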
[Figure (a): magnitude-only reconstruction. Figure (b): phase-only reconstruction.]

Neither the magnitude information nor the phase information alone is sufficient to restore the image. The magnitude-only image (Figure a) is unrecognizable and has severe dynamic range problems. The phase-only image (Figure b) is barely recognizable; that is, it is severely degraded in quality.

STATISTICS:

In image processing it is quite common to use simple statistical descriptions of images and sub-images. The notion of a statistic is intimately connected to the concept of a probability distribution, generally the distribution of signal amplitudes. For a given region, which could conceivably be an entire image, we can define the probability distribution function of the brightness in that region and the probability density function of the brightness in that region. In the discussion that follows we assume that we are dealing with a digitized image a[m,n].

Probability distribution function of the brightness: The probability distribution function P(a) is the probability that a brightness chosen from the region is less than or equal to a given brightness value a. As a increases from -infinity to +infinity, P(a) increases from 0 to 1. P(a) is monotonic and nondecreasing in a, and thus dP/da >= 0.

Probability density function of the brightness: The probability that a brightness in the region falls between a and a + Δa, given the probability distribution function P(a), can be expressed as p(a)Δa, where p(a) is the probability density function:

    p(a) = dP(a)/da

The brightness distribution function for the image shown in Figure (1) is shown in Figure (a). The (unnormalized) brightness histogram of Figure (1), which is proportional to the estimated brightness probability density function, is shown in Figure (b). The height at each point of this histogram corresponds to the number of pixels with that brightness.

[Figure (a): brightness distribution function. Figure (b): brightness histogram.]

Both the distribution function and the histogram, as measured from a region, are statistical descriptions of that region.
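A minimal sketch of estimating the histogram, p(a), and P(a) from a digitized region, assuming 8-bit brightness values (the image here is synthetic, not from the source):

```python
import numpy as np

# Hypothetical 8-bit image region a[m, n].
rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(100, 100))

# Unnormalized brightness histogram: pixel count per brightness value.
hist = np.bincount(a.ravel(), minlength=256)

# Estimated density p(a): the histogram normalized to unit sum.
p = hist / hist.sum()

# Estimated distribution P(a): the running sum of p, rising from 0 to 1.
P = np.cumsum(p)

assert np.isclose(P[-1], 1.0)    # P(a) reaches 1
assert np.all(np.diff(P) >= 0)   # P(a) is nondecreasing (dP/da >= 0)
```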
It must be emphasized that both P(a) and p(a) should be viewed as estimates of the true distributions when they are computed from a specific region. That is, we view an image and a specific region as one realization of the various random processes involved in the formation of that image and that region.

CONTOUR REPRESENTATIONS:

When dealing with a region or object, several compact representations are available that can facilitate manipulation of, and measurements on, the object. In each case we assume that we begin with an image representation of the object. Several techniques exist to represent the region or object by describing its contour.

Chain code: We follow the contour in a clockwise manner and keep track of the directions as we go from one contour pixel to the next. For the standard implementation of the chain code we consider a contour pixel to be an object pixel that has a background (non-object) pixel as one or more of its 4-connected neighbors. The codes associated with the eight possible directions are the chain codes and, with x as the current contour pixel position, they are generally defined as:

    3 2 1
    4 x 0        (chain codes)
    5 6 7

[Figure: region (shaded) as it is transformed from (a) continuous to (b) discrete form, and then considered as (c) a contour or (d) run lengths, illustrated in alternating colors.]

Chain code properties:

- Even codes {0, 2, 4, 6} correspond to the horizontal and vertical directions; odd codes {1, 3, 5, 7} correspond to the diagonal directions.
- Each code can be considered as the angular direction, in multiples of 45 degrees, that we must move to go from one contour pixel to the next.
- The absolute coordinates [m,n] of the first contour pixel, together with the chain code of the contour, represent a complete description of the discrete region contour.
- When there is a change between two consecutive chain codes, the contour has changed direction; such a point is defined as a corner.

Crack code: An alternative to the chain code for contour encoding is to use neither the contour pixels associated with the object nor the contour pixels associated with the background, but rather the line, the "crack", in between. The crack code can be viewed as a chain code with four possible directions instead of eight:

      1
    2 x 0        (crack codes)
      3

[Figure: (a) object including the part to be studied; (b) contour pixels as used in the chain code, diagonally shaded. The "crack" is shown with the thick black line.]

Run codes: A third representation is based on coding the consecutive pixels along a row (a run) that belong to an object, by giving the starting position and the ending position of the run. There are a number of alternatives for the precise definition of these positions; which one should be used depends upon the application.

NOISE:

Images acquired through modern sensors may be contaminated by a variety of noise sources. By noise we refer to stochastic variations, as opposed to deterministic distortions such as shading or lack of focus. For this section we assume that we are dealing with images formed from light using modern electro-optics; in particular, we assume the use of modern charge-coupled device (CCD) cameras, in which photons produce electrons that are commonly referred to as photoelectrons. Nevertheless, most of the observations we shall make about noise and its various sources hold equally well for other imaging modalities. While modern technology has made it possible to reduce the noise levels associated with various electro-optical devices to almost negligible levels, one noise source can never be eliminated and thus forms the limiting case when all other noise sources are "eliminated".
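As a concrete companion to the contour representations above, the following illustrative sketch (the helper names are hypothetical) converts an ordered list of 8-connected contour pixels into its chain code, using the eight-direction layout given in the chain-code description.

```python
# Map a step (dm, dn) between consecutive contour pixels to its chain
# code. Codes follow the layout in the text: 0 = right (+n), then
# counterclockwise in 45-degree steps, with m increasing downward.
STEP_TO_CODE = {
    (0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
    (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7,
}

def chain_code(contour):
    """Chain code of an ordered list of 8-connected contour pixels (m, n)."""
    codes = []
    for (m0, n0), (m1, n1) in zip(contour, contour[1:]):
        codes.append(STEP_TO_CODE[(m1 - m0, n1 - n0)])
    return codes

# A small square traversed clockwise, starting at its top-left pixel.
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(chain_code(square))  # [0, 6, 4, 2]
```

Note that, as the text states, the starting coordinates plus this code list fully describe the discrete contour, and each change between consecutive codes marks a corner.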
PHOTON NOISE:

When the physical signal that we observe is based upon light, the quantum nature of light plays a significant role. A single photon at λ = 500 nm carries an energy of E = hν = hc/λ = 3.97 × 10⁻¹⁹ joules. Modern CCD cameras are sensitive enough to count individual photons. The noise problem arises from the fundamentally statistical nature of photon production: we cannot assume that, in a given pixel, two consecutive but independent observation intervals of length T will yield the same number of photons. Photon production is governed by the laws of quantum physics, which restrict us to talking about an average number of photons within a given observation window.

THERMAL NOISE:

An additional stochastic source of electrons in a CCD well is thermal energy. Electrons can be freed from the CCD material itself through thermal vibration and then, trapped in the CCD well, be indistinguishable from "true" photoelectrons. By cooling the CCD chip it is possible to significantly reduce the number of "thermal electrons" that give rise to thermal noise, or dark current.

AMPLIFIER NOISE:

The standard model for this type of noise is additive, Gaussian, and independent of the signal. In modern well-designed electronics, amplifier noise is generally negligible.
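These three noise sources can be simulated under the usual textbook assumptions: Poisson statistics for both photoelectrons and dark current, and additive Gaussian amplifier (read) noise. All the numbers below are illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-pixel means for a single exposure.
signal = np.full((64, 64), 100.0)   # expected photoelectrons per pixel
dark = 5.0                          # thermal ("dark current") electrons
read_sigma = 2.0                    # amplifier noise, electrons RMS

# Photon and thermal noise are Poisson; amplifier noise is Gaussian.
photoelectrons = rng.poisson(signal)
thermal = rng.poisson(dark, size=signal.shape)
amplifier = rng.normal(0.0, read_sigma, size=signal.shape)

observed = photoelectrons + thermal + amplifier

# Poisson statistics: the variance of the photon count equals its
# mean, so the photon-limited SNR grows only as sqrt(mean count).
print(photoelectrons.mean(), photoelectrons.var())  # both near 100
```

This is why photon noise is the limiting case the text describes: unlike thermal and amplifier noise, it cannot be engineered away, only averaged down by collecting more light.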
The most common exception is in color cameras, where more amplification is used in the blue channel than in the green or red channels, leading to more noise in the blue channel.

PERCEPTION:

Many image-processing applications are intended to produce images that will be viewed by human observers (as opposed to, say, automated industrial inspection). It is therefore important to understand the characteristics and limitations of the human visual system: the "receiver" of the 2D signals. At the outset it is important to realize that (1) the human visual system is not well understood, (2) no objective measure exists for judging the quality of an image that corresponds to human assessment of image quality, and (3) the "typical" human observer does not exist. Nevertheless, research in perceptual psychology has provided some important insights into the visual system.

APPLICATIONS: CAMERAS:

The cameras and recording media available for modern digital image processing applications are changing at a significant pace.

Video cameras: Shutter speeds as low as 500 ns are available with commercial CCD video cameras, although the more conventional exposure times for video are 33.37 ms (NTSC) and 40.0 ms (PAL, SECAM). Values as high as 30 s may also be achieved with certain video cameras, although this means sacrificing a continuous stream of video images that contain signal in favor of a single integrated image amongst a stream of otherwise empty images. Subsequent digitizing hardware must be capable of handling this situation.

Scientific cameras: Again, values as low as 500 ns are possible and, with cooling techniques based on Peltier cooling or liquid-nitrogen cooling, integration times in excess of one hour are readily achieved.

CONCLUSIONS:

Image processing is one of the pioneering applications of computer graphics. Because of its vitality, it is treated as a separate subject.
In image processing, many new techniques have been developed, and are still being developed, to overcome the disturbances created by noise when acquiring images through modern sensors. In present technology, movies consist largely of animation and graphics, and image processing plays a major role in animation. The importance of image processing will therefore increase to a very large extent in the future. Image processing requires high-level involvement: an understanding of the system aspects of graphics software, a realistic feeling for graphics, system capabilities, and ease of use.