Sudipan Saha Mtech Ultrasound CAD
M.Tech. Dissertation
Master of Technology
by
Sudipan Saha
under the guidance of
Prof. S. N. Merchant
June 2014
Abstract
Ultrasound imaging is a popular medical imaging modality which is routinely used in
many clinical applications. Computer assistance can make diagnosis comparatively
easier and more efficient. But devising computer aided diagnosis tools for ultrasound
images is comparatively more complicated than for other medical imaging modalities,
because of fewer visual cues and the presence of speckle noise. Many algorithms have
been proposed in the literature for the removal of speckle noise. In this work we have
studied some of the existing methods of speckle noise removal, which can be termed
preprocessing. More emphasis has been laid on the post-processing tasks, i.e. on studying
and devising algorithms related to computer aided diagnosis. We have grouped computer
aided diagnosis algorithms into two groups. The first group consists of those algorithms
which are based on explicit methods and do not depend on machine learning. In this
work, explicit algorithms have been proposed for artery detection, cyst localization and
gallstone detection. The second group consists of those algorithms which rely on machine
learning. Some of the existing popular machine learning methods have been thoroughly
studied, and new algorithms have also been proposed. The algorithms have been tested
on a prostate cancer database. Ultrasound imaging devices are often misused for illegal
abortion. In this work, the scope of using image processing techniques to stop the misuse
of ultrasound imaging devices has also been explored.
Contents
Abstract i
List of Figures v
List of Tables vi
Abbreviations vii
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Organization of the report . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
3.3.4 PSNR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3.5 Correlation coefficient . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3.6 Speckle index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
6.5.3 Shape based features . . . . . . . . . . . . . . . . . . . . . . . . . . 41
List of Figures
3.1 Ultrasound phantom image and results after applying various speckle re-
ducing techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4.1 Two different instances of artery detection. Detected artery shown by red
circle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.2 Detected cyst shown in red . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.3 Illustration of gallstone detection . . . . . . . . . . . . . . . . . . . . . . . 21
List of Tables
Abbreviations
Chapter 1
Introduction
1.1 Motivation
Ultrasound imaging is one of the most popular medical imaging techniques. The main
reasons behind its popularity are its non-invasive nature and its suitability for real time
imaging. It is also comparatively less expensive than other medical imaging modalities.
It is routinely used in many clinical applications and in different areas of biomedical research.
Though it is already a mature technology, it is not stagnant, as ultrasound technology
is evolving every day. It is striving towards more automation and a decrease in the size of
the ultrasound device, thus increasing portability. Ultrasound imaging can be more effec-
tively utilized if coupled with suitable image processing modules to perform automated
diagnosis. But image processing is comparatively more intricate for ultrasound images
than for other medical images, due to the presence of multiplicative noise and the absence
of strong visual cues. So algorithms specialized for the ultrasound domain need to be devised.
Currently, much attention is being paid by the research community across the globe to
developing ultrasound image processing based computer aided diagnosis (CAD) techniques.
Also, since the advent of ultrasound imaging, it has been misused for gender detection of
the fetus and illegal abortion. Besides automating the diagnosis techniques, it is also
essential to develop techniques to stop the misuse of ultrasound imaging for such illegal
purposes. Comparatively less attention has been paid to this aspect by the research
community. In this thesis, we have dealt with B-mode (basic two dimensional intensity
mode) ultrasound images. Basic preprocessing steps required for ultrasound image
processing, like speckle noise removal, have been studied. Major aspects and algorithms
related to CAD have been studied and new algorithms have been proposed. We have also
studied and proposed algorithms to stop the misuse of ultrasound imaging for illegal abortion.
1.2 Contribution
1. Various existing speckle noise reduction techniques have been studied and a few of
those have been implemented.
2. Computer aided diagnosis techniques have been exhaustively studied from both the
explicit (non-machine learning) perspective and the machine learning based perspec-
tive. Novel rule based algorithms have been devised and implemented for carotid
artery detection. Various machine learning techniques have been studied for our
application and some of those have been implemented. New algorithms have also
been proposed.
3. Various possible techniques for stopping the misuse of ultrasound devices for fetal
abortion have been studied.
4. Other perspectives have also been looked at, e.g. social impact analysis of the misuse
of ultrasound devices for illegal abortion and feasibility analysis of ultrasound image
processing algorithms on the Android platform.
Chapter 2
1. A (Amplitude) mode imaging, in which the echoes along a single scan-line are dis-
played as amplitude versus depth.
2. B (Brightness) mode imaging, which is performed by sweeping the transmitted ul-
trasound wave over a plane to generate a 2D image.
3. M (Motion) mode in which pulses are emitted in quick succession and each time
either an A-mode or a B-mode image is captured. It is used to analyze moving
organs.
4. CW (Continuous Wave) Doppler which can be used for estimating blood flow in
veins.
5. PW (Pulse Wave) Doppler, which can be used to estimate both the velocity and the
location of blood flow.
6. Color Doppler, where Pulse Wave Doppler is used to create a color image which
is superimposed on the B-mode image. A color code is used to indicate the direction
of the flow. Doppler can differentiate between RBCs moving away from and moving
towards the probe [2]. RBCs moving away from the probe return the ultrasound wave
at a lower frequency and are displayed as blue. However, the color can vary depending
on the angle between the probe and the skin, so color cannot be used as a very reliable
indicator.
Based on the echogenicity of the returning echoes, regions in an ultrasound image can be
grouped as follows:
1. Hyper-echoic areas, which appear whitish and have a large amount of energy in the
returning echoes. As there is a large difference between the acoustic impedance of
bone and that of soft tissue, the bone and soft tissue interface produces a hyper-echoic
image.
2. Hypo-echoic areas, which appear between white and black; the amount of energy in
the returning echo is medium. As the difference in acoustic impedance between various
types of soft tissue such as blood, muscle and fat is very small, their interfaces result
in a hypo-echoic image.
3. Anechoic areas, which are seen as black. As blood is relatively homogeneous in its
acoustic impedance, the ultrasound wave can pass through it with ease, so large
blood vessels are anechoic in appearance.
Ultrasound probes or transducers can be classified based on their frequency range [2]. High
frequency probes have better axial and lateral resolution, but greater tissue attenuation.
Attenuation is mainly caused by conversion of the wave energy to heat. Due to attenuation,
imaging of deeper structures is not possible with high frequency probes, so high frequency
probes are better suited for capturing small, shallow structures. On the other hand, low
frequency probes have greater tissue penetration, but poorer resolution, so low frequency
probes are suitable for capturing larger objects.
An ultrasound system overview is shown in Fig. 2.1. The first stage is beam forming.
Transducers are aligned to focus the ultrasound waves at different focal lengths along a
given scan-line. The transducers are repeatedly switched on and off, and listen to the
reflected wave during the off time. When passing through the human body, a portion of
the ultrasound wave is absorbed by the body, so waves traveling deeper become weaker.
To compensate for this distance varying signal attenuation, a distance (time) varying gain is
applied to the reflected signal, which is called time gain compensation [5]. After
beam-forming the signal is passed through the signal processing blocks, which include
filtering, envelope extraction and log compression. Band pass filtering is done to reduce noise
outside the band of interest. Then the envelope is extracted, which is further log compressed to
achieve the dynamic range required for display [3]. The processed signal is then
fed to the scan conversion module which generates the image [5].
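As a concrete illustration of this chain, the following is a minimal Python sketch (an illustration under simple assumptions, not the processing of any particular device): time gain compensation with an assumed attenuation coefficient, envelope extraction via the Hilbert transform, and log compression to an assumed dynamic range, applied to a single received scan-line.

```python
import numpy as np
from scipy.signal import hilbert

def process_scanline(rf, fs=20e6, c=1540.0, alpha_db_per_m=50.0, dyn_range_db=60.0):
    """Turn one received RF scan-line into display-ready log-compressed samples."""
    t = np.arange(rf.size) / fs                      # time of flight of each sample
    depth = c * t / 2.0                              # two-way travel -> depth in metres
    tgc = 10.0 ** (alpha_db_per_m * depth / 20.0)    # time gain compensation
    rf_tgc = rf * tgc

    envelope = np.abs(hilbert(rf_tgc))               # envelope extraction
    envelope /= envelope.max() + 1e-12

    log_env = 20.0 * np.log10(envelope + 1e-12)      # log compression (dB)
    log_env = np.clip(log_env, -dyn_range_db, 0.0)   # limit to the display dynamic range
    return (log_env + dyn_range_db) / dyn_range_db   # map to [0, 1] for display
```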
Figure 2.1: Block diagram of ultrasound imaging device
5. Ultrasound scanning gives a clear image of soft tissues which do not show up well in
X-ray images.
6. It is portable.
1. Gassy and bony structures are not easy to image and need the use of specialized procedures.
2. Interpretation of the image is operator skill-dependent.
Chapter 3
3.1 Background
Noise is a common problem in any imaging system. Noise is generally unwanted data
which decreases contrast in the image or causes blurring of edges. While most of the time
noise is due to the environment, many times it can be due to the physical nature of the
system. Noise is broadly of two types: additive and multiplicative. Whereas additive
noise mainly depends on the system and can be removed by modeling the system, multi-
plicative noise is dependent on the image and thus not easy to model.
In ultrasound images, the most dominant noise is speckle noise, which is multiplicative
in nature. It appears as a granular pattern because the image is formed under coherent
waves. Speckle occurs due to diffuse scattering, which happens when the sound wave
interferes with small objects of size comparable to the sound wavelength.
Speckle noise removal is mainly important for the following reasons:
1. For enhancing visualization, i.e. to improve the human interpretation of the images.
But the following factors need to be taken into account when designing despeckling algo-
rithms:
1. Speckle noise, though regarded as noise, may carry useful diagnostic information
for the radiologist.
3.2 Various speckle removal algorithms
The most basic filter used for denoising is the mean filter. It has the effect of blurring the
edges [8]. The mean filter is suitable for additive Gaussian noise, but not for speckle
noise, which obeys a multiplicative model with non-Gaussian statistics. Some modifications
of mean filtering have been proposed in the literature [9]. The Lee filter takes a convex
combination of the pixel intensity and the average intensity within a given window. Given
an input image I(x, y), the output image Y(x, y) is computed as

Y(x, y) = W(x, y)\, I(x, y) + \big(1 - W(x, y)\big)\, I'(x, y)    (3.1)

where I'(x, y) is the mean intensity within a certain window region and W(x, y) is an
adaptive coefficient.
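A minimal NumPy/SciPy sketch of a Lee-type filter in the form of eq. (3.1) is given below; the window size and the global estimate of the noise variance are assumptions of this sketch, not values taken from [9].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7):
    """Lee filter as a convex combination of I(x, y) and its local mean, eq. (3.1)."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=win)              # I'(x, y)
    local_var = uniform_filter(img ** 2, size=win) - local_mean ** 2

    noise_var = np.mean(local_var)                          # crude global noise estimate
    W = local_var / (local_var + noise_var + 1e-12)         # adaptive coefficient W(x, y)
    return W * img + (1.0 - W) * local_mean                 # Y(x, y)
```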
Other simple filtering methods have been proposed [10, 11]. In [10], the 2-D FFT of the
ultrasound image is taken and local peaks are detected in the frequency domain.
After removing the local peaks, the inverse 2-D FFT is taken. But this method does not deal
with the multiplicative nature of speckle noise. In [11] the multiplicative model of speckle noise
is converted into an additive model by taking the logarithm of the original speckled image. Then
the DWT of the image is taken, the wavelet coefficients are thresholded and the inverse DWT is
performed. Finally the exponential is computed to obtain the denoised image.
The most basic non-linear filter is the median filter. It produces a less blurred image in
comparison to mean filtering. It is very simple to implement, but still quite robust
in performance [8]. Various variants of median filtering have also been proposed in the
literature [12, 13, 14]. In [12] an adaptive weighted median filter has been proposed which
uses the ratio of mean to variance as an index of how uniform the pixel values in an
area are. If the mean to variance ratio is small, it is assumed that the central pixel is
in the vicinity of an image discontinuity, so additional weight is given to the central pixel.
If the ratio is found to be large, then it is assumed that the region consists of nearly
uniform pixel values, and distant pixels are given greater weight for estimating the median
value. In [13], at every point of the original ultrasound image, a set of median values is
computed by taking the median of the pixel intensities that lie along a stick.
Then the final despeckled image is formed by selecting the largest median value at
each point. In [14] rough set theory has been applied along with median filtering. In this
method, a 3 × 3 window is considered around each pixel, taking it as the central pixel. If the
central pixel has the maximum or minimum value in the given window, then this pixel is
considered a noise pixel, and median filtering is applied only to those pixels. The major
drawback of using any median filtering based method is the high computational cost:
sorting N pixels requires a time complexity of O(N log N) [15]. Homogeneous mask area
filtering (HMAF) can also be used for speckle reduction. In homogeneous mask area filtering,
the most homogeneous neighborhood around a pixel at location (i, j) is found using 3 × 3
subset windows within a 5 × 5 neighborhood [16]. A speckle index for each of the nine 3 × 3
windows is computed by taking the ratio of the variance and the mean. The minimum of
those values is found, and the mean value corresponding to that 3 × 3 subset window is
assigned to the pixel under consideration.
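The following is a hedged sketch of the HMAF scheme just described, assuming 3 × 3 sub-windows inside a 5 × 5 neighbourhood; it is an illustration rather than the reference implementation of [16].

```python
import numpy as np
from scipy.ndimage import shift, uniform_filter

def hmaf(img):
    """Assign to each pixel the mean of the most homogeneous 3x3 window in its 5x5 neighbourhood."""
    img = img.astype(np.float64)
    mean3 = uniform_filter(img, size=3)                         # mean of every 3x3 window
    var3 = uniform_filter(img ** 2, size=3) - mean3 ** 2
    speckle_idx = var3 / (mean3 + 1e-12)                        # variance / mean per window

    best = np.full(img.shape, np.inf)
    out = img.copy()
    for di in (-1, 0, 1):                                       # 9 candidate 3x3 sub-windows
        for dj in (-1, 0, 1):
            idx = shift(speckle_idx, (-di, -dj), order=0, mode="nearest")
            mu = shift(mean3, (-di, -dj), order=0, mode="nearest")
            better = idx < best
            best[better] = idx[better]
            out[better] = mu[better]                            # mean of the most homogeneous window
    return out
```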
Farzana et al. have proposed an adaptive bilateral filtering based method [17]. The bilateral
filter consists of a domain filter and a range filter; it is a non-linear, noise-reducing filter with
edge-preserving capability.
The squeeze box filter has also been used for despeckling [18, 19]. Tay et al. proposed the
method, which considers local extrema of the B-mode image f(i, j) as outliers. But
the method is iterative and hence not suitable for real time implementation, or it must be
restricted to a few iterations, which will affect the performance. Yu and Acton [20] proposed
an anisotropic diffusion based technique which is also iterative in nature. Results of various
speckle reduction techniques on a noisy ultrasound phantom are shown in Fig. 3.1.
3.3.1 SNR
SNR directly computes the ratio of the variance of the signal to that of the noise. A higher
SNR indicates a better despeckling algorithm.

SNR = 10 \log_{10}\left( \frac{\sigma_f^2}{\sigma_e^2} \right)    (3.2)

where \sigma_f^2 and \sigma_e^2 are the variances of the original noise free image and the error image
(the difference between the original noise free image and the despeckled image) respectively.
3.3.2 MSE
MSE is used as an index to estimate the difference between the original image and the denoised
image. If f and g denote the original noiseless image and the denoised image
respectively, then

MSE = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \big( f(i, j) - g(i, j) \big)^2    (3.3)

where M and N represent the number of rows and columns of the image, i.e. MN is the size
of the image.
3.3.3 RMSE
RMSE is calculated by taking the square root of the MSE.

RMSE = \sqrt{MSE}    (3.4)
3.3.4 PSNR
PSNR can be regarded as the ratio between the power of a signal and the power of the
corrupting noise. PSNR in decibels is expressed as

PSNR = 10 \log_{10}\left( \frac{f_{max}^2}{MSE} \right)    (3.5)

where f_{max} is the maximum possible pixel intensity.

3.3.5 Correlation coefficient
The correlation coefficient (COC) between the noise free image f and the despeckled image g is

COC = \frac{\sum (f - \bar{f})(g - \bar{g})}{\sqrt{\sum (f - \bar{f})^2 \, \sum (g - \bar{g})^2}}    (3.6)

3.3.6 Speckle index
The speckle index is computed as the average ratio of the local standard deviation to the
local mean,

SI = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \frac{\sigma(i, j)}{\mu(i, j)}    (3.7)

where \sigma(i, j) and \mu(i, j) represent the standard deviation and mean corresponding to a
certain window centred at (i, j). M and N represent the number of rows and columns of the
image, i.e. MN is the size of the image.
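A small Python helper computing the measures defined in (3.2)–(3.7) might look as follows; the local window size used for the speckle index and the value of f_max are assumptions of the sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def despeckling_metrics(f, g, f_max=255.0, win=3):
    """f: noise-free reference image, g: despeckled image (same shape)."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    err = f - g
    mse = np.mean(err ** 2)                                        # eq. (3.3)
    rmse = np.sqrt(mse)                                            # eq. (3.4)
    snr = 10.0 * np.log10(np.var(f) / (np.var(err) + 1e-12))       # eq. (3.2)
    psnr = 10.0 * np.log10(f_max ** 2 / (mse + 1e-12))             # eq. (3.5)
    coc = np.sum((f - f.mean()) * (g - g.mean())) / np.sqrt(
        np.sum((f - f.mean()) ** 2) * np.sum((g - g.mean()) ** 2)) # eq. (3.6)

    mu = uniform_filter(g, size=win)                               # local statistics for eq. (3.7)
    sigma = np.sqrt(np.maximum(uniform_filter(g ** 2, size=win) - mu ** 2, 0.0))
    si = np.mean(sigma / (mu + 1e-12))
    return {"MSE": mse, "RMSE": rmse, "SNR": snr, "PSNR": psnr, "COC": coc, "SI": si}
```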
Figure 3.1: Ultrasound phantom image and results after applying various speckle reducing
techniques: (a) input noisy ultrasound phantom, (b) mean filtered output, (c) median filtered
output, (d) modified median filtered [14] output, (e) HMAF [16] output, (f) squeeze box [18]
filtered output.
The MSE, RMSE, SNR and PSNR values for the different speckle reduction techniques tested
on the "Shepp-Logan" phantom image have been listed in Table 3.1.
Method                              MSE     RMSE    SNR     PSNR
Mean filtering                      0.011   0.11    5.50    67.49
HMAF [16]                           0.009   0.09    6.62    68.56
Median filtering                    0.006   0.08    8.36    70.25
Modified median filtering [14]      0.004   0.06    10.12   72.00
Squeeze box filtering [18]          0.002   0.04    13.28   75.26
Adaptive bilateral filtering [17]   0.002   0.04    13.28   75.26
Table 3.1: Estimated statistical parameters for various speckle reduction techniques for the
"Shepp-Logan" image
Chapter 4
Computer aided diagnosis (CAD) refers to procedures which use computing devices to assist
in the analysis / interpretation of medical images. Ultrasound image processing based
computer aided diagnosis can be approached in two ways: the explicit (rule based) approach
and the machine learning based approach.
In the explicit approach, some particular rules are known to the algorithm developer or
some rules are derived by experimenting with the available images / data. The algorithm
developer uses that rule set to formulate an explicit algorithm. In this method the
algorithm developer depends on his/her knowledge about the domain to formulate the
rules. Then those rules are used for designing the required CAD application. Some of
the examples where such an approach has been partially / fully used are [21, 22, 23, 24, 25].
In [21], a computerized breast lesion segmentation technique has been proposed. In [22],
an algorithm for tumor detection in breast ultrasound images has been proposed. The
breast images are normalized, followed by fuzzification. Then the image is enhanced and an
optimum threshold is estimated which is used to binarize the image. The watershed transform
is applied to the binarized image to obtain the final tumor boundary. In [23], Hafizah et al.
have experimented with four different groups of kidney ultrasound images: normal,
bacterial infection, cystic and kidney stone. They have estimated different intensity
histogram features and different gray level co-occurrence matrix (GLCM) features for the
different classes. Based on those features the authors have tried to conclude which features
are best suited to differentiate between the categories. While their work mainly derives
a set of rules, it can also be used further for machine learning / ANN based systems.
In [24], Raja et al. have considered three different classes of kidney:
normal, cystic and renal disease. The authors have obtained a range of values for five different
parameters for the three different cases of kidney. In [25], Usha and Sandya have proposed
an algorithm for measurement of ovarian shape and size.
We have implemented algorithms for the following tasks based on the explicit approach:
• Carotid artery detection
• Cyst localization
• Gallstone detection
i. The target object’s filled area is calculated. Filled area does not include any other
object which is completely enclosed by our target object, but not a part of the target
object. Let filled area be a.
iv. Major axis length of the ellipse which has the same normalized second central mo-
ments as the given object is calculated. Let major axis length be l1 . Similarly the
minor axis length is evaluated. Let the minor axis length be l2 .
The proposed roundness metric can be minimized to assess the roundness of any given
object.
i. Let the input image be Iartery1. A histogram equalization operation is applied on Iartery1
to obtain Iartery2.
ii. Then the image is binarized and inverted to obtain Iartery3. So, darker regions in
the original image correspond to the white pixels in the binarized image and whiter
regions in the original image correspond to the black pixels in the binarized image.
iii. A morphological closing operation is applied on the image Iartery3 to obtain Iartery4. It
is performed successively with two different structuring elements: first with a circular
disk as the structuring element and then with a line as the structuring element with
varying angle (0° to 90°).
iv. All the connected white components are detected in Iartery4. A connected component
is a group of spatially connected adjacent pixels which have the same intensity value (either
all white or all black) in the binarized image. Let the connected components be Ci
where i varies from 1 to Ncount.
v. A value tmaxlength is calculated as 0.3 times the maximum of the width and height of the
image. This value is used as a threshold. The connected components having a major axis
length greater than this threshold value are removed from further consideration. This
follows from the fact that the region of interest (here the artery) cannot occupy a very
large area of the ultrasound image; if the target object were indeed occupying such a
major chunk of the ultrasound image, then we would not have been interested in
localizing it. Let the remaining connected components be Ci where i varies from 1 to
Mcount and Mcount ≤ Ncount.
(a) Only those connected components which have an area (pixel count) greater than
tarea are kept in consideration.
(b) The roundness metric mround is evaluated for all connected components in consid-
eration. Let the index of the connected component having the best roundness metric
value (i.e. minimum) be ibest.
(c) The counter corresponding to ibest is increased by one, i.e. gibest = gibest + 1.
vii. The connected component with the highest value of g is declared as the region correspond-
ing to the detected carotid artery. In case of a tie between two regions, the region with
more pixels is declared as the desired region.
viii. The centroid of the identified connected component is calculated to estimate the center
of the artery.
ix. The average of the major axis length and the minor axis length of the ellipse having the
same normalized second central moments as the identified connected component is
calculated to estimate the radius of the artery area. A sketch of these steps is given below.
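The sketch below illustrates the single-frame core of the steps above under several simplifications: only a disk structuring element is used in the closing step, a generic circularity measure 4πA/P² (higher is rounder) stands in for the thesis roundness metric, the minimum-area threshold is an assumed value, and the frame-wise voting over the video sequence is omitted.

```python
import numpy as np
from skimage import exposure, filters, measure, morphology

def artery_candidate(img):
    """Return ((cx, cy), radius) of the roundest dark blob in one frame, or None."""
    eq = exposure.equalize_hist(img)                                  # step i
    inverted = eq < filters.threshold_otsu(eq)                        # step ii: dark -> True
    closed = morphology.binary_closing(inverted, morphology.disk(3))  # step iii (disk only)

    labels = measure.label(closed)                                    # step iv
    t_maxlength = 0.3 * max(img.shape)                                # step v threshold
    best, best_round = None, -1.0
    for region in measure.regionprops(labels):
        if region.major_axis_length > t_maxlength or region.area < 50:
            continue                                                  # too large / too small
        roundness = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-12)
        if roundness > best_round:                                    # keep the roundest candidate
            best, best_round = region, roundness

    if best is None:
        return None
    cy, cx = best.centroid                                            # step viii
    radius = (best.major_axis_length + best.minor_axis_length) / 4.0  # step ix: mean semi-axis
    return (cx, cy), radius
```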
4.1.3 Result
The proposed algorithm has been tested on images from the database which has been used
in [26, 27]. The algorithm has been tested on a total of 283 images from that database,
both with and without applying any noise removal step.
Figure 4.1: Two different instances of artery detection. Detected artery shown by red circle.
When no noise removal step was applied, the artery was correctly identified in 188 images,
thus yielding an accuracy of 66.43%. The algorithm has also been tested with two types of
noise removal step, median filtering and adaptive bilateral filtering. When median filtering
was applied, the artery was detected in only 180 images, and with adaptive bilateral filtering
187 images yielded the expected output. Thus, speckle removal did not improve the
performance and actually degraded it in the case of median filtering. This might be due to
the fact that the roundness of the artery is negatively affected / reduced by the noise
removal step.
• A cyst generally appears very dark, i.e. the pixel values corresponding to a cyst are very
low.
ii. A threshold value is calculated by estimating the 20th percentile of all the pixel
values present in the image. The image is binarized with this calculated threshold
value to obtain Icyst2.
iii. The binarized image is inverted to obtain Icyst3 . So, darker regions in the original
image correspond to the white pixels in the binarized image and whiter regions in the
original image correspond to the black pixels in the binarized image.
iv. Morphological closing and opening operations are successively applied on the bina-
rized image to obtain Icyst4 .
vi. Among the remaining white connected regions, roundness metric mround is calculated
for all the regions.
Figure 4.2: Detected cyst shown in red
vii. The region with the best (i.e. minimum) value of the roundness metric is declared to be
the region corresponding to the cyst.
viii. Centroid of the identified connected component is calculated to estimate the center
of the cyst area.
ix. The major axis length and minor axis length of the ellipse having same normalized
second central moments as the identified connected component is used along with
identified center to define an ellipse. This is defined to be the region corresponding
to the cyst.
As no publicly available database for cysts exists to the best of our knowledge, the algorithm
was tested on a few images collected from the internet.
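A short sketch of the thresholding, morphological clean-up and roundness-based selection steps is given below; as in the artery sketch, 4πA/P² is used as a stand-in roundness measure and the minimum-area value is an assumption.

```python
import numpy as np
from skimage import measure, morphology

def localize_cyst(img, min_area=100):
    """Return (cx, cy, major_axis, minor_axis) of the roundest dark blob, or None."""
    dark = img <= np.percentile(img, 20)                     # steps ii-iii: 20th percentile, dark -> True
    clean = morphology.binary_opening(
        morphology.binary_closing(dark, morphology.disk(3)), morphology.disk(3))  # step iv

    best, best_round = None, -1.0
    for region in measure.regionprops(measure.label(clean)):
        if region.area < min_area:
            continue
        roundness = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-12)
        if roundness > best_round:
            best, best_round = region, roundness
    if best is None:
        return None
    cy, cx = best.centroid                                   # step viii
    return cx, cy, best.major_axis_length, best.minor_axis_length  # step ix: defining ellipse
```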
• It lies within the gall bladder which generally looks darker in the ultrasound images.
Using the above properties, the following algorithm has been devised for gallbladder stone
detection:
Figure 4.3: Illustration of gallstone detection: (a) input image; (b) output image (pixels
probable to be gallstone shown in white).
ii. Median filtering is applied on Igall1 to remove local variation. The resulting image is
Igall2.
iii. Canny edge detection is applied on Igall2 to obtain the edge detected image Igall3.
iv. A morphological closing operation is applied on Igall3 to obtain Igall4. The structuring
element used is a line with varying angle (0° to 90°).
v. White connected regions with very small filled area are removed from the image to
obtain Igall5.
vi. The connected components are detected from the inverted version of the image Igall5,
i.e. dark connected components are detected. Let the dark connected component
detected image be Igall6.
vii. Now, for any white pixel in the image Igall5, we traverse from the same pixel co-
ordinate in Igall6 towards the left to reach the first dark connected component, i.e. the
first pixel with value one in Igall6. Similarly, we traverse towards the right to find the
first dark connected component on the right. If these two pixels belong to the same
dark connected component, then the pixel in Igall5 is assumed probable to be a part
of a gallstone. A sketch of this traversal test is given below.
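The sketch below illustrates the left/right traversal test of step vii; here bright_mask corresponds to Igall5 and dark_labels to a labelled version of the dark components of Igall6 (both names belong to this sketch, not to the thesis code).

```python
import numpy as np

def first_label_left(labels, r, c):
    """Label of the first dark component to the left of (r, c), or 0 if none."""
    for cc in range(c - 1, -1, -1):
        if labels[r, cc] > 0:
            return labels[r, cc]
    return 0

def first_label_right(labels, r, c):
    """Label of the first dark component to the right of (r, c), or 0 if none."""
    for cc in range(c + 1, labels.shape[1]):
        if labels[r, cc] > 0:
            return labels[r, cc]
    return 0

def gallstone_candidates(bright_mask, dark_labels):
    """Keep a bright pixel only if the same dark component lies on both its sides."""
    out = np.zeros_like(bright_mask, dtype=bool)
    rows, cols = np.nonzero(bright_mask)
    for r, c in zip(rows, cols):
        left = first_label_left(dark_labels, r, c)
        right = first_label_right(dark_labels, r, c)
        out[r, c] = left != 0 and left == right
    return out
```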
As no publicly available database for gallstones exists to the best of our knowledge, the
algorithm was tested on a few images collected from the internet.
Chapter 5
Machine learning can be used in CAD whenever we cannot directly write an explicit
program to solve a given problem. The relation between the input image and the output
inference is sometimes intricate, and mere human observation is not sufficient to find out such
relations. Moreover, even if some complicated relations are discovered through rigorous
observation, those relations may not generalize well. The relationship between input and
output may hold differently at different times / in different environments. So, rather than
writing different explicit programs for all such circumstances, it is preferable to design
general purpose systems which adapt to the circumstances.
Machine learning methods can be classified into various groups like supervised learning,
unsupervised learning and semi-supervised learning. If a substantially large dataset (sets
of input images and the inferences drawn from them) is available to us, then supervised
learning is most suitable for use in CAD applications. A supervised learning algorithm can
learn from the given example data. It learns a mapping from the input to the output while
a supervisor provides the correct values [29]. In unsupervised learning, only the input data
is given and the regularity / statistical similarity in it is found by the machine learning
system.
While designing a machine learning based system, some of the important factors to
be taken into account are:
2. Dimensionality reduction
5.1 Features for classification
To develop an ultrasound image processing based CAD system, selection of suitable texture
features is required. Perception of texture depends on the spatial arrangement of intensity
values. Texture is characterized by the spatial distribution of intensity values in a
neighbourhood and so it is an area attribute.
Some of the popularly used texture features are discussed below.

5.1.1 First order statistics
Let p(I) denote the normalized intensity histogram, i.e. the probability of occurrence of gray
level I in the image or window under consideration. The mean gray level is

m_1 = \sum_{I=0}^{N_g - 1} I \, p(I)

where N_g is the number of possible gray levels. Similarly, the variance, skewness and kurtosis
related moments can be defined as [31]:

\mu_2 = \sum_{I=0}^{N_g - 1} (I - m_1)^2 \, p(I), \qquad
\mu_3 = \sum_{I=0}^{N_g - 1} (I - m_1)^3 \, p(I), \qquad
\mu_4 = \sum_{I=0}^{N_g - 1} (I - m_1)^4 \, p(I)
All these first order statistical features have different physical significance. The variance
can be interpreted as a measure of the deviation of the intensity level from the mean
intensity level. The skewness can be interpreted as a measure of histogram asymmetry
around the mean [31].
To capture the first order statistics, the semivariogram approach can also be used [32].
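A minimal sketch computing the histogram based features above is given below; the number of gray levels is an assumption.

```python
import numpy as np

def first_order_features(img, n_gray=256):
    """Mean and central moments (variance, skewness, kurtosis related) from the gray-level histogram."""
    hist, _ = np.histogram(img.ravel(), bins=n_gray, range=(0, n_gray))
    p = hist / max(hist.sum(), 1)                  # normalized histogram p(I)
    levels = np.arange(n_gray)

    m1 = np.sum(levels * p)                        # mean
    mu2 = np.sum((levels - m1) ** 2 * p)           # variance
    mu3 = np.sum((levels - m1) ** 3 * p)           # third central moment (skewness)
    mu4 = np.sum((levels - m1) ** 4 * p)           # fourth central moment (kurtosis)
    return m1, mu2, mu3, mu4
```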
5.1.2 Gray-level co-occurrence matrix
Though the features generated from the first order gray level statistics provide information
related to the gray level distribution of the image, they do not provide any information
regarding the relative position of the various gray levels within the image [31]. To exploit
the information embedded in the gray level configuration, various second order statistical
features are defined. The gray-level co-occurrence matrix (GLCM) is the most popular
among those.
The GLCM can be defined as a matrix of relative frequencies Pθ,d(I1, I2) which describes how
frequently two pixels with gray levels I1, I2 appear in the window separated by distance
d in direction θ. Thus the co-occurrence matrix is a function of two parameters, the distance
between the two pixels d and the relative orientation θ. Generally, θ = 0°, 45°, 90°, 135° is
considered. Various features can be defined based on the GLCM.
Haralick et al. have described a set of textural features based on the GLCM. Some important
features among these are Angular Second Moment (ASM) or energy, contrast, correlation,
homogeneity and entropy. These features are defined as below [31]:
ASM = \sum_{I_1, I_2} P(I_1, I_2)^2

Contrast = \sum_{I_1, I_2} |I_1 - I_2|^2 \, P(I_1, I_2)

Homogeneity = \sum_{I_1, I_2} \frac{P(I_1, I_2)}{1 + |I_1 - I_2|^2}

Entropy = -\sum_{I_1, I_2} P(I_1, I_2) \log P(I_1, I_2)
ASM measures the smoothness of the image. It is the sum of the squared elements in
the GLCM; the smoother the image, the higher the value of the ASM. This is also called
energy. It measures textural uniformity, and has a high value when the gray level distribution
has a constant or periodic form. Contrast measures local variations and takes high values
when there is high local variation. Homogeneity takes high values for low-contrast images.
Entropy is an index of randomness and it takes low values for smooth images; it measures the
complexity of the image. Entropy is inversely correlated to energy. GLCM based
features have been used in many works related to ultrasound image processing [33, 30, 23].
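A minimal sketch of GLCM feature extraction along these lines, using scikit-image, is given below. The function names graycomatrix / graycoprops follow recent scikit-image releases (older releases spell them greycomatrix / greycoprops), and the quantization to 64 gray levels is an assumption of the sketch.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=64):
    """64-dimensional GLCM feature vector: 4 properties x 4 distances x 4 angles."""
    q = np.floor(window.astype(np.float64) / max(window.max(), 1) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1, 2, 3, 4],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```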
local surface at a finer scale resembles the global surface at a coarser scale [34]. The fractal
dimension of an object is greater than its Euclidean dimension. The fractal dimension can be
regarded as an indicator of surface roughness. It has been used in breast ultrasound image
classification [35]. Various features can be computed from the concept of fractals. One such
feature set is Segmentation-based Fractal Texture Analysis (SFTA), proposed by Costa et al.
in [36]. To extract SFTA features, the input image is decomposed into a set of binary images
by the two-threshold binary decomposition algorithm. From the set of binary images, the
fractal dimensions of the resulting regions are computed to describe the texture pattern.
window size were extracted (from 8 pixels × 8 pixels to 25 pixels × 25 pixels). Then the
resulting vector of 64–625 dimensions was projected using PCA into spaces ranging from
10 to 50 dimensions.
• Breast ultrasound image classification [44, 45].
1. SVM has the flexibility to deal with high-dimensional data, which is important in our
case. The upper bound on the generalization error does not depend on the dimension
of the feature space in the case of SVM. If the four first order features mentioned and 14
Haralick features are taken at four different angles and four different distances, then
the data would have a dimension of 228. Hence our data would have high dimension,
and this is where SVM plays its role.
2. SVM shows high accuracy compared to most other supervised learning algorithms.
Let there be training data with two classes, and let the labels for the classes be +1 (assume,
corresponding to cancerous) and −1 (assume, corresponding to non-cancerous). Let the
data be {x_i, y_i}, i = 1, ..., N. The inner product is defined as

w^T x = \sum_i w_i x_i    (5.3)

and the linear discriminant function is

f(x) = w^T x + b    (5.4)
The boundary between the positive and the negative regions is called the decision boundary
of the classifier, which is defined by the hyperplane

\{ x : f(x) = w^T x + b = 0 \}    (5.5)

A maximum margin classifier can be defined as a function which maximizes the geometric
margin [50]. Maximizing the geometric margin amounts to minimizing ||w||^2. So we need
to minimize \frac{1}{2} ||w||^2 subject to

y_i (w^T x_i + b) \geq 1 \quad \forall i = 1, 2, ..., n    (5.7)

This constraint requires every example to be classified correctly, which is possible only for
completely separable data. For data which is not completely separable, i.e. when some points
would be misclassified even in the training data, a greater margin can often be achieved by
allowing the classifier to misclassify a few points. Thus, to allow errors, we introduce slack
variables \xi_i and modify (5.7) as

y_i (w^T x_i + b) \geq 1 - \xi_i \quad \forall i = 1, 2, ..., n    (5.8)

The objective to be minimized is modified accordingly, with a new term introduced to
penalize misclassification and margin errors. So we now minimize, with respect to w and b,

\frac{1}{2} ||w||^2 + C \sum_{i=1}^{n} \xi_i

subject to the constraint (5.8).
The above formulation is called the soft margin SVM. The above optimization problem is
quadratic and convex, and can be solved using standard convex optimization techniques such
as Lagrange multipliers [51] or Sequential Minimal Optimization (SMO). Using the method of
Lagrange multipliers, we can obtain the dual formulation of the above problem, which is
expressed in terms of the variables \alpha_i [50]:

\max_{\alpha} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} y_i y_j \alpha_i \alpha_j x_i^T x_j    (5.9)

subject to \sum_{i=1}^{n} y_i \alpha_i = 0 and 0 \leq \alpha_i \leq C.
The examples x_i for which \alpha_i > 0 are the points on the margin and are called the support
vectors. This dual formulation depends on the data only through dot products. The dot
product can be replaced with a non-linear kernel function, and in this way the non-linearly
separable case can be handled. The basic kernels used are [52]:
1. Linear: k(x_i, x_j) = x_i^T x_j
2. Radial basis function (RBF): k(x_i, x_j) = \exp(-\gamma ||x_i - x_j||^2), where \gamma is the
kernel width parameter referred to in the experiments below.
i. Feature values of the training data are normalized.
ii. Feature values of the test data are also normalized with the same values as the training
data.
iii. PCA is applied on the training data, and the training data is thus transformed to a new
feature set. Only those components which account for 90% of the variance in the training
data are retained; the other features are deleted. The test data is also transformed using
the same vectors used to transform the training data, and the same set of features as for
the training data is retained in the test data.
iv. At the end of each training sample, we append its label (converted to a numeric value),
i.e. +1 for one class and −1 for the other. At the end of each test sample, we append a
label 0. Here it is to be noted that 0 is the mean of +1 and −1. This numeric value is now
treated as an additional feature of the data. While for the training data the values indicate
a clear separation, the value 0 indicates the uncertainty of the test data.
v. Now the training data and test data are combined and the k-Means algorithm is applied
on the combined set with k = 2.
vi. In one of the clusters more training data with label equal to +1 will be present; test data
present in that cluster are assigned label +1. In the other cluster more training data with
label equal to −1 will be present; test data present in that cluster are assigned label −1.
Instead of using PCA, kernel PCA, which is a kernelized version of PCA [29], can also be
used in this method. A sketch of this scheme is given below.
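A hedged scikit-learn sketch of this scheme follows; the use of StandardScaler for the normalization step and the fixed random seed are assumptions of the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def kmeans_classify(X_train, y_train, X_test):
    scaler = StandardScaler().fit(X_train)                          # steps i-ii: normalization
    pca = PCA(n_components=0.90).fit(scaler.transform(X_train))     # step iii: keep 90% variance
    Z_train = pca.transform(scaler.transform(X_train))
    Z_test = pca.transform(scaler.transform(X_test))

    # Step iv: append +1/-1 for training samples, 0 (uncertain) for test samples.
    A = np.hstack([Z_train, y_train.reshape(-1, 1).astype(float)])
    B = np.hstack([Z_test, np.zeros((Z_test.shape[0], 1))])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.vstack([A, B]))  # step v
    train_assign = km.labels_[:len(A)]
    test_assign = km.labels_[len(A):]

    # Step vi: each cluster takes the majority training label.
    cluster_label = {}
    for c in (0, 1):
        labels_in_c = y_train[train_assign == c]
        cluster_label[c] = 1 if (labels_in_c == 1).sum() >= (labels_in_c == -1).sum() else -1
    return np.array([cluster_label[c] for c in test_assign])
```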
positive data and negative data may lie in two or more different zones of the feature space,
where the zones are not related to each other. So we propose to use an RBF kernel SVM
together with the k-Means algorithm. The algorithm we have used is described as follows:
Training Phase
i. The training data is partitioned into k clusters using the k-Means algorithm.
ii. Each such cluster is separately trained with an SVM with RBF kernel.
Test Phase
i. The distance of the test data from the centroids of all the k clusters is measured. The
cluster whose centroid is nearest is assumed to be the cluster to which the test data belongs.
ii. The test data is tested with the corresponding SVM of that cluster and the output
label is predicted. A sketch of this scheme is given below.
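A hedged scikit-learn sketch of the combined scheme follows; the values of k, C and γ are illustrative, and the sketch assumes that every cluster contains training samples of both classes (otherwise the per-cluster SVM cannot be trained).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

class ClusteredSVM:
    def __init__(self, k=2, C=0.1, gamma=0.5):
        self.km = KMeans(n_clusters=k, n_init=10, random_state=0)
        self.svms = {}
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        assign = self.km.fit_predict(X)                        # training phase, step i
        for c in np.unique(assign):
            # Assumes both classes occur in every cluster.
            self.svms[c] = SVC(kernel="rbf", C=self.C, gamma=self.gamma)
            self.svms[c].fit(X[assign == c], y[assign == c])   # step ii: one SVM per cluster
        return self

    def predict(self, X):
        assign = self.km.predict(X)                            # test phase, step i: nearest centroid
        return np.array([self.svms[c].predict(x.reshape(1, -1))[0]
                         for c, x in zip(assign, X)])          # step ii
```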
5.4 Results
Publicly available databases of ultrasound images are very scarce. We have used the
prostate database which has been used in [39]. It is a prostate cancer database. In the
database, along with the images, two points for each image have been provided which
define a region of interest (ROI). In [39], the pixels in that ROI are analyzed.
In our work, we have tried to differentiate between prostatitis and tumor in an advanced
stage. Henceforth we refer to advanced stage tumor as cancer. Prostatitis is a medical
complication involving inflammation of the prostate.
We have used sensitivity and specificity to evaluate the performance of the system.
Suppose we have two classes, C1 and C2, and our prime objective is to identify C1. Then
we define the sensitivity (S1) as

S_1 = \frac{\text{Number of occurrences correctly detected as } C_1}{\text{Number of actual occurrences of } C_1}    (5.11)

and the specificity (S2) as

S_2 = \frac{\text{Number of occurrences correctly detected as } C_2}{\text{Number of actual occurrences of } C_2}    (5.12)

Sensitivity is also known as the true positive rate. In our case, if we define C1 as cancer
and C2 as prostatitis, then the sensitivity becomes

S_1 = \frac{\text{Number of occurrences correctly detected as cancer}}{\text{Number of actual occurrences of cancer}}    (5.13)

and the specificity becomes

S_2 = \frac{\text{Number of occurrences correctly detected as prostatitis}}{\text{Number of actual occurrences of prostatitis}}    (5.14)
We define another parameter, Stotal, as

S_{total} = S_1 + S_2    (5.15)

For an ideal classification system, the sum of sensitivity and specificity (Stotal) would be
equal to two, as both are expected to be one for an ideal classification system. But an ideal
classification system does not depend only on the classifier, but also on the classes to be
classified: the classes should be completely separable to yield a sum of sensitivity and
specificity of two. In most practical systems, classes are not completely separable.
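A small helper computing S1, S2 and Stotal from predicted and true labels (+1 for cancer, −1 for prostatitis) is sketched below.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Return (S1, S2, S_total) for labels +1 (cancer) / -1 (prostatitis)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    s1 = np.mean(y_pred[y_true == 1] == 1)     # sensitivity, eq. (5.13)
    s2 = np.mean(y_pred[y_true == -1] == -1)   # specificity, eq. (5.14)
    return s1, s2, s1 + s2                     # S_total, eq. (5.15)
```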
The algorithms have been tested with window sizes (wsize) equal to 7, 15, 31 and 121. Four
GLCM features (contrast, correlation, energy, homogeneity) were calculated at d = 1, 2, 3, 4
and at varying angles θ = 0°, 45°, 90°, 135°. Thus we get 64 features. We defined a
straight line joining the two points given in the database which define the ROI of an
image. For wsize = 7, 15, 31, points were sampled along this straight line, windows were
centered on those points, and the features were calculated. Then the median and standard
deviation of all those features were calculated to get a feature vector of size 128 for an
image. 45 images per class were used for training and the rest of the images were used for
testing.
The algorithms failed to perform in a satisfactory manner with window size 7. When
using window size 15, applying SVM with RBF kernel and C = 0.1, γ = 0.5, a sensitivity of
0.65 and a specificity of 0.53 were attained. kNN failed to perform satisfactorily with window
size 15: with k = 3, a sensitivity of 0.30 and a specificity of 0.74 were attained. When using
window size 31, applying SVM with RBF kernel and C = 0.1, γ = 0.5, a sensitivity of 0.75
and a specificity of 0.47 were attained. kNN did not yield satisfactory results with this window
size either.
When using wsize = 121, only one window was defined per image, centered on the
middle point of the straight line joining the two given points in the database. A feature
vector of length 64 was defined for this window. When using kNN with k = 3, a sensitivity
of 0.60 and a specificity of 0.47 were attained; with k = 5, a sensitivity of 0.7 and a specificity
of 0.52 were attained. The variation of Stotal with k has been shown in table 5.1. When using
SVM with linear kernel and C = 0.1, a sensitivity of 0.65 and a specificity of 0.66 were
attained; with C = 0.2, a sensitivity of 0.72 and a specificity of 0.5 were attained. When
using the RBF kernel, with C = 0.1, γ = 0.5, a sensitivity of 0.70 and a specificity of 0.65
were attained; with C = 0.1, γ = 0.7, a sensitivity of 0.70 and a specificity of 0.73 were
attained. When using the k-Means based method described in section 5.3.3, a sensitivity of
0.70 and a specificity of 0.58 were attained. Using the method combining SVM and k-Means
described in section 5.3.4, for C = 0.1, γ = 0.5, a sensitivity of 0.90 and a specificity of 0.45
were attained. The variation of the sum of sensitivity and specificity (Stotal) with γ when C
is fixed at 0.1 has been shown in table 5.2, and the variation of Stotal with C when γ is fixed
at 0.5 has been shown in table 5.3. The same set of experiments was performed with PCA
applied, transforming the feature vector to length 15, but that did not induce any significant
change in the performance.
The SFTA features [36] were also calculated and appended to the calculated GLCM
features, but the performance did not show any improvement. Using the GLCM features and
the SFTA features together, with SVM with RBF kernel, C = 0.1 and γ = 0.5, a sensitivity
of 0.75 and a specificity of 0.56 were attained. With the linear kernel and C = 0.1, a
sensitivity of 0.70 and a specificity of 0.55 were attained. With kNN and k = 3, a sensitivity
of 0.6 and a specificity of 0.47 were attained. With the k-Means based method described in
section 5.3.3, a sensitivity of 0.65 and a specificity of 0.60 were attained. Thus we conclude
that adding the SFTA features did not yield any additional benefit in this case. In fact the
performance degraded a bit, which might be due to the Hughes phenomenon [38]. Similarly,
combining the features from window sizes 15 and 31 with features from window size 121 also
degraded the performance. Another experiment was conducted
k     Stotal (sum of sensitivity and specificity)
1     1.11
3     1.07
5     1.22
7     1.23
9     1.25
11    1.25
13    1.26
15    1.26
17    1.21
19    1.21
21    1.22
23    1.23
25    1.19
Table 5.1: Variation of Stotal with k for the kNN classifier (wsize = 121)
by obtaining results separately from the features of window sizes 15, 31 and 121 and
performing voting based on those results to obtain the final result. Voting is a simple method
to combine multiple classifiers [29]. No improvement was observed and the performance
actually degraded compared to the performance with window size 121: with C = 0.1,
γ = 0.5, a sensitivity of 0.70 and a specificity of 0.54 were attained.
Though we failed to attain a very high sum of sensitivity and specificity, this is primarily
due to the poor image quality in the database and the high overlap between the classes. An
experiment using SVDD was conducted to verify the overlap between the classes. SVDD
was trained using the RBF kernel and the training data corresponding to cancer. When
tested on the cancer test data, it accurately identified 65% of the data as belonging to the
cancer class. But when it was tested on the prostatitis data, it identified 57% of the prostatitis
data as also belonging to the cancer class. This indicates a high overlap between the classes.
It is to be noted that the authors in [39], the work where this database was originally used,
also failed to attain satisfactory results; they have also emphasized that they failed to achieve
desirable results mainly due to the poor quality of the images in the database.
γ     Stotal (sum of sensitivity and specificity)
0.1   1.30
0.2   1.25
0.3   1.24
0.4   1.31
0.5   1.35
0.6   1.40
0.7   1.43
0.8   1.41
0.9   1.38
1.0   1.37
1.3   1.27
1.5   1.26
Table 5.2: Variation of Stotal with γ (C = 0.1)
C     Stotal (sum of sensitivity and specificity)
0.1   1.35
0.2   1.32
0.3   1.24
0.4   1.28
0.5   1.27
0.6   1.27
0.7   1.26
0.8   1.25
0.9   1.25
1.0   1.25
1.3   1.14
1.5   1.13
Table 5.3: Variation of Stotal with C (γ = 0.5)
5.5 Adapting to domain change
The ultrasound images taken by one device at certain settings may differ largely from the
images taken by another device at other settings. Generally, the literature related to
ultrasound image processing also reports the device from which the images have been
collected, and the training and test images are usually taken from the same device. But if
we aspire to design a CAD system which would work for images taken from any device,
then we need to design it in such a way that the system works even if the training and test
images have been taken from different devices. Fortunately, a solution for such a problem has
already been explored in the literature, though not in the context of ultrasound image
processing. This problem can be considered a domain adaptation problem, which has been
discussed in [59].
In supervised learning, labeled training samples T = (x_i, y_i) are drawn from a domain D
and unlabeled test data are also drawn from the same domain D. So it is possible to estimate
a distribution P̂(x, y) from the training data which will correctly model the true distribution
P(x, y) which governs D.
In unsupervised learning, labeled training samples are not available; only unlabeled data is
available. We have to infer an approximation P̂(x, y) of the true unknown P(x, y) by exploiting
the statistics of the given data.
Transfer learning refers to the problem of transferring knowledge gained in one task /
domain to another task / domain.
Single task transfer learning is of two types:
1. Sample selection bias: the unlabeled test data are drawn from the same domain D as the
training data, but the very small amount of training data leads to a poor estimate P̂(x, y)
which hence does not approximate the true distribution P(x, y) well.
2. Domain adaptation: the unlabeled test data are drawn from a target domain D_t
different from the source domain D_s of the training samples.
Chapter 6
Ideally, the gender ratio of males to females should be 1:1, but this has never been the case.
Leaving aside a few exceptions, in societies across the globe male children are preferred. This
unfair preference has recently been aided by the growth of technology: with the advent of
ultrasound imaging devices, the gender of the fetus can be easily determined. The ultrasound
imaging device can be considered a blessing for mankind and has catalyzed the recent
advancement of the health care system. But unfortunately this technology has been misused,
causing a worsening gender imbalance. Gender-selective abortion is a common practice mainly
in third world nations. While the source of the problem is technology, the problem also needs
to be solved technologically. One of the possible solutions is to segment the genitalia in the
fetal ultrasound image so that it can be blocked from being displayed on the ultrasound
imaging device [60]. This will contribute to curbing female foeticide: as preventing the display
of the fetal genitalia will prevent the gender of the fetus from being known, gender based
abortion is expected to reduce.
1. In almost all societies there is a trend of marrying off girls and sending them to the hus-
band's house. So there is a stereotype that the male child will look after the parents.
2. There is a belief that a male child will carry forward the family lineage and will be the
bread-earner for the family.
Year    Girls per 1000 boys (0-6 years)
1961    976
1971    964
1981    962
1991    945
2001    927
Table 6.1: Number of girls per 1000 boys in the 0-6 years age group in India across census years
3. In countries like India, one of the main reasons is the dowry system [63]. The amount
to be paid while marrying off a girl is very high and impoverishes many families, so
people prefer a male child.
4. In table 6.2 [64], the gender ratio and literacy rate of a few states of India have been
shown; they show a strong correlation between sex ratio and literacy rate. From this we
can conclude that poor education is also one of the reasons behind the skewed sex ratio.
Only in some exceptional states like Delhi is the sex ratio skewed despite a higher
literacy rate.
5. Polygamy is also a cause behind male preference. In societies where polygamy exists,
people with better economic conditions are more biased towards male children. This
has been explained by Robert Trivers and Dan Willard (the Trivers-Willard hypothesis
[65]).
6. In many cases, parents are neutral in the case of the first child, but the preference for a
son increases with subsequent children.
3. The trade of females (women trafficking) increases. Women are sold to areas where
foeticide has caused a skewed sex ratio. Such incidences have been reported in Haryana,
whose skewed sex ratio has caused trafficking of girls from other states to Haryana [68].
translated into female foeticide unless it was aided by advanced ultrasound technology.
It is clearly evident from table 6.1 that with the advent of ultrasound technology, the gender
ratio has deteriorated at a faster pace: a steep decrease is observed from 1981 to 2001. Though
the ultrasound imaging technique was invented earlier, it became widespread in the same
period. In the early 1980s, computer software and ultrasound imaging technology combined
and the age of modern ultrasound started [69]. Ultrasound devices are becoming cheaper and
more portable, and their accuracy has increased manifold, which also increases their misuse.
Even companies like GE, which are major sellers of ultrasound devices, have agreed that the
misuse exists and have taken steps to reduce the misuse of ultrasound technology and to
promote ethical ultrasound [70]. It has been reported by Burns [71] that in India even local
entrepreneurs are running portable sex determination clinics out of vans.
Due to the easy availability of the technology, the sex ratio is getting worse in cities and towns
[61]. Thus we see that the misuse of ultrasound technology is one of the major reasons behind
the skewed gender ratio. Since the problem has been scaled up by technology, we cannot rely
simply on the ethics of doctors or wait for the government to implement strict rules; there is
an urge to solve this problem using technology itself. More sophisticated techniques need to
be devised with which the misuse of ultrasound devices can be curbed.
6.5 Methods
Not much literature exists on this topic and, to the best of our knowledge, there is no publicly
available fetal ultrasound image database. Tang and Chen [72] have proposed a two stage
algorithm. In the first ("rough") stage, pixels of interest are identified by feature based
detection. In the final ("fine") stage, a supervised learning framework is used. In the method
proposed in [60], a sliding window approach is used: for the training images, features are
calculated for each window and each window is marked as containing genitalia or not. The
training data is then used to train an SVM based system, and the trained system is then used
for identifying the windows containing genitalia.
Generalizing, when we have a large dataset, a supervised learning framework can be
used, which can be summarized in the following steps:
1. Manually segmenting the images of the training dataset into genital and non-genital
regions.
2. Calculating textural features corresponding to both regions and using those estimated
feature values to train a supervised learning system. Any suitable supervised learning
algorithm can be used. Samples corresponding to the genitalia region can be treated as
positive samples and samples corresponding to the non-genitalia regions can be treated as
negative samples. A sketch of such a sliding-window pipeline is given below.
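The following is a hedged sketch of such a pipeline, in the spirit of [60] but not an implementation of it: the window size, stride and SVM parameters are assumptions, feature_fn stands for any window-level feature extractor (e.g. the GLCM features of Chapter 5), and train_windows / train_labels are assumed to come from the manual segmentation of step 1.

```python
import numpy as np
from sklearn.svm import SVC

def extract_windows(img, win=64, stride=32):
    """Yield (top-left corner, window) pairs over the image."""
    for r in range(0, img.shape[0] - win + 1, stride):
        for c in range(0, img.shape[1] - win + 1, stride):
            yield (r, c), img[r:r + win, c:c + win]

def train_detector(train_windows, train_labels, feature_fn):
    X = np.array([feature_fn(w) for w in train_windows])   # step 2: textural features per window
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, train_labels)                                # +1: genitalia, -1: background
    return clf

def detect_regions(img, clf, feature_fn, win=64, stride=32):
    hits = []
    for (r, c), w in extract_windows(img, win, stride):
        if clf.predict(feature_fn(w).reshape(1, -1))[0] == 1:
            hits.append((r, c, win, win))                   # candidate region to be masked
    return hits
```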
But currently no such database is publicly available, and supervised learning algorithms are
not very practical when such a dataset is not available.
Alternatively, suitable features can be identified and an explicit algorithm can be implemented
for genitalia detection. For feature based identification, characteristics / features need to be
identified which distinguish the genitalia region from the non-genitalia region. Some of the
features which have been identified as suitable [72] are discussed in the following subsections.
(a) Male genitalia, taken from [72]; (b) our intended output.
E.g. a small window won't capture the contrast between the amniotic fluid and the genitalia
properly. So the window size should be chosen appropriately to capture these properties.
Chapter 7
7.1 Conclusion
Ultrasound imaging is currently a very popular medical imaging modality. More automation
can be engineered in the field of ultrasound imaging to facilitate better health services.
We have tried to explore the field in depth and breadth.
Speckle noise removal is very important for ultrasound image processing. Some of the
existing techniques of speckle noise removal have been studied. A few of those have been
implemented and their performance has been compared in terms of various statistical
parameters. Adaptive bilateral filtering and squeeze box filtering outperformed the other
methods.
Computer aided diagnosis has been studied from both the machine learning based approach
and the non-machine learning based explicit approach. Explicit algorithms for cyst detection,
artery detection and gall bladder stone detection have been proposed and implemented.
Artery detection has been done both on the unmodified input image and on the speckle
removed input image. Our algorithm for artery detection performed better on the unmodified
image; it may be inferred that the roundness property of the artery gets distorted during
speckle removal.
The aspects involved in designing a machine learning based CAD system have been discussed
in detail. A GUI has been developed which can help radiologists provide researchers with
ultrasound images and ground truth data. Methods like SVM and kNN have been studied,
tested on the available data and the results have been compared. A novel classification method
based on k-Means clustering has been proposed and implemented. A method combining SVM
with kNN has also been studied and implemented. SVM with RBF kernel performed better
than the other methods. The k-Means based method also yielded satisfactory results
both in terms of sensitivity and specificity. In our case, a larger window size for capturing
features yielded better results. Appending the SFTA features to the GLCM features did not
improve performance, and combining the features from different window sizes degraded the
performance. Reducing the feature dimension by applying PCA also did not improve perfor-
mance in terms of sensitivity and specificity. The experiment with SVDD also confirmed the
overlap between the classes in the data.
Adapting to a changing domain is an important aspect for incorporating adaptability into
CAD systems. An SVM based method for domain adaptation has been studied.
The misuse of ultrasound technology for fetal gender determination and its ill impact on the
gender ratio have been studied. It has been analyzed how the growth of ultrasound imaging
has increased illegal abortion. Possible ways to detect the fetal genitalia automatically, so
that it can be stopped from being displayed, have been discussed. Supervised learning
algorithms cannot be used unless a substantially large database is available; hence explicit
algorithms need to be devised for fetal genitalia detection. On successful implementation of
such techniques, the misuse of ultrasound imaging could be stopped.
Also, some other aspects, like the possibility of implementing ultrasound image processing
modules on Android smart devices, have been studied. Opportunities for image processing
on Android devices have been explored and simple video processing apps have been
developed.
2. Both the machine learning and non-machine learning (explicit) techniques for CAD
need to be studied in more depth. A demarcation needs to be decided regarding where
each type of method can be used. The suitability of different machine learning methods
for different tasks needs to be studied in detail.
3. Existing texture features need to be studied in more detail, with the aim of innovating
texture features more suitable for ultrasound images. Ultrasound images are quite
different from most other images, so deeper investigation is required to find a suitable
feature set for them.
4. Deep learning [73] can also be explored for better representation of ultrasound images.
It needs to be investigated in detail.
8. Techniques for stopping the misuse of ultrasound devices for gender determination and
illegal abortion need to be studied in more detail. More suitable features need to be
studied for fetal genitalia detection. Successfully integrating such algorithms with the
ultrasound device is required, and the system needs to be designed in such a way that
it is not easily vulnerable to hackers. So the security aspects also need to be studied.
Appendix A
To validate the devised algorithms, ultrasound images along with ground-truth data are
required. A Matlab based Graphical User Interface has been developed which can be used
by a radiologist / expert doctor to provide us with ground-truth data. The expert can identify
the contour corresponding to an organ, input the name of the organ and input any special
comment (e.g. whether it is infected with cancer). All those data get saved to a database. A
screen-shot of the GUI has been shown in figure A.1.
References
[3] M. Ali, D. Magee, and U. Dasgupta, Signal processing overview of ultrasound systems
for medical imaging. Texas Instruments, November 2008.
[4] K. Kaur, “Digital image processing in ultrasound images,” Int. J. Recent Innovation
Trends Comp. Comm., vol. 1, pp. 388–393, March 2013.
[5] N. K. Ragesh, A. R. Anil, and R. Rajesh, “Digital image denoising in medical ultra-
sound images: A survey,” in Proc. ICGST, pp. 67–73, 2011.
[6] K. K. Shung, “Diagnostic ultrasound: past, present, and future,” J. Med. and Bio.
Engg., vol. 31, no. 6, pp. 371–375, 2011.
[9] J.-S. Lee, “Speckle analysis and smoothing of synthetic aperture radar images,” Com-
puter graphics and image processing, vol. 17, no. 1, pp. 24–32, 1981.
[10] K. Kaur, B. Singh, and M. Kaur, “Speckle noise reduction using 2-D FFT in ultrasound
images,” International Journal of Advances in Engineering & Technology, vol. 4,
no. 2, 2012.
[11] A. Vishwa and M. S. Sharma, “Speckle noise reduction in ultrasound images by
wavelet thresholding,” International Journal of Advanced Research in Computer Sci-
ence and Software Engineering, vol. 4, no. 6, 2012.
[12] T. Loupas, W. McDicken, and P. Allan, “An adaptive weighted median filter for
speckle suppression in medical ultrasonic images,” Circuits and Systems, IEEE
Transactions on, vol. 36, no. 1, pp. 129–135, 1989.
[13] N. Ragesh, A. Anil, and R. Rajesh, “Digital image denoising in medical ultrasound
images: a survey,” in ICGST AIML-11 Conference, Dubai, UAE, pp. 67–73, 2011.
[14] C. Bo, G. Zexun, Y. Yang, and L. Xiaosong, “Application of the rough set to im-
age median denoising,” in Software Engineering, Artificial Intelligence, Networking,
and Parallel/Distributed Computing, 2007. SNPD 2007. Eighth ACIS International
Conference on, vol. 1, pp. 75–78, IEEE, 2007.
[15] S. Wu, Q. Zhu, and Y. Xie, “Evaluation of various speckle reduction filters on medical
ultrasound images,” in Engineering in Medicine and Biology Society (EMBC), 2013
35th Annual International Conference of the IEEE, pp. 1148–1151, IEEE, 2013.
[16] M. Nagao and T. Matsuyama, “Edge preserving smoothing,” Computer graphics and
image processing, vol. 9, no. 4, pp. 394–407, 1979.
[19] D. Tamilkudimagal and K. Kalpana, “Squeeze box filter for contrast enhancement in
ultrasound despeckling,” in Emerging Trends in Electrical and Computer Technology
(ICETECT), 2011 International Conference on, pp. 524–530, IEEE, 2011.
[22] L. Zhang, Y. Ren, C. Huang, and F. Liu, “A novel automatic tumor detection
for breast cancer ultrasound images,” in Fuzzy Systems and Knowledge Discovery
(FSKD), 2011 Eighth International Conference on, vol. 1, pp. 401–404, IEEE, 2011.
[25] B. S. Usha and S. Sandya, “Measurement of ovarian size and shape parameters,” in
Proc. Annual IEEE India Conference (INDICON), pp. 1–6, 2013.
[26] K. Riha and R. Benes, “Circle detection in pulsative medical video sequence,” in
Signal Processing (ICSP), 2010 IEEE 10th International Conference on, pp. 674–
677, IEEE, 2010.
[27] K. Řı́ha, J. Mašek, R. Burget, R. Beneš, and E. Závodná, “Novel method for lo-
calization of common carotid artery transverse section in ultrasound images using
modified Viola-Jones detector,” Ultrasound in medicine & biology, vol. 39, no. 10,
pp. 1887–1902, 2013.
[28] E. Cox, “A method of assigning numerical and percentage values to the degree of
roundness of sand grains,” Journal of Paleontology, vol. 1, no. 3, pp. 179–183, 1927.
[31] N. Aggarwal and R. Agrawal, “First and second order statistics features for clas-
sification of magnetic resonance brain images,” Journal of Signal & Information
Processing, vol. 3, no. 2, 2012.
[32] J. R. Carr and F. P. De Miranda, “The semivariogram in comparison to the co-
occurrence matrix for classification of image texture,” Geoscience and Remote Sens-
ing, IEEE Transactions on, vol. 36, no. 6, pp. 1945–1952, 1998.
[35] D.-R. Chen, R.-F. Chang, C.-J. Chen, M.-F. Ho, S.-J. Kuo, S.-T. Chen, S.-J. Hung,
and W. K. Moon, “Classification of breast ultrasound images using fractal feature,”
Clinical imaging, vol. 29, no. 4, pp. 235–245, 2005.
[38] J. A. Richards and X. Jia, Remote sensing digital image analysis, vol. 3. Springer,
1999.
[41] Y.-L. Huang, K.-L. Wang, and D.-R. Chen, “Diagnosis of breast tumors with ultra-
sonic texture analysis using support vector machines,” Neural Computing & Appli-
cations, vol. 15, no. 2, pp. 164–169, 2006.
[43] S. Sun and R. Huang, “An adaptive k-nearest neighbor algorithm,” in Fuzzy Systems
and Knowledge Discovery (FSKD), 2010 Seventh International Conference on, vol. 1,
pp. 91–94, IEEE, 2010.
[44] R.-F. Chang, W.-J. Wu, W. K. Moon, Y.-H. Chou, and D.-R. Chen, “Support vector
machines for diagnosis of breast tumors on US images,” Academic radiology, vol. 10,
no. 2, pp. 189–197, 2003.
[45] R.-F. Chang, W.-J. Wu, W. K. Moon, and D.-R. Chen, “Automatic ultrasound
segmentation and morphology based diagnosis of solid breast tumors,” Breast Cancer
Research and Treatment, vol. 89, no. 2, pp. 179–185, 2005.
[49] S. M. Han, H. J. Lee, and J.-Y. Choi, “Prostate cancer detection using texture and
clinical features in ultrasound image,” in Information Acquisition, 2007. ICIA’07.
International Conference on, pp. 547–552, IEEE, 2007.
[52] C.-W. Hsu, C.-C. Chang, C.-J. Lin, et al., “A practical guide to support vector
classification,” 2003.
[53] texture features and histogram moments,” in Biomedical Imaging: From Nano to
Macro, 2010 IEEE International Symposium on, pp. 288–291, IEEE, 2010.
[54] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM
Transactions on Intelligent Systems and Technology, vol. 2, pp. 27:1–27:27, 2011.
Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[55] Q. Gu and J. Han, “Clustered support vector machines,” in Proceedings of the Six-
teenth International Conference on Artificial Intelligence and Statistics, pp. 307–315,
2013.
[57] D. M. Tax and R. P. Duin, “Support vector data description,” Machine learning,
vol. 54, no. 1, pp. 45–66, 2004.
[58] W.-C. Chang, C.-P. Lee, and C.-J. Lin, “A revisit to support vector data description
(SVDD),” Last accessed: June 10, 2014.
[60] H. Godil, S. Davey, and R. Shekhar, “A novel algorithm for feature detection and
hiding from ultrasound images,” in Proceedings of the International Conference on
Bioinformatics, Computational Biology and Biomedical Informatics, p. 681, ACM,
2013.
[62] K. C. Jena, “Female foeticide in India: A serious challenge for the society,” Orissa
review, 2008.
[63] R. Patel, “The practice of sex selective abortion in India: May you be the mother of
a hundred sons,” Carolina Paper Series. Chapel Hill: Center for Global Initiatives,
University of North Carolina, vol. 20, 1996.
[64] “State census 2011.” http://www.census2011.co.in/states.php. Last accessed:
May 25, 2014.
[72] S. Tang and S. Chen, “A fast automatic recognition and location algorithm for fetal
genital organs in ultrasound images,” Journal of Zhejiang University SCIENCE B,
vol. 10, no. 9, pp. 648–658, 2009.
[73] I. Arel, D. C. Rose, and T. P. Karnowski, “Deep machine learning-a new frontier in
artificial intelligence research,” Computational Intelligence Magazine, IEEE, vol. 5,
no. 4, pp. 13–18, 2010.
[74] “The ultrasound scanner that plugs into a smartphone and could revolutionise
medical care in third world countries.” http://www.dailymail.co.uk/sciencetech/article-2363964/The-ultrasound-scanner-plugs-SMARTPHONE-revolutionise-medical-care-world-countries.html, July 15, 2013.
[76] “Android phones sold more than iPhones in April-June quarter, says IDC.”
http://businesstoday.intoday.in/story/android-phones-sold-more-than-iphones-in-april-june-idc/1/197617.html, August 8, 2013.
[77] G. Bradski, “The OpenCV library,” Dr. Dobb's Journal, vol. 25, no. 11, pp. 120–126,
2000.