
Biomedical Signal Processing and Control 12 (2014) 10–18

Contents lists available at ScienceDirect

Biomedical Signal Processing and Control


journal homepage: www.elsevier.com/locate/bspc

Hidden Markov random field model based brain MR image segmentation using clonal selection algorithm and Markov chain Monte Carlo method

Tong Zhang a,∗, Yong Xia a, David Dagan Feng a,b

a Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, The University of Sydney, NSW 2006, Australia
b Med-X Research Institute, Shanghai Jiao Tong University, Shanghai 200025, China

Article info

Article history:
Received 28 February 2013
Received in revised form 30 July 2013
Accepted 31 July 2013
Available online 26 September 2013

Keywords:
Image segmentation
Magnetic resonance imaging (MRI)
Clonal selection algorithm (CSA)
Markov random field (MRF)
Hidden Markov random field (HMRF)
Markov chain Monte Carlo (MCMC)

Abstract

The hidden Markov random field (HMRF) model has been widely used in image segmentation, as it provides a spatially constrained clustering scheme on two sets of random variables. However, in many HMRF-based segmentation approaches, both the latent class labels and statistical parameters have been estimated by deterministic techniques, which usually lead to local convergence and less accurate segmentation. In this paper, we incorporate the immune inspired clonal selection algorithm (CSA) and Markov chain Monte Carlo (MCMC) method into HMRF model estimation, and thus propose the HMRF–CSA algorithm for brain MR image segmentation. Our algorithm employs a three-step iterative process that consists of MCMC-based class label estimation, bias field correction and CSA-based statistical parameter estimation. Since both the MCMC and CSA are global optimization techniques, the proposed algorithm has the potential to overcome the drawback of traditional HMRF-based segmentation approaches. We compared our algorithm to the state-of-the-art GA–EM algorithm, the deformable cosegmentation algorithm, the segmentation routines in the widely-used statistical parametric mapping (SPM) software package and the FMRIB software library (FSL) on both simulated and clinical brain MR images. Our results show that the proposed HMRF–CSA algorithm is robust to image artifacts and can differentiate major brain structures more accurately than the other algorithms.
© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Magnetic resonance (MR) imaging as a structural imaging modality is widely used in clinical practice and neuroscience research. Compared to functional positron emission tomography (PET), it offers higher spatial resolution and better soft-tissue contrast. The recently developed PET/MR scanner combines both imaging techniques, and thus enables the simultaneous acquisition of functional and structural information in a single scanning session [1,2]. The complementary information embedded in the co-aligned MR and PET data can be analyzed more efficiently. The novel dual-modality PET/MR imaging technique has hence stimulated an increasing number of research opportunities [1,3,4]. Specifically, to guide the functional investigations, the traditionally popular MR image research areas, such as MR image segmentation, have been revitalized [4]. Segmentation of brain MR images into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) plays an essential role in a wide spectrum of clinical and research applications. Since the manual segmentation performed by medical professionals is time-consuming, expensive and subject to operator variability, automated segmentation algorithms have been developed in the literature [5–15]. Among them, statistical model based approaches have attracted considerable research attention.

A widely used statistical model in neuroscience is the Gaussian mixture model (GMM) [5–8], which assumes brain voxel values are sampled independently from one of K Gaussian distributions with a prior probability. The GMM is not only consistent with the piecewise constant nature of brain MR images, but also has computational advantages. In particular, the segmentation algorithm in the widely-used statistical parametric mapping (SPM) software package [16], namely unified segmentation [12], is based on this method combined with a nonlinear registration process. Model parameters of the GMM can be estimated according to the maximum likelihood (ML) criterion using the expectation-maximization (EM) algorithm [17]. However, the EM-based ML estimation has several drawbacks, including over-fitting and being apt to be trapped in local optima. To overcome such drawbacks, several global optimization techniques have been used to replace the EM algorithm.

∗ Corresponding author. Tel.: +61 2 9351 3702.
E-mail address: tong@it.usyd.edu.au (T. Zhang).

1746-8094/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.bspc.2013.07.010
For instance, Tohka et al. [6] incorporated the genetic algorithm into the likelihood estimation, and thus proposed the GA–EM algorithm, which can produce satisfying segmentation results. Alternatively, maximum a posteriori (MAP) estimation is a common substitution for the ML estimation when prior knowledge is available. The prior of voxels' spatial dependence is usually interpreted by modeling the voxel class labels as a Markov random field (MRF) [18–21]. Thus, the image segmentation problem can be approached within the MAP–MRF framework. A classical solution is the iterated conditional modes (ICM) algorithm [8,15,22–24]. Despite being computationally efficient, the ICM algorithm has an evident drawback of local convergence. Thus, global stochastic searching based inference methods, such as the Markov chain Monte Carlo (MCMC) inference [25], have been extensively studied to substitute for the deterministic routines [26,27].

The posterior criterion is to approximate the optimal class labels given the model parameters and the observed image. However, it fails to examine whether the model parameters fit the observed image data. The hidden Markov random field (HMRF) model can naturally characterize this important penalty, and hence better suits the image segmentation problem [19,28]. In this model, both the observed image and the hidden voxel labels are assumed to be random variables. Image segmentation is represented as the joint approximation of the optimal hidden variables and model parameters conditioned on the observed data. The HMRF model was introduced into the brain MR image segmentation field by Zhang et al. [15]. In their HMRF–EM framework, they use a two-step iterative procedure to solve the joint probability. The ICM algorithm is first used to approximate the hidden class labels, and then the EM algorithm is used to estimate the model parameters. Despite being popular and computationally efficient [29,30], both the ICM and EM algorithms used in this approach are deterministic searching techniques, which tend to converge at local optima and are highly dependent on initializations. An intuitive solution is to replace the deterministic algorithms with stochastic searching techniques. It has been reported that the segmentation performance can be improved by using the MCMC technique to approximate class labels [9,31–33] or by using evolutionary algorithms to replace the EM algorithm for parameter estimation [10,14].

Recently, immune inspired algorithms have drawn significant attention, due to their highly evolved, parallel and distributed nature. They provide a potential source of new evolutionary algorithms for solving complex optimization problems. Among them, the clonal selection algorithm (CSA) has been demonstrated to be effective in several research fields [34]. It is based on the clonal selection theory, claiming that only those antibodies that recognize the antigens will be selected to proliferate, and the proliferated cells will improve their affinity to the antigens through an affinity maturation process [35,36]. The CSA mimics the nature of an immune response to an antigenic stimulus, and has the potential to achieve global optima. In our preliminary study, we used the CSA to learn the parameters of the HMRF model, and proposed the eHMRF algorithm [10] for brain MR image segmentation.

In this paper, we incorporate both the CSA and MCMC techniques into the HMRF model estimation, and thus propose the HMRF–CSA algorithm for brain MR image segmentation. The observed brain image data and the underlying voxel labels are modeled by the HMRF. The segmentation is iteratively accomplished using a joint estimation of the class labels and model parameters. First, the optimal label configuration is approximated by the MCMC method. Then, the implicit HMRF model parameters are estimated by the CSA. We replace all deterministic search routines with global stochastic optimization techniques, with the aim of improving the robustness of the segmentation algorithm. Meanwhile, the image inhomogeneity in MR images is modeled as a multiplicative component of the piecewise constant image. Once the intermediate segmentation result is obtained from MCMC inference, the image inhomogeneity is estimated and partly removed from the image. The proposed HMRF–CSA algorithm has been compared to the state-of-the-art GA–EM method [37], the deformable cosegmentation (D–C) algorithm [11], the unified segmentation routine in the SPM software package [16] and the HMRF–EM segmentation routine in the FMRIB Software Library (FSL) [38] on both simulated and clinical brain MR images.

2. Related work

2.1. Image inhomogeneity model

One of the inherent challenges for MR image analysis is the existence of image inhomogeneity, also known as the bias field or intensity non-uniformity (INU) [39–44]. Image inhomogeneity is referred to as a smooth intensity variation across the image, arising from imperfections of image acquisition [45]. Let a brain MR image be denoted by y = \{y_i; i = 1, 2, \ldots, N\}, where y_i represents the intensity at voxel i and N is the number of voxels. The unknown bias field B = \{b_i; i = 1, 2, \ldots, N\} is usually modeled as a multiplicative component of y, shown as follows

y = B \cdot \hat{y} + \aleph    (1)

where \hat{y} is the ideal image, and \aleph \sim N(0, \sigma_n^2) is the additive Gaussian white noise. Since the bias field B varies slowly in the image, it can be defined as a smooth function on the entire image domain. We adopt orthogonal polynomials \{W_j : j = 1, 2, \ldots, N_{OP}\} as basis functions to approximate the bias field [46,47]

B = \sum_{j=1}^{N_{OP}} \varphi_j W_j    (2)

where \varphi = \{\varphi_j : j = 1, 2, \ldots, N_{OP}\} denotes the combination coefficients, N_{OP} = (D + 1)(D + 2)/2 is the number of polynomials, and D is the degree of the polynomials.
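The polynomial model of Eqs. (1) and (2) is easy to prototype. The following sketch is an illustration only, not the authors' implementation: it builds the N_OP = (D + 1)(D + 2)/2 basis images of a degree-D polynomial for a single 2-D slice and composes a smooth bias field from a coefficient vector; Legendre polynomials are assumed here as one possible choice of orthogonal basis.

import numpy as np
from numpy.polynomial import legendre

def polynomial_basis(shape, degree=3):
    """Build the (degree+1)(degree+2)/2 smooth 2-D basis images W_j of Eq. (2).

    Legendre polynomials on [-1, 1] are assumed here; the model only requires
    a smooth orthogonal basis on the image domain.
    """
    rows, cols = shape
    u = np.linspace(-1.0, 1.0, rows)
    v = np.linspace(-1.0, 1.0, cols)
    U, V = np.meshgrid(u, v, indexing="ij")
    basis = []
    for total in range(degree + 1):            # total degree of each term
        for p in range(total + 1):             # split the degree between axes
            q = total - p
            cp = np.zeros(p + 1); cp[p] = 1.0  # coefficients selecting P_p
            cq = np.zeros(q + 1); cq[q] = 1.0  # coefficients selecting P_q
            basis.append(legendre.legval(U, cp) * legendre.legval(V, cq))
    return np.stack(basis)                     # shape: (N_OP, rows, cols)

# Example: a smooth multiplicative bias field B = sum_j phi_j * W_j (Eq. (2))
W = polynomial_basis((64, 64), degree=3)       # N_OP = 10 basis images
phi = np.random.default_rng(0).normal(0.0, 0.05, size=W.shape[0])
phi[0] = 1.0                                   # keep the field close to 1
B = np.tensordot(phi, W, axes=1)               # weighted sum of basis images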
2.2. Statistical models

Suppose the voxel intensities y = \{y_j; j = 1, 2, \ldots, N\} follow the GMM. Each intensity value y_j is sampled independently from the Gaussian distribution N(\mu_k, \Sigma_k) with the prior probability \pi_k. The likelihood of the observed image can be calculated as

L(y; \pi, \mu, \Sigma) = \prod_{j=1}^{N} \sum_{k=1}^{K} \pi_k \, N(y_j; \mu_k, \Sigma_k)    (3)

The optimal GMM parameters can be estimated by maximizing the above likelihood function. Once those parameters are determined, the brain image segmentation problem can be solved by classifying each voxel with a Bayes classifier [7].
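To make Eq. (3) and the Bayes classification rule concrete, the sketch below evaluates the GMM log-likelihood of a vector of voxel intensities and assigns each voxel to the class with the largest posterior \pi_k N(y_j; \mu_k, \sigma_k^2). It is a minimal single-channel (univariate Gaussian) illustration with assumed toy parameters, not the authors' code.

import numpy as np

def gaussian_pdf(y, mu, var):
    """Univariate normal density N(y; mu, var), evaluated per voxel."""
    return np.exp(-0.5 * (y - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def gmm_log_likelihood(y, pi, mu, var):
    """log L(y; pi, mu, var) of Eq. (3): sum_j log sum_k pi_k N(y_j; mu_k, var_k)."""
    mixture = pi[None, :] * gaussian_pdf(y[:, None], mu[None, :], var[None, :])
    return np.sum(np.log(mixture.sum(axis=1) + 1e-12))

def bayes_classify(y, pi, mu, var):
    """Assign each voxel to the class with the largest posterior (MAP rule)."""
    posterior = pi[None, :] * gaussian_pdf(y[:, None], mu[None, :], var[None, :])
    return np.argmax(posterior, axis=1)

# Toy example with K = 3 classes standing in for CSF, GM and WM intensities.
rng = np.random.default_rng(1)
mu = np.array([30.0, 90.0, 150.0])
var = np.array([100.0, 100.0, 100.0])
pi = np.array([0.2, 0.4, 0.4])
y = np.concatenate([rng.normal(m, 10.0, 200) for m in mu])
print(gmm_log_likelihood(y, pi, mu, var), np.bincount(bayes_classify(y, pi, mu, var)))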
To incorporate the spatial constraint into this model, we use the MRF to model the class labels x = \{x_j; j = 1, 2, \ldots, N\}. According to the Hammersley–Clifford theorem, the prior joint distribution of the class labels p(x) follows a Gibbs distribution. Within the well-known MAP–MRF framework, the image segmentation is equivalent to finding the optimal configuration x* by maximizing its posterior probability

x^* = \arg\max_x p(x|y; \theta) = \arg\max_x p(y|x; \theta)\, p(x)    (4)

where \theta = \{\mu_k, \Sigma_k; k = 1, 2, \ldots, K\} denotes the model parameters, p(y|x; \theta) is the image likelihood, and p(x) is the spatial prior.

Instead of treating the image intensities y as the observation evidence, we can consider them as being modeled by another random field defined on the identical image lattice. Then the MRF x representing the latent class labels becomes the HMRF. In this model, the image segmentation problem is formulated as maximizing the joint probability of the configuration x and the parameters \theta

(x^*, \theta^*) = \arg\max_{x, \theta} p(x, \theta|y) = \arg\max_{x, \theta} p(x|y, \theta)\, p(\theta|y, x)    (5)

It should be noted that the difference between the posterior in Eq. (4) and the joint probability in Eq. (5) is the penalty term p(\theta|y, x), which examines whether the model parameters are consistent with the observations given the configuration x.

3. HMRF–CSA algorithm

The HMRF model aims to jointly estimate the optimal class labels and model parameters (x, \theta) in Eq. (5). The estimation process can be divided into two mutually dependent optimization steps: searching for the optimal configuration x* and learning the fittest model parameters \theta*. When bias field correction is considered, the HMRF model estimation can be achieved using the following three-step iterated procedure

\begin{cases} x^{(t+1)} = \arg\max_x p(x|y^{(t)}, \theta^{(t)}) \\ y^{(t+1)} = f(y^{(t)}, x^{(t+1)}) \\ \theta^{(t+1)} = \arg\max_\theta p(\theta|y^{(t+1)}, x^{(t+1)}) \end{cases}    (6)

where f(·,·) denotes the function that corrects the bias field based on the observation y and the segmentation result x, and t ∈ \{1, 2, \ldots, T_{max}\} denotes the current iteration number. At each iteration, we first utilize the MCMC method to realize the MRF–MAP estimation. With the approximated segmentation results, the bias field is then estimated. After bias field correction, the CSA is used to learn the HMRF model parameters. The iteration stops when the maximum iteration number is reached or the segmentation result becomes stable.

3.1. MCMC sampling for voxel classification

The first step is to find the optimal configuration x* by MRF–MAP approximation. We use the MCMC method to solve this optimization problem. According to Eq. (4), given any particular configuration x, if we assume the y_j are mutually independent and follow a multivariate Gaussian distribution with parameters \theta_k = \{\mu_k, \Sigma_k\}, the likelihood is

p(y|x; \theta) = \prod_{j \in S, x_j = k} N(y_j; \theta_k)    (7)

The joint distribution of the MRF x can be expressed as a Gibbs function [19],

p(x) = \frac{1}{Z} \exp\left(-\sum_{c \in C} \frac{V_c(x)}{T}\right)    (8)

where Z is a normalizing constant, V_c(x) denotes the potential of clique c, C is the assembly of all cliques determined by the neighborhood system, and T is a temperature parameter. We use the Potts model to represent the clique potential. Applying Eqs. (7) and (8) to Eq. (4) and applying the negative logarithmic transformation, we have

x^* = \arg\min_{x \in X} \{E(x)\}    (9)

where E(x) = \sum_{c \in C} V_c(x)/T - \sum_{j \in S, x_j = k} \ln(N(y_j; \theta_k)) is the energy function.

Following the simulated annealing MCMC methods [19], we define a cooling schedule for the temperature parameter T,

T^{(i+1)} = T^{(i)} \times C    (10)

where i = 1, 2, \ldots, I denotes the iteration number of the MCMC algorithm and C is a cooling factor; here we empirically set T^{(0)} = 4 and C = 0.97. Given a brain MR image y and an initial configuration of labels x^{(0)}, the model parameters can be calculated. Define the jumping density Q(\cdot|x^{(i)}), which suggests a random move from x^{(i)}, as a Gaussian distribution. At each iteration, draw a candidate x^{*(i+1)} from the proposal density Q(x^{*(i+1)}|x^{(i)}) and a random sequence u = \{u_j, j \in S\} from the uniform distribution u(0, 1), and calculate the acceptance ratio for each voxel j

\alpha_j = \min\left(1, \frac{p(x_j^{*(i+1)}|y^{(t)}, \theta^{(t)})}{p(x_j^{(i)}|y^{(t)}, \theta^{(t)})}\right)    (11)

If u_j < \alpha_j, accept the simulation x_j^{(i+1)} = x_j^{*(i+1)}; otherwise reject it and keep the class label the same as that of the previous iteration, x_j^{(i+1)} = x_j^{(i)}. Repeat the iteration until the maximum iteration number is reached. Our implementation is summarized in Algorithm I.

Algorithm I: MCMC sampling for voxel classification
Input: Observed brain MR image y
Output: Optimal voxel class labels x*
Initialization: segmentation result x^(0) or parameters θ^(0)
For iteration i = 1, 2, ..., I do
  1) Draw a candidate x^{*(i+1)} from the proposal density Q(x^{*(i+1)}|x^{(i)}), and a random sequence u ∈ u(0, 1)
  2) For each voxel j ∈ S, calculate the acceptance ratio α_j based on Eq. (11)
  3) If u_j < α_j, set x_j^{(i+1)} = x_j^{*(i+1)}. Else, set x_j^{(i+1)} = x_j^{(i)}
  4) Update the temperature parameter T^{(i+1)} using Eq. (10)
End For.
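Algorithm I is essentially a per-voxel Metropolis sampler run under a simulated-annealing temperature. The sketch below is a simplified 2-D illustration written under stated assumptions (4-neighbourhood Potts prior, univariate Gaussian likelihood, uniform label proposals); it mirrors the acceptance test of Eq. (11) and the cooling rule of Eq. (10), but it is not the published implementation.

import numpy as np

def local_energy(labels, y, j_idx, k, mu, var, beta, shape):
    """Negative log posterior (up to a constant) of giving class k to voxel j."""
    r, c = divmod(j_idx, shape[1])
    # Potts clique potentials over the 4-neighbourhood
    neigh = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    v = sum(beta for (nr, nc) in neigh
            if 0 <= nr < shape[0] and 0 <= nc < shape[1]
            and labels[nr * shape[1] + nc] != k)
    # Gaussian data term: -log N(y_j; mu_k, var_k)
    data = 0.5 * np.log(2 * np.pi * var[k]) + 0.5 * (y[j_idx] - mu[k]) ** 2 / var[k]
    return v + data

def mcmc_labels(y, shape, mu, var, K=3, beta=1.0, T0=4.0, cool=0.97, iters=50, seed=0):
    """Metropolis label sampling with geometric cooling (Eqs. (10) and (11))."""
    y = np.asarray(y, dtype=float).ravel()
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, K, size=y.size)          # x^(0): random start
    T = T0
    for _ in range(iters):
        for j in range(y.size):
            cand = rng.integers(0, K)                  # candidate label x*_j
            dE = (local_energy(labels, y, j, cand, mu, var, beta, shape)
                  - local_energy(labels, y, j, labels[j], mu, var, beta, shape))
            # acceptance test: u_j < min(1, exp(-dE / T)), cf. Eq. (11)
            if rng.uniform() < np.exp(-dE / T):
                labels[j] = cand
        T *= cool                                      # Eq. (10): T <- T * C
    return labels.reshape(shape)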
3.2. Bias field correction

After the MCMC-based voxel classification, the intermediate segmentation result x* and the minimum energy E = \{E_{kj}; x_j = k, j \in S\} are obtained. Let us consider the normalized posterior probability H = \{H_{kj}; j = 1, 2, \ldots, N, k = 1, 2, \ldots, K\} as a soft segmentation result

H_{kj} = p(x_j = k|y_j, \theta_k) = \frac{\exp(-E_{kj})}{\sum_{k=1}^{K} \exp(-E_{kj})}    (12)

Based on the piecewise constant nature of the ideal MR image, we define the restored image as the product of the soft segmentation and the corresponding mean \mu_k

\hat{y}_j = \sum_{k=1}^{K} H_{kj} \cdot \mu_k    (13)

The bias field can be estimated by solving the following least-square fitting problem using the singular value decomposition (SVD) based technique [48]

\varphi^{(t)} = \arg\min_{\varphi} \left\| y ./ \hat{y}^{(t)} - \sum_{j=1}^{N_{OP}} \varphi_j W_j \right\|^2    (14)

where "./" represents the point-by-point division. With the estimated combination coefficients, the bias field is obtained as

B^{(t)} = \sum_{j=1}^{N_{OP}} \varphi_j^{(t)} W_j    (15)

Then, the bias field corrupted image can be restored as follows

y^{(t+1)} = f(y^{(t)}|x^{(t)}) = \exp(\ln y^{(t)} - \ln B^{(t)})    (16)
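The fit in Eq. (14) becomes an ordinary linear least-squares problem once each basis image is flattened into a column of a design matrix. The sketch below is one possible realization of Eqs. (14)-(16), not the paper's exact routine; it assumes a basis stack such as the hypothetical polynomial_basis helper from the earlier sketch and relies on numpy's SVD-based lstsq solver.

import numpy as np

def correct_bias(y, y_hat, basis):
    """Estimate the bias field and restore the image (Eqs. (14)-(16)).

    y      : observed 2-D slice
    y_hat  : reconstructed piecewise-constant image from Eq. (13)
    basis  : stack of N_OP basis images W_j, e.g. from polynomial_basis()
    """
    ratio = (y / np.maximum(y_hat, 1e-6)).ravel()      # y ./ y_hat in Eq. (14)
    A = basis.reshape(basis.shape[0], -1).T            # one column per W_j
    # np.linalg.lstsq minimizes ||A*phi - ratio||^2 with an SVD-based solver
    phi, *_ = np.linalg.lstsq(A, ratio, rcond=None)
    B = np.maximum((A @ phi).reshape(y.shape), 1e-6)   # Eq. (15): bias field
    corrected = np.exp(np.log(np.maximum(y, 1e-6)) - np.log(B))   # Eq. (16)
    return corrected, B, phi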
3.3. CSA for parameter estimation

The third step is to learn the optimal parameters given the current image intensities y^{(t)} and configuration x^{(t)} by maximizing the posterior probability p(\theta|y^{(t)}, x^{(t)})

p(\theta|y, x) \propto p(y|x; \theta)\, p(\theta) = \prod_{j=1}^{N} \prod_{k=1}^{K} N(y_j|x_j = k; \theta_k)\, p(\theta_{kj})    (17)

where p(\theta) is the prior probability of the parameters. This prior denotes the voxel based Markov property information p(\theta_{kj}) = p(x_j = k|x_{\partial j}), which can be calculated from the MRF energy. We define it as a mixture of these terms to balance both the convergence and the diversity of the parameters, for each voxel j ∈ S

p(\theta_{kj}) = \vartheta\, p(x_j = k|x_{\partial j}) + (1 - \vartheta)\, \Pi_{kj}    (18)

where \vartheta is a balance constant and \Pi_{kj} = \pi_k denotes the voxel-wise global prior. Given any particular parameter set \theta, the objective function of this optimization problem shown in Eq. (18) can be calculated. To achieve the global optima, we employ the CSA [35] to solve this optimization problem by simulating all the possible parameters in a population manner. The CSA is an evolutionary optimization algorithm that is capable of finding global optimal solutions by iteratively generating a population of real-coded antibodies. In this paper, the population of antibodies N_p is set to 100 to reach a balance between satisfactory optimization results and reasonable computation duration. Each antibody is defined as a candidate parameter set \theta. The affinity of each antibody \theta_k with the specific antigen is defined as the posterior likelihood p(\theta|y, x). The iterative optimization process consists of the following six major steps.

Step 1: Evaluate the affinity of each antibody and sort all antibodies in descending order according to their affinities.

Step 2: Select the N_S best antibodies from the current population and clone them to form a population of clones. For the selected antibody with the j-th highest affinity, the number of its clones is defined to be proportional to its affinity ranking, shown as follows

N_{cj} = \mathrm{round}\left(\frac{\beta \cdot N_p}{j}\right)    (19)

where \beta is a constant that controls the cloning ratio, and round(·) converts a real value to its nearest integer.

Step 3: Apply the hypermutation and receptor editing operations to the cloned population with the probabilities p_{hm} and p_{re}, respectively. Hypermutation is defined as randomly changing the values of the current antibody within ±10% of its dynamic range, with the aim of searching for optimal solutions locally. Receptor editing is defined as randomly changing the antibody within ±100% of its dynamic range to enable global search.

Step 4: Evaluate the affinities of the antibodies in the cloned population and sort them according to their affinities in descending order.

Step 5: Select top-ranking antibodies in the cloned population to replace the 40% of antibodies with the lowest affinities in the memory cells set, and use other top-ranking antibodies to replace the entire remaining set. The memory cells set ensures the preservation of the so-far obtained optimal solution such that the highest affinity increases monotonously over generations.

Step 6: Replace the 10% of antibodies with the lowest affinities in the remaining set with randomly generated antibodies to introduce diversity into the new population.

This process is iteratively repeated until a stopping criterion, such as the maximum iteration number, is met (Fig. 1).

Fig. 1. Flow chart of the HMRF–CSA algorithm: to-be-segmented images → MAP estimation via MCMC sampling → bias field estimation and image correction → HMRF model estimation based on CSA → (repeat until the stopping criterion is met) → output: bias field corrected image, HMRF parameters, and segmentation results.
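As an illustration of the six steps above, the following sketch runs one CSA generation over a population of real-coded antibodies. It is a generic clonal-selection skeleton written under stated assumptions (a user-supplied affinity function standing in for the posterior of Eq. (17), hypermutation and receptor editing implemented as bounded uniform perturbations, and a simplified replacement of the worst antibodies instead of an explicit memory set); only the clone-count rule follows Eq. (19) exactly.

import numpy as np

def csa_generation(pop, affinity, rng, n_select=None, beta=0.5,
                   p_hm=0.8, p_re=0.1, frac_replace=0.4, frac_random=0.1):
    """One generation of clonal selection over the rows of `pop` (antibodies).

    `affinity(theta)` scores a candidate parameter vector; higher is better.
    """
    n_pop, dim = pop.shape
    n_select = n_select or n_pop // 2
    span = pop.max(axis=0) - pop.min(axis=0) + 1e-9          # dynamic range

    order = np.argsort([-affinity(a) for a in pop])          # Step 1: rank by affinity
    clones = []
    for rank, idx in enumerate(order[:n_select], start=1):   # Step 2: clone the best
        n_c = int(round(beta * n_pop / rank))                # Eq. (19)
        clones.extend(pop[idx].copy() for _ in range(max(n_c, 1)))
    clones = np.array(clones)

    # Step 3: hypermutation (+/-10% of range) and receptor editing (+/-100%)
    hm = rng.random(len(clones)) < p_hm
    clones[hm] += rng.uniform(-0.1, 0.1, (hm.sum(), dim)) * span
    re = rng.random(len(clones)) < p_re
    clones[re] += rng.uniform(-1.0, 1.0, (re.sum(), dim)) * span

    # Steps 4-6: re-rank the clones, replace the worst antibodies, inject diversity
    clones = clones[np.argsort([-affinity(c) for c in clones])]
    new_pop = pop[order].copy()
    n_replace = int(frac_replace * n_pop)
    new_pop[-n_replace:] = clones[:n_replace]
    n_rand = int(frac_random * n_pop)
    new_pop[-n_rand:] = pop.min(axis=0) + rng.random((n_rand, dim)) * span
    return new_pop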
3.4. Summary

Given an initial segmentation result generated by the K-means algorithm, the HMRF–CSA algorithm iteratively performs the MCMC-based voxel classification, bias field correction and CSA-based model parameter estimation until the algorithm converges. Once convergence is reached, the final segmentation result, bias field and model parameters are obtained jointly. The major steps of the HMRF–CSA algorithm are summarized in Algorithm II.

Algorithm II: HMRF–CSA brain image segmentation algorithm
Input: Observed brain MR image y
Output: Optimal voxel class labels x*, model parameters θ*, and the INU B
Initialization: Segmentation x^(0), random population of parameter antibodies θ, and the INU B^(0) = {1, 1, ..., 1}
For iteration t = 1, 2, ..., T_max do
  A. Produce a new generation of antibodies
    A1. Evaluate the affinity of each antibody using Eq. (17);
    A2. Select the N_S best individuals and reproduce (clone) them;
    A3. Apply the hypermutation and receptor editing to those cloned antibodies;
    A4. Evaluate the affinity of the newly produced antibodies;
    A5. Use top-ranking new antibodies to replace 40% of the memory cells set and the entire remaining set;
    A6. Replace the worst 10% of antibodies in the remaining set with randomly generated ones;
  B. MCMC approximation for the optimal voxel labels
    B1. Identify the sub-optimal model parameters encoded by the best antibody;
    B2. Update each voxel's class label x_j* based on Algorithm I;
  C. Correct the image and update the INU
    C1. Estimate each voxel's posterior probability using Eq. (12);
    C2. Construct the ideal image ŷ^(t) using Eq. (13);
    C3. Estimate the bias field by solving the least-square fitting problem in Eq. (14);
    C4. Update the bias field as B^(t) using Eq. (15);
    C5. Correct the image y^(t) using Eq. (16)
End For.
4. Experimental results

4.1. Results on simulated data

We first compared the proposed HMRF–CSA algorithm to our previously proposed eHMRF algorithm [10], the GA–EM algorithm [6] that is available in the GAMIXTURE package [37], the D–C algorithm [11], the classic HMRF–EM algorithm in the FSL package, and the unified segmentation routine [12] in the SPM package [16] on simulated T1-weighted brain MR studies obtained from the BrainWeb dataset [49]. The BrainWeb dataset provides a set of simulated brain images, which are generated from an anatomic model with different INU and noise levels. Each simulated study has a dimension of 181 × 217 × 181 and a voxel size of 1 mm × 1 mm × 1 mm.

To demonstrate the performance of the proposed algorithm, Fig. 2 shows the 88th transverse slice of the simulated study with 40% INU and 7% noise, the bias field corrected image, the estimated bias field, the segmentation results obtained by using six algorithms, and the ground truth tissue map. It reveals that the segmentation result produced by the proposed algorithm is more similar to the ground truth than the results produced by the other algorithms.

Fig. 2. Comparison between the segmentation results of six algorithms in simulated brain MR images: (a) 88th transverse slice in the simulated study (with 7% noise and 40% INU); (b) INU corrected image; (c) estimated INU; (d) result of the HMRF–EM algorithm; (e) result of the D–C algorithm; (f) result of the SPM package; (g) result of the GA–EM algorithm; (h) result of the eHMRF algorithm; (i) result of the proposed HMRF–CSA algorithm; (j) ground truth.

Next, we further compared these algorithms in two groups of simulated MR image studies. The first group consists of four studies with 20% INU and different levels of noise, ranging from 1% to 7%. Since the ground truth tissue map is available, the delineation of each brain tissue type is assessed quantitatively by using the Dice similarity coefficient (DSC) [50]

D(V_s(k), V_g(k)) = 2 \times \frac{|V_s(k) \cap V_g(k)|}{|V_s(k)| + |V_g(k)|}    (20)

where V_s(k) is the volume of brain tissue class k in the segmentation result, V_g(k) is the corresponding volume in the ground truth, and |V| represents the number of voxels in volume V. The overall performance was evaluated by using the segmentation accuracy, which was calculated as the percentage of correctly classified brain voxels.
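Eq. (20) and the overall accuracy are straightforward to compute from two label volumes; the generic sketch below (assuming integer-labelled volumes of identical shape, with background labelled 0) is included for clarity.

import numpy as np

def dice(seg, truth, k):
    """Dice similarity coefficient of Eq. (20) for tissue class k."""
    vs, vg = (seg == k), (truth == k)
    denom = vs.sum() + vg.sum()
    return 2.0 * np.logical_and(vs, vg).sum() / denom if denom else 1.0

def overall_accuracy(seg, truth, mask=None):
    """Percentage of correctly classified voxels (optionally within a brain mask)."""
    mask = np.ones(truth.shape, bool) if mask is None else mask
    return np.mean(seg[mask] == truth[mask])

# Example: per-tissue DSC for CSF/GM/WM labelled 1/2/3
# seg, truth = ...   (label volumes of identical shape)
# scores = {k: dice(seg, truth, k) for k in (1, 2, 3)}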
Fig. 3 depicts the segmentation accuracy obtained by those six algorithms. It shows that the proposed algorithm achieves better accuracy in delineating each brain tissue and in classifying the overall brain volume in most studies. Moreover, with increasing noise or INU levels, the accuracy of the proposed algorithm declines less than the accuracy of the other algorithms. This demonstrates that the proposed algorithm has an improved ability to resist the impact of noise and INU. The second test group consists of four studies with 40% INU and different levels of noise, ranging from 1% to 7%. Similarly, the segmentation accuracy obtained by those algorithms is depicted in Fig. 4. It should be noted that the proposed algorithm maintains good segmentation performance at high noise and INU levels.

4.2. Results on clinical data

Next, segmentation experiments were performed on 18 clinical brain MR studies obtained from the IBSR (Version 2.0) [51]. Each study has been spatially normalized into the Talairach orientation, and resliced into a dimension of 256 × 256 × 128 voxels and a voxel size of 1.0 mm × 1.0 mm × 1.5 mm. The ground truth tissue maps were generated by trained investigators using both image histograms and a semi-automated intensity contour mapping algorithm. Those images have also been processed by the CMA "autoseg" INU correction routines [51].

The HMRF–CSA algorithm was compared to the state-of-the-art GA–EM algorithm [6], the D–C algorithm [11], and the brain MR image segmentation routines in the SPM [16] and FSL [52] packages, which are very widely used in the neuroimaging community. In each algorithm, we used the default parameters suggested in the corresponding software package. The coronal, sagittal and transverse slices of the study "IBSR 14", the corresponding ground truth and the voxel classification results obtained by using five algorithms are shown in Fig. 5. The average accuracy is compared in Table 1. The accuracy of these five algorithms in all studies was measured and displayed in Fig. 6. It reveals that, compared to the other four widely used approaches, the proposed algorithm can substantially improve the accuracy of voxel classification in clinical brain MR images, especially improving the accuracy of GM and CSF delineation.

Table 1
Average voxel classification accuracy of five algorithms on 18 clinical T1-weighted brain MR images.

           FSL       D–C algorithm   SPM       GA–EM     Proposed
Overall    75.06%    75.02%          81.20%    74.97%    82.95%
GM         77.35%    73.80%          84.42%    77.90%    84.92%
WM         87.08%    88.41%          87.38%    87.23%    83.88%
CSF        16.19%    33.04%          20.31%    14.90%    55.45%
Fig. 3. Comparison of the segmentation accuracy of six algorithms in T1-weighted brain MR images with 20% INU and 1–7% noise.

Fig. 4. Comparison of the segmentation accuracy of five algorithms in T1-weighted brain MR images with 40% INU and 1–7% noise.

5. Discussion

5.1. Parameter settings

In the proposed HMRF–CSA algorithm, there are three groups of parameters that need approximation, covering the MCMC inference, the INU estimation, and the CSA-based parameter approximation. In INU approximation, the order of the orthogonal polynomials represents the compromise between approximation accuracy and computational complexity. Since the INU varies very slowly, 10 third-order polynomials are usually good enough for INU approximation. The weighting parameter ϑ given in Eq. (18) determines how much contribution the MRF prior or the intermediate MCMC segmentation result makes to the parameter estimation process. A large ϑ enables the MRF prior to play a more important role; on the other hand, a small ϑ puts more weight on the GMM prior.
Fig. 5. Comparison of the image segmentation results of five algorithms in clinical brain MR images: (a) T1-weighted brain MR study from IBSR (IBSR 14); (b) result of the SPM package; (c) result of the FSL package; (d) result of the D–C method; (e) result of the GA–EM algorithm; (f) result of the proposed HMRF–CSA algorithm; (g) ground truth tissue map.

Fig. 6. Comparison of the voxel classification accuracy of five algorithms on 18 clinical T1-weighted brain MR images (overall accuracy and DSC of GM, WM and CSF for the D–C algorithm, FSL, GA–EM, HMRF–CSA and SPM).
To evaluate the impact of this parameter on the segmentation accuracy, we applied our algorithm with different ϑ to the 110th transverse slice of the clinical study "IBSR 03". The variation of the obtained segmentation accuracy over the values of ϑ is depicted in Fig. 7. The segmentation accuracy achieves its highest value when ϑ = 0.5. Hence, we empirically set ϑ = 0.5. It should be noted that the results may vary with different images.

Fig. 7. Variation of the segmentation accuracy over the weighting parameter ϑ in the clinical brain MR study "IBSR 03".

Like other evolutionary techniques, the CSA intrinsically uses many parameters, which have been comprehensively discussed in [35]. In this study, we empirically set the parameters used in the CSA procedure as follows: population size Np = 100, memory set size Nm = 0.3Np, number of selected antibodies Ns = 0.5Np, cloning ratio constant β = 0.5, hypermutation probability p_hm = 0.8, receptor editing probability p_re = 0.1 and maximum generation Nt = 20.

5.2. Computational complexity

The performance of a computer program depends on many factors, including the computer's capability, data representation, programming language, and code implementation. In this section, we evaluate the general computational complexity of the HMRF–CSA algorithm. As mentioned before, the proposed algorithm sequentially performs the MCMC inference, bias field estimation and CSA based parameter estimation in each iteration. Given an image with N voxels, the computational complexity of the MCMC inference is O(N). In our implementation, the bias field estimation only requires a few matrix calculations. According to Ref. [35], the complexity of the CSA based parameter estimation is O(Np + Nc K), where Np denotes the population size and Nc denotes the total number of clones. Since the proposed iterative segmentation algorithm stops after no more than T_max iterations, it still has a linear overall computational complexity O(N + Np + Nc K). It should be noted that a major disadvantage of the MCMC method is that it requires a large number of simulation draws. However, since a good starting state of the MCMC method can be given by the CSA, our algorithm does not need a large number of simulation draws. Similarly, the output of the MCMC method enables the CSA to start with a good initialization and to mature after limited generations. Therefore, although our HMRF–CSA algorithm involves the time-consuming MCMC and CSA procedures, its computational complexity is merely slightly higher than that of traditional segmentation approaches. Generally, the trade-off between the segmentation accuracy and the computational time cost is to be considered when selecting the number of simulation draws in the MCMC, the maximum generations of the CSA and the maximum number of iterations of the proposed algorithm.

6. Conclusion

This paper proposes the HMRF–CSA algorithm for brain MR image segmentation, which incorporates the CSA and MCMC into the HMRF model based segmentation practice. The proposed algorithm has been demonstrated to be capable of solving the HMRF model estimation problem for image segmentation purposes. Our comparative experiments in both simulated and clinical brain MR studies show that the proposed algorithm can achieve better segmentation accuracy on average than the GA–EM algorithm, the D–C algorithm, and the segmentation routines in the popular SPM and FSL packages.

Acknowledgements

This work was supported by the Australian Research Council (ARC) grants. The simulated brain MR data sets and the ground truth were provided by the McConnell Brain Imaging Center of the Montreal Neurological Institute at McGill University and are available at: http://mouldy.bic.mni.mcgill.ca/brainweb/. The clinical MR brain data sets and their manual segmentations were provided by the Center for Morphometric Analysis at Massachusetts General Hospital and are available at: http://www.cma.mgh.harvard.edu/ibsr/.

References

[1] D.H. Paulus, H. Braun, B. Aklan, H.H. Quick, Simultaneous PET/MR imaging: MR-based attenuation correction of local radiofrequency surface coils, Medical Physics 39 (2012) 4306–4315.
[2] G. Delso, S. Fürst, B. Jakoby, R. Ladebeck, C. Ganter, S.G. Nekolla, M. Schwaiger, S.I. Ziegler, Performance measurements of the Siemens mMR integrated whole-body PET/MR scanner, Journal of Nuclear Medicine 52 (2011) 1914–1922.
[3] S.H. Keller, S. Holm, A.E. Hansen, B. Sattler, F. Andersen, T.L. Klausen, L. Højgaard, A. Kjær, T. Beyer, Image artifacts from MR-based attenuation correction in clinical, whole-body PET/MRI, Magnetic Resonance Materials in Physics, Biology and Medicine 26 (1) (2012) 173–181.
[4] D. Gutierrez, M.-L. Montandon, F. Assal, M. Allaoua, O. Ratib, K.-O. Lövblad, H. Zaidi, Anatomically guided voxel-based partial volume effect correction in brain PET: impact of MRI segmentation, Computerized Medical Imaging and Graphics 36 (2012) 610–619.
[5] G. Tian, Y. Xia, Y. Zhang, D. Feng, Hybrid genetic and variational expectation-maximization algorithm for Gaussian-mixture-model-based brain MR image segmentation, IEEE Transactions on Information Technology in Biomedicine 15 (2011) 373–380.
[6] J. Tohka, E. Krestyannikov, I.D. Dinov, A.M. Graham, D.W. Shattuck, U. Ruotsalainen, A.W. Toga, Genetic algorithms for finite mixture model based voxel classification in neuroimaging, IEEE Transactions on Medical Imaging 26 (2007) 696–711.
[7] T. Zhang, Y. Xia, D.D. Feng, Clonal selection algorithm for Gaussian mixture model based segmentation of 3D brain MR images, in: Intelligent Science and Intelligent Data Engineering, Springer-Verlag, Berlin Heidelberg, 2012, pp. 295–302.
[8] K. Van Leemput, F. Maes, D. Vandermeulen, P. Suetens, Automated model-based tissue classification of MR images of the brain, IEEE Transactions on Medical Imaging 18 (1999) 897–908.
[9] D. Feng, L. Tierney, V. Magnotta, MRI tissue classification using high-resolution Bayesian hidden Markov normal mixture models, Journal of the American Statistical Association 107 (2012) 102–119.
[10] T. Zhang, Y. Xia, D. Feng, An evolutionary HMRF approach to brain MR image segmentation using clonal selection algorithm, in: 8th IFAC Symposium on Biological and Medical Systems, Budapest, Hungary, 2012.
[11] T. Zhang, Y. Xia, D.D. Feng, A deformable cosegmentation algorithm for brain MR images, in: Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE, 2012, pp. 3215–3218.
[12] J. Ashburner, K.J. Friston, Unified segmentation, NeuroImage 26 (2005) 839–851.
[13] M. Ibrahim, N. John, M. Kabuka, A. Younis, Hidden Markov models-based 3D MRI brain segmentation, Image and Vision Computing 24 (2006) 1065–1079.
[14] J. Tohka, I.D. Dinov, D.W. Shattuck, A.W. Toga, Brain MRI tissue classification based on local Markov random fields, Magnetic Resonance Imaging 28 (2010) 557–573.
[15] Y. Zhang, M. Brady, S. Smith, Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm, IEEE Transactions on Medical Imaging 20 (2001) 45–57.
[16] Statistical Parametric Mapping, http://www.fil.ion.ucl.ac.uk/spm/software/spm8/
[17] C.M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, 2006.
[18] H. Rue, L. Held, Gaussian Markov Random Fields: Theory and Applications, Chapman & Hall/CRC, Boca Raton, FL, USA, 2005.
[19] S. Geman, D. Geman, Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-6 (1984) 721–741.
[20] Y. Xia, S. Eberl, L. Wen, M. Fulham, D.D. Feng, Dual-modality brain PET-CT image segmentation based on adaptive use of functional and anatomical information, Computerized Medical Imaging and Graphics 36 (2012) 47–53.
[21] Y. Xia, D. Feng, R. Zhao, Adaptive segmentation of textured images by using the coupled Markov random field model, IEEE Transactions on Image Processing 15 (2006) 3559–3566.
[22] J. Besag, On the statistical analysis of dirty pictures, Journal of the Royal Statistical Society. Series B (Methodological) 48 (3) (1986) 259–302.
[23] B. Scherrer, M. Dojat, F. Forbes, C. Garbay, Agentification of Markov model-based segmentation: application to magnetic resonance brain scans, Artificial Intelligence in Medicine 46 (2009) 81–95.
[24] Z. Liang, S. Wang, An EM approach to MAP solution of segmenting tissue mixtures: a numerical analysis, IEEE Transactions on Medical Imaging 28 (2009) 297–310.
[25] J. Besag, C. Kooperberg, On conditional and intrinsic autoregressions, Biometrika 82 (1995) 733–746.
[26] Z. Tu, S.C. Zhu, Image segmentation by data-driven Markov chain Monte Carlo, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (2002) 657–673.
[27] K. Kayabol, E.E. Kuruoglu, B. Sankur, Bayesian separation of images modeled with MRFs using MCMC, IEEE Transactions on Image Processing 18 (2009) 982–994.
[28] H. Kunsch, S. Geman, A. Kehagias, Hidden Markov random fields, The Annals of Applied Probability 5 (1995) 577–602.
[29] S. M'hiri, L. Cammoun, F. Ghorbel, Speeding up HMRF EM algorithms for fast unsupervised image segmentation by Bootstrap resampling: application to the brain tissue segmentation, Signal Processing 87 (2007) 2544–2559.
[30] J. Nie, Z. Xue, T. Liu, G.S. Young, K. Setayesh, L. Guo, S.T.C. Wong, Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov random field, Computerized Medical Imaging and Graphics 33 (2009) 431–441.
[31] F. Destrempes, M. Mignotte, J.F. Angers, A stochastic method for Bayesian estimation of hidden Markov random field models with application to a color model, IEEE Transactions on Image Processing 14 (2005) 1096–1108.
[32] F. Forbes, G. Fort, Combining Monte Carlo and mean-field-like methods for inference in hidden Markov random fields, IEEE Transactions on Image Processing 16 (2007) 824–837.
[33] A.R. Ferreira da Silva, Bayesian mixture models of variable dimension for image segmentation, Computer Methods and Programs in Biomedicine 94 (2009) 1–14.
[34] B. Haktanirlar Ulutas, S. Kulturel-Konak, A review of clonal selection algorithm and its applications, Artificial Intelligence Review 36 (2011) 117–138.
[35] L.N. De Castro, F.J. Von Zuben, Learning and optimization using the clonal selection principle, IEEE Transactions on Evolutionary Computation 6 (2002) 239–251.
[36] L.N.D. Castro, F.J.V. Zuben, Clonal selection algorithm with engineering applications, in: GECCO 2000, Workshop on Artificial Immune Systems and their Applications, Las Vegas, USA, 2000, pp. 36–37.
[37] J. Tohka, GAMIXTURE, http://www.1.cs.tut.fi/~jupeto/gamixture.html.
[38] FMRIB Software Library, http://fsl.fmrib.ox.ac.uk/fsl/.
[39] M. Styner, Parametric estimate of intensity inhomogeneities applied to MRI, IEEE Transactions on Medical Imaging 19 (2000) 153–165.
[40] M.N. Ahmed, S.M. Yamany, N. Mohamed, A.A. Farag, T. Moriarty, A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data, IEEE Transactions on Medical Imaging 21 (2002) 193–199.
[41] K. Van Leemput, F. Maes, D. Vandermeulen, P. Suetens, Automated model-based bias field correction of MR images of the brain, IEEE Transactions on Medical Imaging 18 (1999) 885–896.
[42] J.D. Gispert, S. Reig, J. Pascau, J.J. Vaquero, P. García-Barreno, M. Desco, Method for bias field correction of brain T1-weighted magnetic images minimizing segmentation error, Human Brain Mapping 22 (2004) 133–144.
[43] L. Chunming, H. Rui, D. Zhaohua, J.C. Gatenby, D.N. Metaxas, J.C. Gore, A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI, IEEE Transactions on Image Processing 20 (2011) 2007–2016.
[44] E.B. Lewis, N.C. Fox, Correction of differential intensity inhomogeneity in longitudinal MR images, NeuroImage 23 (2004) 75–83.
[45] U. Vovk, F. Pernuš, B. Likar, A review of methods for correction of intensity inhomogeneity in MRI, IEEE Transactions on Medical Imaging 26 (2007) 405–421.
[46] Z. Ji, Q. Sun, Y. Xia, Q. Chen, D. Xia, D. Feng, Generalized rough fuzzy c-means algorithm for brain MR image segmentation, Computer Methods and Programs in Biomedicine 108 (2012) 644–655.
[47] M.J.D. Powell, Approximation Theory and Methods, Cambridge University Press, Cambridge, 1981.
[48] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1996.
[49] C.A. Cocosco, V. Kollokian, R.K.-S. Kwan, A.C. Evans, BrainWeb: online interface to a 3D MRI simulated brain database, NeuroImage 5 (1997) 425.
[50] A. Bharatha, M. Hirose, N. Hata, S.K. Warfield, M. Ferrant, K.H. Zou, E. Suarez-Santana, J. Ruiz-Alzola, A.D. Amico, R.A. Cormack, R. Kikinis, F.A. Jolesz, C.M. Tempany, Evaluation of three-dimensional finite element-based deformable registration of pre- and intra-operative prostate imaging, Medical Physics 28 (2001) 2551–2560.
[51] The Internet Brain Segmentation Repository, http://www.cma.mgh.harvard.edu/ibsr/.
[52] S.M. Smith, M. Jenkinson, M.W. Woolrich, C.F. Beckmann, T.E.J. Behrens, H. Johansen-Berg, P.R. Bannister, M. De Luca, I. Drobnjak, D.E. Flitney, R.K. Niazy, J. Saunders, J. Vickers, Y. Zhang, N. De Stefano, J.M. Brady, P.M. Matthews, Advances in functional and structural MR image analysis and implementation as FSL, NeuroImage 23 (2004) S208–S219.
