          Three Dimensional (3D) Marker-Driven Watershed
       Transform for Volume and Intensity Measurement from
             Medical Images: Preliminary Results

                              Desok Kim and Yong-su Chae

     School of Engineering, Information and Communications University, Daejeon, Korea
                        e-mail: {kimdesok, yschae}@icu.ac.kr



      Abstract. Quantitative analysis in molecular imaging can detect abnormal
      functions of living tissue. Functional abnormality in gene expression or
      metabolic rates can be represented as altered probe intensity. The accuracy of
      volume and intensity measurement in abnormal tissue mainly relies on
      segmentation techniques. Thus, segmentation is a critical technique in
      quantitative analysis. We developed a three dimensional (3D) marker-driven
      watershed transform for this purpose. To reduce the discretization error
      to below 5%, the size criteria that specify the minimum measurable volume
      were investigated using digital spheres. When applied to SPECT images, our
      segmentation technique produced tumor volumes and intensities that were
      highly correlated with the ground truth segmentation (ρ > 0.93) and at
      least 89% accurate. The developed method offered simpler user interaction
      and higher accuracy than a 2D approach. Furthermore, it computed faster
      than a segmentation technique based on marker-driven gradient modification.
      Keywords: image segmentation, mathematical morphology, marker-driven
      watershed segmentation, 3D measurement.



1 Introduction

3D segmentation techniques are applied to many kinds of biomedical images, for
example, to detect abnormal tissue such as tumors and to visualize or measure
body organs such as the brain and heart. Quantitative analysis in molecular
imaging such as PET, SPECT, and optical imaging enables us to objectively
investigate functional abnormalities of diseased tissue in live cells or small
animals. Abnormal gene expression and metabolic rates of target molecules are
represented as altered probe intensity. Recently, advanced fluorescent and
radioisotopic probes for molecular imaging have been developed so that their
sensitivity and specificity can be quantitatively studied [1].
Watershed is a term from topography that refers to the division of a
topographic surface into meaningful regions. Raindrops falling on one of these
regions flow to the same minimum point; such a region is called a catchment
basin. The topographic surface consists of many catchment basins, and the lines
separating them are called watersheds (or watershed lines). When this analogy
is implemented with mathematical morphology, it is called the watershed
transform. In the watershed transform, the gradient image represents the
altitude of the topographic surface. The watershed transform was first proposed
by Digabel and Lantuéjoul [2]. Later, it was applied to grayscale images by
Beucher and Lantuéjoul [3]. Its speed and accuracy were greatly improved by
Vincent and Soille using an immersion-based fast watershed transform [4].
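The flooding analogy can be sketched in one dimension (an illustrative toy under simplifying assumptions, not the queue-based algorithm of [4]): pixels are visited from lowest to highest intensity, each unflooded pixel joins the catchment basin of an already-flooded neighbor, and a pixel reached by two different basins becomes a watershed point.

```python
WSHED = 0  # label reserved for watershed pixels

def watershed_1d(signal):
    """Toy immersion-based watershed on a 1D signal."""
    n = len(signal)
    labels = [None] * n
    # label local minima as seeds of catchment basins
    next_label = 1
    for i in range(n):
        left = signal[i - 1] if i > 0 else float("inf")
        right = signal[i + 1] if i < n - 1 else float("inf")
        if signal[i] <= left and signal[i] <= right:
            labels[i] = next_label
            next_label += 1
    # immersion: visit pixels from lowest to highest intensity
    for i in sorted(range(n), key=lambda i: signal[i]):
        if labels[i] is not None:
            continue
        neigh = {labels[j] for j in (i - 1, i + 1)
                 if 0 <= j < n and labels[j] not in (None, WSHED)}
        if len(neigh) == 1:
            labels[i] = neigh.pop()   # water from one basin reaches pixel i
        elif len(neigh) > 1:
            labels[i] = WSHED         # two basins meet: watershed point

    return labels

# two valleys separated by a peak at index 3
print(watershed_1d([3, 1, 2, 5, 2, 1, 3]))  # → [1, 1, 1, 0, 2, 2, 2]
```

Here label 0 marks the watershed pixel separating the two catchment basins.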
Numerous local minima in random noisy structures cause over-segmentation by the
basic watershed transform. To avoid this problem, previous studies suggested
multi-resolution filtering [5], wavelet analysis, adaptive anisotropic
diffusion filtering, and marker-driven image reconstruction [6], [7], [12],
[14]. In this study, we developed a new 3D marker-driven watershed transform
based on the fast watershed transform to measure volume and probe intensity
from 3D medical images. By applying our segmentation technique to digital
sphere images of different sizes, we estimated the minimum volume that reduced
the discretization error to less than 5%. The segmented image, the volume, and
the probe intensity were compared to manually segmented ground truth to obtain
the accuracy.


2 Methods

2.1 Three Dimensional Marker-driven Watershed Transform
Based on the fast watershed algorithm [4], the 3D fast watershed transform was
developed by simply adding an indexing mechanism that retrieves and manipulates
neighboring pixels along the z-axis. The neighborhood operation was performed
on 6 neighboring pixels instead of 26 to speed up the immersion process. A
marker-driven approach was then implemented by allowing a user to interactively
place a single background marker and multiple object markers onto the gradient
image. These markers were imposed onto the 3D gradient image as its only minima
and given initial labels. In the immersion analogy of the basic transform, the
water emerging from each minimum is labeled uniquely; in our marker-driven
approach, only the water emerging from an interactively imposed marker is
uniquely labeled. The water emerging from other, irrelevant minima is not
labeled and is instead designated as unlabeled water.
During the immersion process, a labeling event occurs when labeled water meets
unlabeled water (Fig. 1). The water in each catchment basin is labeled at the
gradient intensity level Xh (Fig. 1.a). The two catchment basins imposed by the
object markers are labeled "1" and "2"; the background is labeled "0"; the
catchment basins that contain irrelevant minima are labeled "U." At the next
level, Xh+1, the water initially labeled "1" encounters the water emerging from
one of the unlabeled catchment basins at the minimum of the provisional
watershed (marked "wp" in Fig. 1.b) between the two catchment basins. The
unlabeled water then receives the label "1" (Fig. 1.b). At Xh+2, another
catchment basin merges with catchment basin 1 (Fig. 1.c). Finally, at Xh+3, the
provisional watershed can be found between the labeled catchment basins and the
background (Fig. 1.d).
Fig. 1. Illustration of the immersion process and labeling events. (a) At Xh, object labels 1 and
2, background label 0, and unlabeled catchment basins U are shown; (b) at Xh+1, the label
propagates into the lower left catchment basin through the minimum of the provisional
watershed, wp; (c) at Xh+2, the middle catchment basin is labeled 1; (d) at Xh+3, the provisional
watershed (shown as a thick solid line) along with the originally imposed markers (shown as
circles).
However, the propagation of labels may become arbitrary when a catchment basin
possesses more than one minimum on the provisional watershed (marked with an
arrowhead in Fig. 2.a). Since label propagation is performed in raster scanning
order, the labeled water whose position takes precedence in the raster scanning
order always meets the unlabeled water first. Thus, the labeling is essentially
decided by the minimum whose location takes precedence in the raster scanning
order. To avoid this dependence of the watershed on the marker position, the
gradient image can be slightly blurred so that each minimum becomes unique in
terms of its pixel intensity [8].
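The labeling event and its order dependence can be sketched in one dimension with union-find (hypothetical toy code, not the authors' implementation): pools of flooded pixels merge as the water rises, an unlabeled pool inherits the label of the labeled water it meets, and two pools carrying different labels stay separate.

```python
def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def marker_watershed_1d(signal, markers):
    """Toy 1D marker-driven watershed: `markers` maps pixel index -> label.
    Only marked water carries a label; unlabeled pools inherit the label of
    the labeled water that reaches them first."""
    n = len(signal)
    parent = list(range(n))
    label = dict(markers)        # root pixel -> marker label
    flooded = [False] * n
    for i in sorted(range(n), key=lambda i: signal[i]):
        flooded[i] = True
        for j in (i - 1, i + 1):
            if 0 <= j < n and flooded[j]:
                ri, rj = find(parent, i), find(parent, j)
                if ri == rj:
                    continue
                li, lj = label.get(ri), label.get(rj)
                if li is not None and lj is not None and li != lj:
                    continue     # two marked basins meet: keep them separate
                parent[rj] = ri  # merge pools
                if li is None and lj is not None:
                    label[ri] = lj   # labeling event: unlabeled water labeled
    return [label.get(find(parent, i)) for i in range(n)]

# markers on the two minima of [3, 1, 2, 5, 2, 1, 3]
print(marker_watershed_1d([3, 1, 2, 5, 2, 1, 3], {1: 1, 5: 2}))
# → [1, 1, 1, 1, 2, 2, 2]
```

Note that the peak pixel joins whichever labeled pool reaches it first in the processing order, which is exactly the dependence described above; the slight Gaussian blurring of the gradient alleviates it.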

Fig. 2. Illustration of different watersheds showing the dependence on label propagation order.
(a) Labeling performed in raster scanning order; (b) labeling performed in the reverse order.
To validate our marker-driven watershed segmentation, 3D segmentation was
performed on a cubic model that contained a brighter inner cube (Fig. 3.a). The
3D gradient image of the cubic model was calculated first (Fig. 3.b). Without
imposing markers, basic watershed segmentation resulted in over-segmentation
(Fig. 3.c). An object marker and a background marker were then imposed onto the
gradient image (Fig. 3.b). When the immersion process stopped, a watershed
point was located at a background pixel if any of its 26 neighboring pixels had
a label different from the background: the marker-driven 3D segmentation
produced a sensible result (Fig. 3.d).


Fig. 3. Illustration of 3D watershed segmentation of a cubic model with a small cube at the
center. (a) 3D image; (b) 3D gradient image with an object marker and a background marker; (c)
basic 3D watershed showing over-segmentation; (d) marker-driven 3D watershed resulting in a
sensible segmentation.


2.2 Volume Measurement of 3D Object Models
A digital sphere was modeled by drawing circles with the corresponding radii on
equally spaced z planes using Bresenham's arc algorithm [9], [10]. The total
number of pixels in the digital sphere was first counted to produce a rough
volume of the sphere. The calculated radius of the digital sphere was obtained
from this rough volume and used to generate the calculated volume. The measured
volume of the digital sphere was obtained by applying the 3D gradient operator
to the digital sphere image and then applying the marker-driven 3D watershed
segmentation. Finally, all data were translated into millimeter units, assuming
a voxel dimension of 0.5 × 0.5 × 0.5 mm. The relative errors between the two
kinds of volumetric data were investigated for many differently sized digital
spheres. The relative errors were also calculated for digital spheres with
different numbers of z planes, simulating different z-axis resolutions.
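The discretization behavior can be reproduced with a simple voxel-counting stand-in (a hypothetical sketch, not the Bresenham-circle implementation above): count 0.5 mm voxels whose centers fall inside an ideal sphere, then invert V = (4/3)πr³ to recover the calculated radius from a volume.

```python
from math import pi

def digital_sphere_volume(r, voxel=0.5):
    """Volume (mm^3) of a voxelized sphere of radius r mm: count voxels
    whose centers lie inside the sphere on a grid with `voxel` mm spacing."""
    n = int(r / voxel) + 1
    count = sum(1
                for i in range(-n, n + 1)
                for j in range(-n, n + 1)
                for k in range(-n, n + 1)
                if (i * i + j * j + k * k) * voxel * voxel <= r * r)
    return count * voxel ** 3

def calculated_radius(volume):
    """Radius of the ideal sphere with the given volume (inverts V = 4/3*pi*r^3)."""
    return (3.0 * volume / (4.0 * pi)) ** (1.0 / 3.0)

# a rough volume of 129.4 mm^3 back-calculates to a radius of about 3.1 mm
print(round(calculated_radius(129.4), 1))  # → 3.1
```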
2.3 Volume and Intensity Measurement of SPECT Brain Images
The minimum radii of the digital sphere for different z-axis resolutions were
determined in Section 2.2. Based on these size criteria, a set of seven SPECT
images was obtained from the Whole Brain Atlas [11]. Brain tumors were
visualized by high thallium uptake and were larger than 6 mm in average radius.
The in-plane (x, y) resolution was 0.5 mm and the z-axis resolution was 1.5 mm.
Ground truth segmentation images were made by manually following the boundary
of the tumor in pseudo-colored 2D images. The 3D grayscale images consisted of
20, 24, or 28 two-dimensional images of 128×128 pixels with grayscale intensity
ranging from 0 to 255 (byte data) (Fig. 4.a). The 3D images were first filtered
by a 3D closing operation and a 3D opening operation using a 3×3×3 cubic
structuring element. This morphological filtering filled small gaps and
suppressed irrelevant noise in the 3D images. In particular, the 3D closing
operation helped to define the 3D gradient more accurately by increasing the
spatial correlation of the intensity between successive 2D images (thus
suppressing abnormally high gradient components along the z axis). The 3D
gradient image was obtained from the filtered image, f, using the following 3D
gradient operator, grad3d (Fig. 4.b):

      grad3d(f) = max( f ⊕ Sx − f ⊖ Sx , f ⊕ Sy − f ⊖ Sy , f ⊕ Sz − f ⊖ Sz )      (1)

where ⊕ and ⊖ denote the dilation and erosion operators, respectively; Si is a
linear structuring element three pixels wide along the ith axis; and max is the
maximum operator. The 3D gradient image was then blurred with a Gaussian
function with a radius of 1 pixel to reduce the dependence on the marker
position (image not shown).
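Because each Si is a 3-pixel linear element, Eq. (1) can be sketched with numpy alone (assuming numpy is available; scipy.ndimage's grey morphology would be the more standard route): dilation and erosion along one axis are just running maxima and minima over three shifted copies.

```python
import numpy as np

def dilate_erode_1d(f, axis):
    """Grey dilation/erosion along one axis with a 3-pixel linear
    structuring element, using edge-replicated borders (a numpy sketch)."""
    pad = [(1, 1) if a == axis else (0, 0) for a in range(f.ndim)]
    fp = np.pad(f, pad, mode="edge")
    n = f.shape[axis]
    windows = [fp.take(range(k, k + n), axis=axis) for k in range(3)]
    return np.maximum.reduce(windows), np.minimum.reduce(windows)

def grad3d(f):
    """Eq. (1): per-axis morphological gradients combined with max.
    Use a signed dtype so the subtraction cannot wrap around."""
    g = np.zeros_like(f)
    for axis in range(3):
        dil, ero = dilate_erode_1d(f, axis)
        g = np.maximum(g, dil - ero)
    return g

# a single bright voxel: the gradient responds at the voxel and its 6 neighbors
f = np.zeros((3, 3, 3), dtype=int)
f[1, 1, 1] = 1
print(int(grad3d(f).sum()))  # → 7
```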
The square background marker was manually imposed onto the edge of one of the
2D images (Fig. 4.b). The object marker was imposed onto the object in the same
2D image where the background marker had been placed (Fig. 4.b). The
marker-driven 3D watershed transform was then performed, and the resulting 3D
watershed was accurately localized to the boundary of the tumor area, which
showed increased probe intensity compared to the rest of the brain (Fig. 4.c).
The resultant watershed was often many pixels thick, so only the watershed
pixels one pixel distant from the background were selected as final watershed
pixels (Fig. 4.c and Fig. 4.d). The volume and the total intensity of the
segmented tumors were obtained by summing up the pixels and their intensity
values, respectively. The measured values were compared to those measured from
the ground truth segmentation (Fig. 4.e). For comparison, 2D marker-driven
watershed segmentation was also performed on the 3D gradient image, since its
computation took less time. However, this segmentation often resulted in an
inaccurate contouring of the tumor, mainly due to the lack of z-axis gradient
information (Fig. 4.f).
To estimate the dependence of the final segmentation on the marker position,
the segmentation was repeated 11 times with the object marker placed randomly
inside the object. The relative error between the maximum and minimum measured
volumes was less than 2.0% (data not shown), indicating that the current 3D
segmentation method with manually imposed markers reduced the inherent
sensitivity of the segmentation to the location of the object markers.
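This sensitivity check can be written as a small harness (illustrative code; `segment_volume` is a hypothetical stand-in for the full marker-driven 3D watershed pipeline):

```python
import random

def volume_spread(segment_volume, interior_points, trials=11, seed=0):
    """Relative spread (%) between the maximum and minimum measured volumes
    over repeated segmentations with randomly placed object markers."""
    rng = random.Random(seed)
    vols = [segment_volume(rng.choice(interior_points)) for _ in range(trials)]
    return (max(vols) - min(vols)) / min(vols) * 100.0

# toy stand-in whose measured volume barely depends on the marker position
markers = [(x, 40, 10) for x in range(30, 50)]
print(round(volume_spread(lambda p: 1000 + p[0] % 3, markers), 2))
```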
Fig. 4. The sequence of marker-driven watershed transform. (a) 3D gray scale image; (b) 3D
gradient image with a background marker (square bracket) and an object marker (solid circle);
(c) thick segmentation result; (d) final segmentation result; (e) ground truth segmentation; (f)
segmentation results of 2D marker-driven watershed transform.
3 Results

3.1 Size Criteria of 3D Object Models
Digital spheres were processed by the developed segmentation method, and the
pixels in the segmented spheres were counted and translated into mm3 to give
the measured volume. The relative errors between the calculated volume and the
measured volume are summarized in Table 1. At a z resolution of 0.5 mm, the
minimum volume that showed less than 5% error was 129.4 mm3, at a radius of 3
mm. At a z resolution of 1.0 mm, an error of less than 5% first occurred at a
minimum volume of 302.8 mm3, at a radius of 4 mm. Finally, at a z resolution of
1.5 mm, the minimum volume was 951.4 mm3, at a radius of 6 mm.

Table 1. The % errors between the measured volume and the calculated volume of digital
spheres.


Resolution of   Radius of       Calculated    Counted        Calculated
z-axis (mm)     Bresenham's     Radius (mm)   Pixels (mm3)   Volume (mm3)   % Error
                circle (mm)

    0.5             1               1.2            6.1            7.9        22.37
                    2               2.1           38.4           41.3         7.17
                    3               3.1          129.4          129.7         0.24
                    4               4.1          296.9          298.3         0.48
                    5               5.2          589.9          590.7         0.14
    1.0             2               2.1           33.3           43.4        23.36
                    3               3.1          118.8          129.7         8.43
                    4               4.1          302.8          298.3         1.49
                    5               5.2          571.3          590.7         3.29
                    6               6.2          972.8        1,000.7         2.80
    1.5             3               3.1          121.1          134.7        10.06
                    4.5             4.6          400.9          429.3         6.62
                    6               6.2          951.4        1,000.7         4.93
                    7               7.2        1,524.4        1,557.0         2.09
                    8               8.2        2,238.8        2,275.9         1.63



3.2 Accuracy of Volume and Intensity Measurement of 3D SPECT Images

To assess the accuracy of the segmentation results, the volume and total
intensity were compared to those obtained from the ground truth segmentation.
The volume, total intensity, and their % errors measured in SPECT images are
summarized in Table 2. The maximum % errors of the volume and total intensity
measurements were 10.6% and 8.7%, respectively. Thus, the volume measurement
was at least 89% accurate (the Dice coefficient for the volume measurement was
81.2% on average). The volumes and total intensities from the two methods were
not significantly different (Mann-Whitney-Wilcoxon test, p > 0.1). The two
methods produced highly correlated results (Pearson's correlation coefficient =
0.94 for the volume and 0.99 for the probe intensity, p < 0.01). The 2D method
produced much smaller volumes and total intensities, resulting in more than
50.0% error in the volume measurement (data not shown), mainly due to
incomplete segmentation of the tumors (Fig. 4.f).
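The per-image error figures follow directly from the Table 2 entries, and the Dice overlap from the two masks, with two small helpers (illustrative code, not the authors'; masks are represented here as sets of voxel coordinates):

```python
def volume_pct_error(v_manual, v_auto):
    """Percent error of an automatic measurement against the manual one."""
    return abs(v_manual - v_auto) / v_manual * 100.0

def dice(a, b):
    """Dice overlap of two binary masks given as sets of voxel coordinates."""
    return 2.0 * len(a & b) / (len(a) + len(b))

# Table 2, image 1: 1,496.3 mm^3 manual vs 1,399.9 mm^3 watershed
print(round(volume_pct_error(1496.3, 1399.9), 1))  # → 6.4
```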

The 3D marker-driven watershed segmentation took 39.9 sec, whereas the 2D
watershed segmentation took 6.7 sec for 128×128×28 images in 8-bit grayscale.
The computation time was measured on a personal computer with a 3.0 GHz Intel®
Pentium® 4 processor and 1 GB of memory running the Windows XP operating system.
Table 2. The comparison of the % errors of the volume and total intensity measured by the
manual method and the marker-driven 3D watershed transform.

 Image           Volume (mm3)                        Total Intensity
  No.      Manual     3D watershed  % Error    Manual        3D watershed  % Error

   1       1,496.3      1,399.9       6.4        809,916        783,093      3.3
   2       1,241.6      1,295.3       4.3        844,038        808,509      4.2
   3       2,041.1      2,154.8       5.5      1,791,894      1,670,505      6.7
   4       2,166.8      2,197.5       1.4      3,130,800      2,977,221      4.9
   5       2,237.6      2,000.3      10.6      2,273,496      2,075,361      8.7
   6       1,717.9      1,870.9       8.9      2,118,189      2,135,214      0.8
   7       1,497.8      1,404.0       6.2      1,948,716      1,787,643      8.2


4 Conclusions and Discussion

In this study, we estimated the size criteria for reliable volume measurement
(within 5% error) at several z-axis resolutions using digital sphere images. We
obtained SPECT images containing brain tumors that met the size criteria and
applied the developed 3D marker-driven watershed segmentation to measure the
volume and total probe intensity of the brain tumors. The measurement accuracy
for the volume and probe intensity was approximately 89%, comparable to results
from previous studies [12], [13], [14]. Compared to the 2D approach, the 3D
approach offered simpler user interaction for marking the object and resulted
in more accurate segmentation, since it took advantage of z-axis gradient
information. The developed 3D marker-driven watershed segmentation also
completed much faster than watershed segmentation based on marker-driven
gradient reconstruction [6], which must perform time-consuming iterative
morphological operations such as ultimate erosion [5].
Acknowledgments. We are grateful to Drs. Keith A. Johnson and J. Alex Becker for
providing the SPECT images. This work was financially supported by the Ministry of Science
and Technology, Korea. Correspondence should be directed to Desok Kim, PhD, Information
and Communications University, 103-6 Munji-dong, Yusung-gu, Daejeon, Korea.


References
1. Kim, K., Lee, M., Park, H., Kim, J.H., Kim, S., Chung, H., Choi, K., Kim, I.S., Seong, B.L.,
    Kwon, I.C.: Cell-Permeable and Biocompatible Polymeric Nanoparticles for Apoptosis
    Imaging. Journal of the American Chemical Society, Vol. 128, Mar. (2006) 3490-3491
2. Digabel, H., Lantuéjoul, C.: Iterative Algorithms. in: J.L. Chermant (Ed.), Proc. 2nd
    European Symp. on Quantitative Anal. Microstructures in Material Science, Biology and
    Medicine, Stuttgart, West Germany, October (1977) 85-99
3. Beucher, S., Lantuéjoul, C.: Use of Watersheds in Contour Detection. in Proc. Int.
    Workshop on Image Processing, Real-Time Edge and Motion Detection/Estimation,
    Rennes, France, Sept. (1979) 17-21
4. Vincent, L., Soille, P.: Watersheds in Digital Spaces: An Efficient Algorithm Based on
    Immersion Simulations. IEEE Trans. Patt. Anal. Mach. Int., Vol. 13, June (1991) 583–598
5. Kim, D.: Multiresolutional Watershed Segmentation with User-Guided Grouping. The
    International Society for Optical Engineering (SPIE), Medical Imaging 1998, San Diego,
    CA, February (1998) 1087-1095
6. Serra, J., Vincent, L.: An Overview of Morphological Filtering. Circuits Systems Signal
    Process. Vol. 11, (1992) 47–108
7. Roerdink, J.B.T.M., Meijster, A.: The Watershed Transform: Definitions, Algorithms and
    Parallelization. Fundamenta Informaticae, Vol. 41. (2000) 187-228
8. Yim, P.J., Kim, D., Lucas, C.: A High-Resolution Four-Dimensional Surface
    Reconstruction of the Right Heart and Pulmonary Arteries. The International Society for
    Optical Engineering (SPIE), Medical Imaging 1998, San Diego, CA, February (1998) 726-
    738
9. Bresenham, J.: A Linear Algorithm for Incremental Display of Circular Arcs.
    Communications of the ACM, Vol. 20, February (1977) 100-106
10. Kennedy, J.: A Fast Bresenham Type Algorithm for Drawing Circles. Available at
    http://homepage.smc.edu/kennedy_john/BCIRCLE.PDF
11. The Whole Brain Atlas. Available at http://www.med.harvard.edu/AANLIB/cases/case1/case.html
12. Sijbers, J., Scheunders, P., Verhoye, M., van der Linden, A., van Dyck, D., Raman, E. :
    Watershed-Based Segmentation of 3D MR Data for Volume Quantization. Magnetic
    Resonance Imaging. Vol. 15, No. 6, (1997) 679-688
13. Grau, V., Kikinis, R., Alcaniz, M., Warfield, S.K.: Cortical Gray Matter Segmentation
    Using an Improved Watershed-Transform. Engineering in Medicine and Biology Society,
    2003. Proceedings of the 25th Annual International Conference of the IEEE, Vol. 1, Sept.
    (2003) 618- 621
14. Lapeer, R.J., Tan, A.C., Aldridge, R.V.: A Combined Approach to 3D Medical Image
    Segmentation Using Marker-based Watersheds and Active Contours: The Active
    Watershed Method. In Medical Image Understanding - MIUA 2002, Proceedings, (2002)
    165-168