                    Multi-view 3D Reconstruction with Volumetric Registration in a
                                Freehand Ultrasound Imaging System
                     Honggang Yu (a)*, Marios S. Pattichis (a)**, and M. Beth Goens (b)
                    (a) Image and Video Processing and Communications Lab,
                        Dept. of Electrical and Computer Engineering
                    (b) Dept. of Pediatrics, Division of Cardiology
                    The University of New Mexico, Albuquerque, NM, USA 87131


                                                        ABSTRACT

          In this paper, we describe a new freehand ultrasound imaging system for reconstructing the left ventricle from
2D echocardiography slices. An important contribution of the proposed system is its ability to reconstruct from multiple
standard views. The multi-view reconstruction procedure results in a significant reduction in reconstruction error over
single view reconstructions. The system uses object-based 3D volumetric registration, allowing for arbitrary rigid object
movements in inter-view acquisition. Furthermore, a new segmentation procedure that combines level set methods with
gradient vector flow (GVF) is used for automatically segmenting the 2D ultrasound images, in which low contrast, high
levels of speckle noise, and weak boundaries are common. The new segmentation approach is shown to be robust to these
artifacts and is found to converge to the boundary from a wider range of initial conditions than competing methods.
          The proposed system has been validated on a physical, 3D ultrasound calibration phantom and evaluated on
one actual cardiac echocardiography data set. In the phantom experiment, two calibrated volumetric egg-shaped objects
were scanned from the top and side windows and reconstructed using the new method. The volume error was measured
to be less than 4%. In the real heart data set experiment, qualitative results of 3D surface reconstruction from parasternal
and apical views appear significantly improved over single view reconstructions. The estimated volumes from the 3D
reconstructions were also found to be in agreement with the manual clinical measurements from 2D slices. A further
extension of this work is to compare the quantitative results with more accurate MRI data.

Keywords: 3D freehand ultrasound, 3D reconstruction, multi-view reconstruction, volume registration, segmentation

                                                    1. INTRODUCTION

Freehand 3D ultrasound imaging techniques can be used to reconstruct 3D objects from a set of registered 2D image
slices. The 2D slices can be located at any arbitrary orientation and position throughout space, and can be acquired
using any standard, 2D ultrasound transducer in conjunction with an orientation and position sensor. This strategy
allows large volumes to be imaged and offers the possibility to upgrade a conventional 2D scanner to a 3D scanner, at a
very low cost. Recently, 3D ultrasound imaging systems have been used for diagnosis in clinical echocardiography[1].

Most research on 3D freehand echocardiography has been focused on reconstructing the left ventricle from a single,
standard view. We propose a new, multi-view reconstruction system, which can combine information from different
sets of 2D slices, resulting in a significant reduction in the reconstruction error. This is useful because, for example,
multi-view reconstruction can supply data in shadow regions of single view sweeps by combining information from
different views to improve the quality of the reconstruction.



*
    email: honggang@ece.unm.edu, ** email: pattichis@ece.unm.edu
Ultrasound imaging research on multi-view reconstruction methods has primarily focused on reconstructing the left
ventricle. An example can be found in Ye et al., where a 3D rotational probe and a position sensor are used to track the
parasternal short-axis view and apical long-axis view[2]. Unfortunately, the controller for a rotational probe tends to be
bulky and inconvenient to use, and it is limited to a fixed and regular geometry of acquired 2D images. The method also
assumes good spatial alignment between the different view sweeps, which is rarely the case in practice. Legget et al.
used a 2D freehand scanning protocol to combine parasternal short-axis and apical rotational scans[3]. The 2D images
were registered by manually tracing the left ventricular boundaries with visual feedback. Neither study
addressed the problem of how to automatically register 3D reconstructions between different views.

Misregistration is a big problem in freehand 3D ultrasound that affects the accuracy of reconstruction and volume
measurement. In general, there are three sources that cause misregistration in freehand 3D ultrasound: (i) error in
position and orientation measurements, (ii) target movement during intra-view (deformation by cardiac motion and
respiration) and inter-view sweeps (rigid movement of the target), and (iii) probe pressure applied on the scanning
surface. We note that rigid movement of the target and inaccurate sensor measurements cause the majority of the
registration errors. Rohling et al. first reported an automatic registration of multiple sweeps for gall bladder
reconstruction[4]. They used six slightly different sweeps for acquiring the images. Registration was based on the
correlation of the magnitude of the 3D gradient, and no landmark identification was required. In our multi-view left
ventricle reconstruction system, 3D volumes from different view sweeps may exhibit partial or no overlap because of
patient movement and position measurement errors. To achieve accurate and robust registration performance,
we use an efficient, geometric registration method. First, we initialize the search for the optimal registration parameters
using a 3D Hotelling transform to construct an object-based reference volume that coarsely registers the 3D volumes
from different views. Then, high accuracy registration is computed using a robust, non-linear least squares method. To
avoid any issues with the strong variability in echocardiographic images, such as muscle tissue, blood flow, and the
considerable noise and artifacts in the endocardial cavity, we set pixel values to one around the boundary in the
segmented images from each view, since these boundaries are common features for the volumes from different views.
We have found the proposed registration procedure to be very efficient and robust, converging to the same registration
parameters from a wide variety of initializations.

Left ventricular volume can be estimated by the 3D surface reconstruction generated by the boundaries segmented from
multiple 2D imaging planes. Manual tracing of cardiac boundaries is tedious, time-consuming and subject to inter-
reader variability. Automatic segmentation techniques of echocardiographic images face a number of challenges due to
the characteristics of the images: poor image contrast, high-level speckle noise, weak endocardial boundaries, boundary
gaps, etc. Deformable models have been extensively studied and used for segmentation. There are two main types of
deformable models: parametric and geometric. In parametric models, the evolving curve is explicitly represented in
parametric form, while in geometric models it is implicitly expressed as the level set of a higher-dimensional scalar
function. Geometric deformable models can handle topological changes automatically.

Corsi et al.[5] applied level set techniques to semi-automatically segment real-time 3D echocardiographic data,
reconstruct the shape of the left ventricle, and estimate its volume. They removed the balloon term from the speed
function to prevent the evolving curve from expanding through weak edges and boundary gaps. We have found
that this scheme only works when the initial curves are very close to the actual boundaries; we did not obtain
satisfactory solutions for very weak edges under loose initial conditions. In our approach, we considered integrating the
gradient vector flow (GVF) method[6], which has been shown to be a powerful external force in parametric deformable
models, with the geodesic active contour flow (a typical geometric deformable model). GVF can diffuse the image
gradient information toward the homogeneous background. We have found three research groups that used three
different models for geometric GVF active contours[7-9]. Paragios et al. developed a more general, robust, and fast
geometric GVF active contour model[9]. We apply a variation of this approach to automatically segment 2D ultrasound
image slices.

The remainder of this paper is organized as follows: Section 2 describes the proposed methods in detail. Section 3
presents qualitative and quantitative results for multi-view reconstruction for in vitro and in vivo data. Some initial
results on segmentation are also presented. Concluding remarks and a discussion of future work are given in Section 4.
                                                      2. METHODS

2.1 System and Data Acquisition
Figure 1 shows a block diagram of the 3D freehand ultrasound imaging system. The system has been built at the
Pediatric Cardiology Clinic of the Children’s Hospital Heart Center, at the University of New Mexico. This system has
three components: (i) the ultrasound machine, an ACUSON Sequoia C256 with a 7 MHz vector wide-view array
transducer (7V3C), (ii) a six-degree-of-freedom, electromagnetic position and orientation measurement device (Flock
of Birds, Ascension, Burlington, VT, USA), and (iii) a desktop computer (Intel Pentium 4, 2.6 GHz, 2 GB memory)
equipped with an 8-bit Meteor-II Standard framegrabber (Matrox, Canada).

              [Figure 1. Block diagram of the 3D freehand ultrasound imaging system. The electromagnetic position and
              orientation measurement (EPOM) device (Flock of Birds transmitter and sensor) provides position and
              orientation data, the ultrasound machine (Acuson Sequoia) provides B-mode images and ECG, and both are
              fed to a PC with a framegrabber, RAM, and hard disk.]
The Flock of Birds is an electromagnetic six-degree-of-freedom measurement device that uses pulsed DC magnetic fields
to measure the position and orientation of the sensor relative to the transmitter. The transmitter is approximately cube-
shaped, with dimensions of about 95×95×95 mm, and was fixed on top of the nonferrous patient bed. The receiver has
dimensions of 25.4×25.4×20.3 mm and was attached to the ultrasound transducer by a specially designed housing. The
physical characteristics of the receiver are important, since it should not interfere with probe handling when it is attached
to the ultrasound probe. The scanning range of the sensor is within 90 cm of the transmitter. 2D ultrasound video images
(640×480 pixels) and the corresponding orientation and position sensor data are collected at 30 Hz.

Additionally, an interference detection technique[10], based on estimates of the probability density functions of both the
position and angle measurements, was applied before the initial setup of the 3D system. The goal is to ensure that the
scanning environment is free of ferrous or electromagnetic interference that might corrupt tracking accuracy.
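
For illustration, a minimal sketch of such a check is given below, assuming the sensor is held stationary while repeated
readings are collected; the function name, thresholds, and array layout are hypothetical and are not taken from [10].

    import numpy as np

    def interference_check(samples, max_std_mm=1.0, max_std_deg=0.5):
        """Rough interference check for a stationary tracking sensor.

        samples: (N, 6) array of repeated readings [x, y, z, az, el, roll]
        taken while the sensor is held fixed. If the spread of the empirical
        distribution exceeds the thresholds, the environment is flagged as
        potentially corrupted by ferrous/EM interference. The thresholds
        here are illustrative, not the values used in [10].
        """
        samples = np.asarray(samples, dtype=float)
        pos_std = samples[:, :3].std(axis=0)     # mm
        ang_std = samples[:, 3:].std(axis=0)     # degrees
        ok = bool(np.all(pos_std < max_std_mm) and np.all(ang_std < max_std_deg))
        return ok, pos_std, ang_std

    # Example: 300 readings (about 10 s at 30 Hz) from a fixed sensor position.
    readings = np.random.normal([100, 50, 200, 0, 0, 0],
                                [0.3, 0.3, 0.3, 0.1, 0.1, 0.1], size=(300, 6))
    ok, pos_std, ang_std = interference_check(readings)
    print("environment OK:", ok)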

2.2 3D reconstructions from multiple-view sweeps
The 3D system uses a set of acquired, arbitrarily located 2D image slices to reconstruct a regular 3D data set.
In each image slice, each pixel position is represented by a homogeneous position vector and transformed to its
corresponding 3D coordinates in the reference volume. First, we transform the 2D image coordinates to 3D sensor
coordinates, relative to the origin of the sensor coordinate frame. This transformation is constant throughout the
reconstruction procedure and is determined by calibration. A calibration phantom is built by stretching a cotton cross-wire
under a plastic water tank. Multiple B-scans (40-50) are acquired, imaging the centre of the cross-wire from the top and
through the side wall of the water tank. The point target is located using the brightest intensity in the 2D images while
tilting the transducer to align the target with the center of the ultrasound beam. An iterative non-linear least-squares
algorithm was used to solve the over-determined problem and determine the six parameters of this transformation[11],[12].
The acquired sensor position and orientation measurements define the second transformation, which maps the sensor
coordinates to 3D world coordinates relative to the origin of the transmitter coordinate frame, in which the volume is
reconstructed. Figure 2 depicts the three coordinate frames used in reconstruction.
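
As a concrete sketch of this two-stage mapping (a hypothetical NumPy implementation; the calibration transform, the
per-frame sensor transform, and the pixel scale factors sx and sy are assumed to be already available), a pixel can be
mapped to world coordinates as follows:

    import numpy as np

    def pixel_to_world(u, v, sx, sy, T_image_to_sensor, T_sensor_to_world):
        """Map a pixel (u, v) in a B-scan to 3D world (transmitter) coordinates.

        sx, sy: pixel-to-mm scale factors (assumed known from calibration).
        T_image_to_sensor: fixed 4x4 calibration transform (probe calibration).
        T_sensor_to_world: per-frame 4x4 transform from the position sensor.
        Both transforms and the scale factors are assumptions of this sketch;
        the paper estimates the calibration with an iterative non-linear
        least-squares fit.
        """
        p_image = np.array([sx * u, sy * v, 0.0, 1.0])   # homogeneous image point
        p_world = T_sensor_to_world @ (T_image_to_sensor @ p_image)
        return p_world[:3]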

An averaging strategy is used for combining 3D reconstructions from multiple scanning sweeps. The intensity of each
voxel in the reconstructed volume is estimated by averaging the reconstructed intensities from each view.
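
A minimal sketch of this averaging (compounding) step is shown below, assuming each single-view reconstruction has
already been resampled onto a common Cartesian grid together with a mask of the voxels it fills; names are illustrative.

    import numpy as np

    def compound_views(view_volumes, view_masks):
        """Average multiple single-view reconstructions into one volume.

        view_volumes: list of 3D arrays resampled onto a common Cartesian grid.
        view_masks:   list of boolean arrays marking voxels filled by that view.
        Voxels covered by several views are averaged; voxels covered by none
        stay zero. (A weighted average, e.g. zero weight over shadowed regions,
        would drop in here, as discussed in the conclusions.)
        """
        acc = np.zeros_like(view_volumes[0], dtype=float)
        cnt = np.zeros_like(view_volumes[0], dtype=float)
        for vol, mask in zip(view_volumes, view_masks):
            acc[mask] += vol[mask]
            cnt[mask] += 1.0
        out = np.zeros_like(acc)
        filled = cnt > 0
        out[filled] = acc[filled] / cnt[filled]
        return out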


              [Figure 2. Illustration of spatial coordinates in 3D freehand ultrasound imaging: the image plane (ROI)
              coordinates, the sensor coordinate frame attached to the transducer, and the transmitter (world)
              coordinate frame.]
2.3 3D registration using the Hotelling Transform
Automatic registration is necessary and important in multi-view reconstruction. The Hotelling transform (also called
Principal Component Analysis or Karhunen-Loeve transform) is a statistical method that can be used for developing our
registration method. In our case, the data population is the collection of 3D position coordinates of each segmented
object point, given as a random vector X = [X_1, X_2, X_3]^T. The mean vector gives the center of gravity of the object:

                                              m_X = E{X}.

The covariance matrix is given by

                                              C_X = E{ (X − m_X)(X − m_X)^T }.

Each matrix element c_ii of C_X denotes the variance of X_i, the ith component of the X vectors. Element c_ij is the
covariance between components X_i and X_j. For N vector samples from a random population, the mean vector and
covariance matrix can be estimated using

                              m_X = (1/N) Σ_{i=1}^{N} x_i,      C_X = (1/N) Σ_{i=1}^{N} x_i x_i^T − m_X m_X^T.

Define A to be a matrix whose rows are the eigenvectors of C_X, ordered so that the first row of A is the eigenvector
that corresponds to the largest eigenvalue of C_X and the last row corresponds to the smallest eigenvalue. Then the transform

                                              Y = A (X − m_X)

is called the Hotelling transform. The covariance matrix C_Y is a diagonal matrix whose diagonal elements are the
eigenvalues of C_X, and the components of Y are uncorrelated.

The new position vectors generated by the Hotelling transform are such that the origin is at the centroid of the object,
correcting for object translation, and the transformed object's axes are in the direction of the eigenvectors of C_X,
correcting for arbitrary object rotation. The new axes are aligned with the object's principal axes (eigenvectors), with the
maximal sample variance along the first axis. Thus, 3D reconstructions from different views can be registered to the same
object-based reference volume by using the 3D Hotelling transform, provided that all views image the entire 3D object.
In general, this assumption does not always hold, since we cannot guarantee that the entire 3D object will be imaged.
Thus, the registration result obtained with the Hotelling transform is used to initialize a more accurate registration
procedure.
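
The coarse registration step can be sketched as follows (a hypothetical NumPy implementation; eigenvector sign and
ordering ambiguities are ignored here, which is tolerable only because the result merely initializes the refinement
described next):

    import numpy as np

    def hotelling_frame(points):
        """Principal-axes (Hotelling/PCA) frame of a segmented 3D point cloud.

        points: (N, 3) array of voxel coordinates of segmented object points.
        Returns the centroid m and rotation A whose rows are the eigenvectors
        of the covariance matrix, ordered by decreasing eigenvalue.
        """
        pts = np.asarray(points, dtype=float)
        m = pts.mean(axis=0)
        C = np.cov(pts.T)
        evals, evecs = np.linalg.eigh(C)          # ascending eigenvalues
        order = np.argsort(evals)[::-1]           # largest first
        A = evecs[:, order].T                     # rows = eigenvectors
        return m, A

    def coarse_register(points_view1, points_view2):
        """Coarse rigid alignment of view 2 onto view 1 via the Hotelling transform.

        Both views are mapped into their own principal-axes frames
        (Y = A (X - m)); composing the two transforms gives an initial rigid
        estimate for the subsequent non-linear least-squares refinement.
        """
        m1, A1 = hotelling_frame(points_view1)
        m2, A2 = hotelling_frame(points_view2)
        R = A1.T @ A2                             # rotation taking view-2 axes to view-1 axes
        t = m1 - R @ m2                           # translation aligning the centroids
        return R, t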

To achieve high accuracy registration, we used non-linear least squares (Levenberg-Marquardt) to estimate the
parameters for rigid-body registration between the views. We note that echocardiographic images from the different
views can appear substantially different, and the majority of the pixels in 2D echocardiographic images exhibit non-
constant features, especially inside the endocardial cavity, where there are muscle tissue, blood flow, and considerable
noise and artifacts, including the effect of different gains at different depths. Thus, registration based on the original
intensity images is not likely to succeed. Instead, we apply a threshold to binarize the ECG-gated endocardial boundaries,
which are shared between the different views. For the registration method to converge, the complex surface structures
from different views must exhibit some partial overlap. To help with the overlap computation, we first reconstruct the 3D
view with the largest number of 2D slice planes over a regular Cartesian grid, and then register the 2D slice planes from
the remaining views to it. In our experiments, we have found this procedure to be both efficient and robust.
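
A sketch of this refinement is given below, using SciPy's Levenberg-Marquardt solver; the distance-map residual used
here is an illustrative formulation under stated assumptions, not necessarily the exact cost used in our system.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.ndimage import distance_transform_edt, map_coordinates

    def rotation_matrix(rx, ry, rz):
        """Rotation from Euler angles in radians, Z*Y*X convention (an assumption)."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def refine_registration(ref_boundary_volume, moving_points, p0):
        """Levenberg-Marquardt refinement of a rigid transform.

        ref_boundary_volume: binary 3D volume (ones on the reference boundary).
        moving_points: (N, 3) boundary points (voxel units) from another view.
        p0: initial parameters [rx, ry, rz, tx, ty, tz], e.g. from the
            Hotelling-transform initialization.
        The residual is each transformed point's distance to the nearest
        reference boundary voxel, read off a Euclidean distance map.
        """
        dist_map = distance_transform_edt(ref_boundary_volume == 0)

        def residuals(p):
            R = rotation_matrix(*p[:3])
            moved = moving_points @ R.T + p[3:]
            # Sample the distance map at the (non-integer) moved coordinates.
            return map_coordinates(dist_map, moved.T, order=1, mode='nearest')

        result = least_squares(residuals, p0, method='lm')
        return result.x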

2.4 Segmentation
3D surface reconstruction is generated using the boundaries segmented from multiple 2D imaging planes. Automatic
endocardial boundary segmentation is generally needed for clinical 3D echocardiography. A new segmentation
procedure that combines level set methods with gradient vector flow (GVF) is used for automatically segmenting 2D
ultrasound images.

Corsi et al.[5] applied level set techniques to semi-automatically segment 3D echocardiographic data in 2002:

                                          φ_t = g ε K |∇φ| + β ∇g · ∇φ,

where φ(x, t) is a higher-dimensional scalar function, called the level set function, which represents the moving front as
its zero level set φ(x, t) = 0, K is the mean curvature, and g is a monotonically decreasing function that satisfies
g(0) = 1 and g(x) → 0 as x → ∞. In echocardiographic image segmentation, g is usually used as an edge indicator applied
to a smoothed image I(x):

                                          g = { 1 + |∇(G_σ ∗ I(x))|² / α }^(−1).
The vector field ∇g is an advection term that always points towards image boundaries. The parameters β and ε control
the strength of the advection and limit the regularization. In our experiments, this approach worked well only when the
initial curve or surface was close enough to the actual endocardial boundaries. To overcome this weakness, we
considered integrating gradient vector flow (GVF) into a geometric deformable model. Paragios[9] used fast GVF
geometric active contours that are robust to the initial conditions and allow active contours to undergo topological
changes:

                                          φ_t = g ( ε K |∇φ| − [u, v] · ∇φ ).

Instead of ∇g, [u, v] is the gradient vector flow (GVF), a two-dimensional vector field that diffuses the image gradient
information toward the homogeneous background. It is a bidirectional flow that propagates the curve toward the object
boundary from either side, and it has the advantage that it is not sensitive to the initial curve. In our experiments, we
replaced the g in front of the GVF vector with a coefficient β that controls the strength of the GVF boundary advection
field:

                                          φ_t = g ε K |∇φ| − β [u, v] · ∇φ,

which yielded good results.
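
A minimal sketch of one explicit update step of this modified evolution equation is given below; it assumes the edge
indicator g and the (normalized) GVF components u, v have already been computed, and the parameter values shown are
only illustrative.

    import numpy as np

    def curvature(phi):
        """Mean curvature of the level set function via central differences."""
        py, px = np.gradient(phi)                 # gradients along rows (y) and columns (x)
        pyy, pyx = np.gradient(py)
        pxy, pxx = np.gradient(px)
        denom = (px**2 + py**2)**1.5 + 1e-10
        return (pxx * py**2 - 2.0 * px * py * pxy + pyy * px**2) / denom

    def evolve_step(phi, g, u, v, eps=0.8, beta=6.0, dt=0.1):
        """One explicit update of phi_t = g*eps*K*|grad(phi)| - beta*[u, v].grad(phi).

        g:    edge indicator computed from the smoothed image.
        u, v: precomputed GVF field components (x and y).
        eps, beta, dt are illustrative values in the range quoted in the paper.
        """
        py, px = np.gradient(phi)
        grad_mag = np.sqrt(px**2 + py**2)
        K = curvature(phi)
        advection = u * px + v * py               # [u, v] . grad(phi)
        return phi + dt * (g * eps * K * grad_mag - beta * advection)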

                                          3. EXPERIMENTAL RESULTS

3.1     Multi-view reconstruction with in vitro data
To compare the reconstruction accuracy of multi-view versus single-view reconstructions, we tested the system on both
a 3D ultrasound calibration phantom (Model 055, CIRS, USA) and in vivo data. These measurements have the
disadvantage of being affected by unsteady probe pressure, calibration error, spatial locator measurement error, etc. All
the 2D image slices were manually segmented. The phantom contains two calibrated volumetric egg-shaped test objects,
which can be scanned from two scan surfaces: top and side.

Typical B-scans of a small egg target in the phantom are shown in Figures 3(a) and 3(b). In this experiment, 41 images
are used with a region of interest (ROI) 70×70 pixels in the top window short-axis view and 44 images with ROI
110×60 pixels in the top window long-axis view. The manually segmented object contours are shown in Figures 3(c)
and 3(d). Smoothed (5×5×5 Gaussian filter (σ = 0.65)) single view reconstructions are shown in Figures 3(e) and 3(f),
respectively. We note that the long-axis view reconstruction loses some data in the z-axis direction; this is caused by
missing data samples and inaccurate segmentation. The spatial locations of the object contours from the two views show
that reconstructing the object directly, without registration, would not give the correct result. Using our method, the
final reconstruction recovers the correct shape of the object, as shown in Figure 3(h).

Quantitative measurements of object volume are listed in Table 1. We show volume estimates from single-view, two-
view, and three-view reconstructions with 3D volume registration. We note that the multi-view reconstruction volumes
with 3D registration are closer to the real volume than any of the single-view reconstruction volumes.
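
For reference, the volume estimates are obtained from the binary reconstructions essentially by voxel counting; a short
sketch (assuming isotropic voxels) is:

    import numpy as np

    def volume_cc(binary_volume, voxel_size_mm):
        """Estimate object volume (in cc) from a binary 3D reconstruction.

        binary_volume: boolean/0-1 array, True inside the segmented object.
        voxel_size_mm: edge length of the (assumed isotropic) voxels in mm.
        1 cc = 1000 mm^3.
        """
        n_voxels = np.count_nonzero(binary_volume)
        return n_voxels * voxel_size_mm**3 / 1000.0

    def relative_error(estimate_cc, true_cc):
        """Relative error (%) with respect to the calibrated phantom volume."""
        return 100.0 * (estimate_cc - true_cc) / true_cc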
Figure 3 Multiple-view reconstruction with volume registration using in vitro data. (a),(b) typical long-axis and short-axis B-scan
images. (c),(d) contours in long-axis and short-axis view, respectively. (e),(f) smoothed long-axis and short-axis view reconstruction,
respectively. (g) 3D scanning contours in two views. (h) two-view reconstruction by 3D Hotelling registration.
                                     Table 1. Phantom volume estimates and relative error

    Object        Number of views   Views                      Number of frames                        Volume (cc)   Relative error (%)
    Small Egg     1 view            Top window, short-axis     41                                         7.4218           3.08
    (7.2 cc)      1 view            Top window, long-axis      44                                         5.1246         -28.83
                  1 view            Side window, short-axis    47                                         7.4441           3.39
                  1 view            Side window, long-axis     44                                         4.7494         -34.04
                  2 views           Top window                 short-axis 21, long-axis 22                7.4072           2.88
                  2 views           Side window                short-axis 24, long-axis 22                7.3697           2.36
                  3 views           Top & side windows         top short-axis 21, top long-axis 22,       7.0045          -2.72
                                                               side long-axis 22

    Large Egg     1 view            Top window, short-axis     43                                        72.5433          -0.35
    (72.8 cc)     1 view            Top window, long-axis      47                                        40.9424         -43.76
                  1 view            Side window, short-axis    49                                        76.8417           5.55
                  1 view            Side window, long-axis     50                                        55.7554         -23.41
                  2 views           Top window                 short-axis 22, long-axis 16               70.1324          -3.66
                  2 views           Side window                short-axis 25, long-axis 17               75.3381           3.49
                  3 views           Top & side windows         top short-axis 15, top long-axis 12,      73.0414           0.33
                                                               side long-axis 13


3.2      Multi-view reconstruction with in vivo data
In this experiment, data sets were acquired from two acoustic windows, the parasternal short-axis view and the apical
long-axis view, of a healthy child volunteer. The ultrasound video images were acquired at 30 frames per second.
Each single-view scan was intended to image the whole left ventricle, and each single-view sweep can be completed
within 15 seconds (450 images) with respiratory and ECG gating. 2D images were captured continuously
while freely manipulating the ultrasound probe to scan the left ventricle. The volunteer is not required to remain still
between the different view sweeps, which is difficult for a patient in typical clinical practice; our method
can correct any rigid motion caused by the patient's movement. The reconstruction and volume measurements were
performed at the end-of-diastole and end-of-systole phases separately. The end-of-diastole (ED) phase was defined as the
first frame after the electrocardiographic R wave in which the mitral valve was closed, and it was verified using M-mode
images. The end-of-systole (ES) frame was selected from a sequence of frames in which the endocardial cavity visually
appeared smallest.

Figure 4 shows 3D reconstructions using image slices from the parasternal short-axis and apical long-axis views in the
ED and ES phases. Figures 4(a) and (b) show 2D image slices from each view. The locations of the scanning slices from
each view in 3D space are shown in Figures 4(c) and (d), respectively. Two manually segmented volume contours are
shown in Figure 4(e). Here, 14 images at the ED phase are used with a region of interest (ROI) of 240×210 pixels in the
parasternal short-axis view, and 9 images with an ROI of 240×220 pixels in the apical long-axis view. Note that the two
volumes overlap over only a small region. Using the proposed volume registration method, the two view sweeps can
be aligned, yielding a much better reconstruction of the left ventricle than either single-view
reconstruction. Figures 4(f)-(h) show the 3D reconstructed endocardial surfaces from the parasternal short-axis view, the
apical long-axis view, and the combined views, respectively, at the ED phase. A two-view reconstruction at the ES phase is
shown in Figure 4(i).

Figure 4. Multiple-view reconstruction with volume registration using in vivo data. (a),(b) 2D image slices from the parasternal short-
axis and apical long-axis views, respectively. (c),(d) locations of the scanning planes of each view, respectively. (e) two volume
contours in 3D space. (f)-(h) 3D reconstructed surfaces from the parasternal, apical, and combined views, respectively, at the ED
phase. (i) two-view reconstruction at the ES frame.
The volume of the left ventricle was estimated from the reconstructions at the ED and ES phases separately as
summarized in Table 2. The values in % indicate the relative error with respect to the echocardiography specialist
measurement. The results show that the volume estimated using the combined two acoustic views reconstruction agrees
better with the volume measurement by the echocardiography specialist using a standard 2D clinical technique. A
more objective and accurate comparison would be to compare these results with cardiac magnetic resonance (MR) data.
MRI is a more accurate and reliable reference standard for cardiac parameters and heart function measurement.

                               Table 2. Left ventricular volume estimates and relative error

    Phase   Echocardiography specialist (cc)   Parasternal short-axis    Apical long-axis        Two views
    ED      44.9                               47.6453 cc (6.1%)         31.2359 cc (-30.4%)     46.1567 cc (2.8%)
    ES      18.1                               20.2773 cc (12.0%)        13.8729 cc (-23.4%)     19.7050 cc (8.9%)



3.3 Initial segmentation results
Ultrasound phantom images have been used for testing of the proposed segmentation method. We set the parameters as,
 ε = 0.8, α = 0.1, β = 6. Figure 5 shows the comparison of the GVF geodesic active contour model and the Corsi[5]
level set technique on a simulated image and an ultrasound phantom image. The simulated image was formed by adding
speckle noise as J = J + n × I , to a grayscale image, where n is uniformly distributed random noise with zero mean
and variance equal to 2 . Good results are obtained by first smoothing the image with a Guassian kernel ( σ = 4 ) before
applying the GVF geodesic active contour method. We note that the strength of ∇g is too small to pull the
propagating curve, especially when the expansion term is removed. We got a similar result even when we used the
expansion term.
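
A sketch of this simulation and smoothing step is shown below, assuming the grayscale image is a floating-point array;
the uniform-noise half-width is derived from the stated variance.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simulate_speckle(image, var=2.0, sigma=4.0, rng=None):
        """Simulate a speckled test image and smooth it, as described above.

        image: grayscale image I (float array).
        The speckled image is J = I + n * I, with n zero-mean uniform noise of
        the given variance; the result is then smoothed with a Gaussian kernel
        (sigma = 4) before computing the edge map and GVF field.
        """
        rng = np.random.default_rng() if rng is None else rng
        half_width = np.sqrt(3.0 * var)           # uniform on [-a, a] has variance a^2 / 3
        n = rng.uniform(-half_width, half_width, size=image.shape)
        J = image + n * image
        return gaussian_filter(J, sigma=sigma)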

Figure 6 shows satisfactory segmentation results for two phantom ultrasound images of the small egg-shaped object. All
of the images have high levels of speckle noise, very low contrast, and weak edges. No matter how we initialize the
propagating curve, inside, outside, or even across the object, the proposed GVF geometric active contours still
converge to the actual boundary of the object (Figure 5(c)).

                                     4. CONCLUSIONS AND DISCUSSION

In this paper, a new freehand 3D ultrasound imaging system has been developed. The proposed multi-view
reconstruction system has been shown to give more accurate 3D reconstructions from both in vitro and in vivo data
compared to the commonly used, single view reconstructions. Unlike previous methods, the new approach does not
require that there is good spatial alignment between the different view sweeps, and it does not require manual
registration by the users. Robust reconstruction is achieved through a geometric registration technique, which combines
information from multiple views and improves the reconstruction quality as the number of distinct views increases.

Our experimental results show that the new geometric deformable model, which combines level sets and gradient vector
flow, is more robust to the speckle noise and weak edges that are often found in ultrasound images. This segmentation
approach is also more robust than previous semi-automatic segmentation methods used for echocardiographic images. A
natural extension of this work is to apply this segmentation method to echocardiographic image sequences acquired
from multi-view sweeps. We will then investigate the use of the proposed multi-view reconstruction procedure with
automated segmentation and registration.
Figure 5. (a) Curve evolution through time for GVF GAC (left), normalized GVF field (middle), and final segmentation result for
GVF GAC (right); (b) curve evolution through time for Corsi's method (left), normalized ∇g field (middle), and final segmentation
result for Corsi's method (right); (c) curve evolution and final segmentation result on a phantom image by GVF GAC (left two
images) and by Corsi's method (right two images).





Figure 6. Boundary extraction using the GVF GAC model for ultrasound phantom images. (a) small egg long-axis image; (b) small
egg short-axis image. From left to right, the columns show the original image, the curve propagation, and the final result.
The proposed method needs to be further validated on additional data sets. In our experiment, we found that our data set
had low-quality images and did not follow a good scanning protocol, which would have included breath-holding, optimal
view selection, and imaging of the entire left ventricle. Thus, we would expect better results on more carefully collected
data sets.

Currently, we simply average all the single-view reconstructions to produce the final 3D reconstruction and
volume estimate. An advantage of multi-view reconstruction is that it allows us to consider weighted
averaging methods, where shadowing and other artifacts can be removed from single-view reconstructions by simply
assigning zero weights over shadow and artifact regions. We also note that this approach allows for arbitrary rigid motions
between views, which can correct patient movements during inter-view sweeps. The proposed methods could be
extended to provide 3D cardiac reconstructions throughout the cardiac cycle, using cardiac and respiratory gating.

                                                   REFERENCES

1.  A. R. Snider, G. A. Serwer, S. B. Ritter, and R. A. Gersony, Echocardiography in Pediatric Heart Disease, 2nd ed.,
    St. Louis: Mosby, 1997.
2.  X. Ye, J. A. Noble, and D. Atkinson, "3-D freehand echocardiography for automatic left ventricle reconstruction and
    analysis based on multiple acoustic windows," IEEE Transactions on Medical Imaging, vol. 21, pp. 1051-1058, 2002.
3.  M. E. Legget, D. F. Leotta, E. L. Bolson, J. A. McDonald, R. W. Martin, X. N. Li, C. M. Otto, and F. H. Sheehan,
    "System for quantitative three-dimensional echocardiography of the left ventricle based on a magnetic-field
    position and orientation sensing system," IEEE Transactions on Biomedical Engineering, vol. 45, pp. 494-504, 1998.
4.  R. N. Rohling, A. H. Gee, and L. Berman, "Automatic registration of 3-D ultrasound images," Ultrasound in
    Medicine & Biology, vol. 24, pp. 841-854, 1998.
5.  C. Corsi, G. Saracino, A. Sarti, and C. Lamberti, "Left ventricular volume estimation for real-time three-
    dimensional echocardiography," IEEE Transactions on Medical Imaging, vol. 21, pp. 1202-1208, 2002.
6.  C. Xu and J. L. Prince, "Snakes, shapes, and gradient vector flow," IEEE Transactions on Image Processing, vol. 7,
    pp. 359-369, 1998.
7.  C. Xu, A. Yezzi, and J. L. Prince, "On the relationship between parametric and geometric active contours," Proc. of
    the 34th Asilomar Conference on Signals, Systems, and Computers, pp. 483-489, 2000.
8.  X. Hang, N. L. Greenberg, and J. D. Thomas, "A geometric deformable model for echocardiographic image
    segmentation," Computers in Cardiology, IEEE Computer Society, Los Alamitos, CA, pp. 77-80, 2002.
9.  N. Paragios, O. Mellina-Gottardo, and V. Ramesh, "Gradient vector flow fast geometric active contours," IEEE
    Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 402-407, 2004.
10. H. Yu, M. S. Pattichis, and M. B. Goens, "A robust multi-view freehand three-dimensional ultrasound imaging
    system using volumetric registration," Proceedings of the IEEE International Conference on Systems, Man and
    Cybernetics, October 2005.
11. P. R. Detmer, G. Bashein, T. Hodges, K. W. Beach, E. P. Filer, D. H. Burns, and D. E. Strandness Jr., "3D ultrasonic
    image feature localization based on magnetic scanhead tracking: in vitro calibration and validation," Ultrasound in
    Medicine & Biology, vol. 20, pp. 923-936, 1994.
12. D. F. Leotta, P. R. Detmer, and R. W. Martin, "Performance of a miniature magnetic position sensor for three-
    dimensional ultrasound imaging," Ultrasound in Medicine & Biology, vol. 23, pp. 597-609, 1997.