Extracting Animated Meshes with Adaptive Motion Estimation

Nizam Anuar and Igor Guskov

University of Michigan, Ann Arbor, USA
Email: {nanuar,guskov}@eecs.umich.edu



Abstract

We present an approach for extracting coherently sampled animated meshes from input sequences of incoherently sampled meshes representing a continuously evolving shape. Our approach is based on a multiscale adaptive motion estimation procedure followed by propagation of a template mesh through time. Adaptive signed distance volumes are used as the principal shape representation, and a Bayesian optical flow algorithm is adapted to the surface setting with a modification that diminishes the interference between unrelated surface regions. Additionally, a parametric smoothing step is employed to improve the sampling coherence of the model. The result of the proposed procedure is a single animated mesh. We apply our approach to human motion data.

1   Introduction

Animated meshes are widely employed in character animation, visualization, and computational simulation applications. An animated mesh is a sequence of meshes with the same connectivity whose vertex positions change in time. This is a convenient representation for dynamically changing shapes, with many processing, rendering, and compression tasks easily handled. For instance, modern compression methods can take advantage of the temporal coherence present in animated mesh data, resulting in a very compact shape representation [IR03][KG]. Unfortunately, several recently introduced state-of-the-art dynamic shape acquisition methods [SMP03][ZCS03] do not produce their result in the animated mesh form; rather, a sequence of meshes of varying connectivity is produced. Volumetric morphing and isosurface extraction are also examples of applications that produce evolving surfaces that are not meshed consistently in time, and have changing connectivity and sampling from frame to frame. Reconstruction of animated mesh sequences from such data is an important problem we aim to address in this paper.

Figure 1: Comparison of trajectories for three methods of particle propagation for the jump sequence: fitting, temporal prediction followed by fitting, and our estimated flow followed by fitting. The left two propagations fail; the right one succeeds in preserving the surface sampling.

We restrict our effort to the simpler scenario of unchanging topology both in the input data and in the desired output. A single mesh template of the same connectivity is propagated through time and is fitted to the input surface data in every frame of the sequence. The main challenge is to establish correspondence between consecutive shapes. Computing such a correspondence relation constitutes the main contribution of this paper. We assume that the input surfaces are closed and without boundaries; the first step of the algorithm converts each input shape into an adaptive volumetric representation with a signed distance transform. Once a volumetric representation is obtained, we run a Bayesian motion estimation procedure similar to a differential optical flow approach from image processing. The resulting vector field defined on the surface is used for the initial propagation of the mesh template. The further fitting and parametric smoothing steps result in a temporally coherent animated mesh sequence.

VMV 2004                                                           Stanford, USA, November 16–18, 2004
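To give a concrete feel for the differential optical-flow machinery that the method builds on, the following is a minimal, self-contained sketch of a regularized least-squares flow estimate on a volumetric field. It is a generic illustration, not the paper's adaptive BMSDOF implementation: the single global solve over the whole grid, the Gaussian test field, and the function name `estimate_flow` are all assumptions made for this example.

```python
import numpy as np

# Generic differential optical flow in 3D (illustration only, not the
# authors' adaptive BMSDOF): solve the brightness constancy constraint
#     g_t + (v . grad g) = 0
# in the regularized least-squares sense over all voxels.

def estimate_flow(g0, g1, sigma=1e-6):
    """Single global flow estimate from two volumetric frames."""
    g_mid = 0.5 * (g0 + g1)
    gx, gy, gz = np.gradient(g_mid)            # central differences per axis
    gt = g1 - g0                               # temporal derivative
    grads = np.stack([a.ravel() for a in (gx, gy, gz)], axis=1)
    M = sigma * np.eye(3) + grads.T @ grads    # regularized normal matrix
    b = grads.T @ gt.ravel()
    return -np.linalg.solve(M, b)

# Synthetic check: a smooth 3D bump translated by a known sub-voxel
# velocity; the estimator should approximately recover that velocity.
n = 17
ax = np.arange(n) - n // 2
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")

def bump(dx, dy, dz, s=3.0):
    return np.exp(-((X - dx) ** 2 + (Y - dy) ** 2 + (Z - dz) ** 2)
                  / (2.0 * s * s))

v_true = np.array([0.3, 0.2, -0.1])            # voxels per frame
g0 = bump(0.0, 0.0, 0.0)
g1 = bump(*v_true)
v_est = estimate_flow(g0, g1)
```

For small sub-voxel displacements of a smooth field the recovered velocity matches the true one to within a few percent; large displacements require the coarse-to-fine treatment discussed later in the paper.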
   Our main contribution is to show that for continuously evolving surface data the propagation by an adaptively estimated flow field can help to establish temporally coherent meshing of the dynamic surface.

1.1   Related work

A similar problem of creating a single coherent geometry representation for a number of shapes has recently been addressed by computer graphics researchers. For instance, when several range scans of the same individual are given, Allen et al. [ACP02] solve the problem of fitting a subdivision surface template to create a posable 3D model. That work was extended to handle the parameterization of whole-body data for multiple humans [ACP03]. In both cases, a sparse set of 3D markers is used for model registration. Related work on finding consistent parameterizations of dissimilar shapes by Praun et al. [PSS01] has also used user-defined feature markers in a multiresolution remeshing procedure.

   Neither of the studies mentioned above explicitly considers a continuously evolving surface. In this paper, we concentrate our effort on extracting a single animated mesh from a continuously evolving shape sequence. We do not assume the availability of markers or surface textures in the original data, with the hope that strong shape coherence will give enough information for extracting surface motion. We split our problem into the motion estimation and the mesh propagation parts. The motion estimation problem has long been studied in the computer vision community [BB81]. In particular, several efficient optical flow algorithms were introduced [SS96][Sim93]. We extend the multiscale optical flow algorithm of Simoncelli [Sim93] to work on adaptive signed distance volume representations and modify it to remove unneeded interactions between unrelated surface patches.

   Our motion estimation and surface fitting procedures require a volumetric shape representation, and an efficient conversion of input polygonal data is necessary for the overall good performance of the algorithm. We employ an adaptive shape representation similar to the ADF representation of Frisken et al. [FPRJ00]; due to some differences, and to avoid confusion, we generically call it ASDV (adaptive signed distance volume). We convert our input meshes into a sequence of ASDV datasets and use them for both the motion estimation and surface fitting steps.

   The estimation of the surface flow has also been used as a part of non-rigid 3D scene reconstruction in the shape and motion carving approach of Vedula et al. [VBSK00]. In their work, the flow field is extracted based on the visibility and color consistency of voxels, and the shape information is extracted simultaneously. In our approach, no color information is currently used, and the shapes are known beforehand.

   Our approach can be considered as a remeshing procedure for time-dependent shape data. Briceño et al. [BSM∗03] have implemented dynamic remeshing for the purpose of mesh compression; however, in their work all the input meshes share the same connectivity. In contrast, our input meshes do not have to be of the same connectivity.

   Our work is also related to volumetric morphing, with one distinction: we assume that our meshes are instances of a continuously evolving shape, while the main difficulty in morphing is the ability to handle drastically different shapes [WB98][BP98].

2   Overview

The input data for our algorithm is a sequence of triangular meshes M(t) that represent some non-rigidly moving closed surface. Meshes M(t) are not required to have the same connectivity; for instance, they could come from a marching cubes extraction, run independently at every time frame.

   Our goal is to create an animated mesh sequence A(t) that approximates M(t) and such that all the meshes A(t) have the same connectivity. Our approach will be to start with a template mesh A(0) that approximates the initial mesh M(0), and propagate it through time frame by frame. We observe that, without a global motion estimation step, simple heuristics (like fitting a previous frame to the next, or using a constant velocity assumption to predict the location of the next frame) fail miserably on non-trivial motions (see Figure 1).

   Therefore, we employ a motion estimation procedure that operates on the sequence of adaptive signed distance volumes g(t) approximating the input mesh data. An adaptation of the differential optical flow procedure is applied to consecutive frames of adaptive signed distance volumes to obtain the surface flow at every frame of the sequence; the
extracted flow is represented on the same adaptive octree structure as the signed distance, and represents the correspondence relation between consecutive surfaces in the sequence. Any motion estimation algorithm only gives an approximate mapping between two surfaces; however, we find that a combination of propagation via the extracted flow and surface fitting produces coherently sampled animated meshes.

   Our combined propagation procedure is performed as follows:
  1. Convert input meshes into adaptive signed distance volumes g(t).
  2. Extract the surface flow v(t) for every frame of the sequence.
  3. Propagate every vertex of the template mesh A(t) with the computed flow from its positions at time t to the new positions at time t + 1.
  4. Apply parametric smoothing and fitting until convergence, similar to [WDSB00] (also see Section 4).
   The following two sections describe the details of our surface motion estimation algorithm and our mesh propagation procedure.

3   Motion estimation

The purpose of the motion estimation step is to produce a vector field v(t) that approximates the motion of the surface at time t; that is, v(t) should be such that propagating a particle positioned on the surface M(t) at time t along the flow vectors v(t) should place it on the surface M(t + 1) at time t + 1. For more precision, we employ a multi-step Runge-Kutta-like procedure that uses both v(t) and v(t + 1) when propagating between frames t and t + 1. Therefore, we need the vector field v(t) to be defined not only on the surface but also in some neighborhood around it. An adaptive representation of v(t) that has most precision on the surface M(t) and gradually coarsens away from the surface is then appropriate.

   It is easy to see that the problem of finding v(t) is not well defined without some regularization, as there are many vector fields that propagate the surface of one frame onto the surface of the next one. One way to regularize the problem is to require the estimated flow to be smooth, and many practical algorithms for ensuring such smoothness are built in a coarse-to-fine fashion, so that the overall flow is determined on the coarse scale with ever smaller corrections applied on finer levels. In our algorithm we adopted a version of the Bayesian Multi-Scale Differential Optical Flow (BMSDOF) algorithm of Simoncelli [Sim93]. The following section gives a synopsis of BMSDOF, after which we describe our implementation of it in the adaptive volumetric case.

3.1   Basics of the BMSDOF algorithm

The BMSDOF algorithm [Sim93] belongs to the family of differential motion estimation algorithms that rely on solving the brightness constancy constraint equation:

    g_t + (v \cdot \nabla) g = 0,                                  (1)

that is, finding v(x, t) for a given g(x, t). In our case, g(x, t) represents a volumetric signed distance function whose zero level surface approximates the shape of M(t).

   Typically, one solves equation (1) in a local neighborhood of a point q by adding a regularization term on the magnitude of the estimated v and obtaining a least-squares solution to the system of constraints posed by (1) on a discrete stencil around q, with the resulting estimator expressed as:

    \tilde{v}(q, t) = -M_{reg}^{-1} \sum_{k \in K(q)} w_k b(k),    (2)

    with  M_{reg} = \sigma I + \sum_{k \in K(q)} w_k M(k),

where σ is a small regularization coefficient, the filter w_k represents Gaussian averaging, and the summation is over the stencil K(q); an example of K(q) is pictured as blue dots in the figure on the left, q being the yellow dot in the center. The matrices M(x) and vectors b(x) are defined as follows:

    M \stackrel{\mathrm{def}}{=}
    \begin{pmatrix} g_x^2 & g_x g_y & g_x g_z \\
                    g_y g_x & g_y^2 & g_y g_z \\
                    g_z g_x & g_z g_y & g_z^2 \end{pmatrix},
    \quad
    b \stackrel{\mathrm{def}}{=}
    \begin{pmatrix} g_x g_t \\ g_y g_t \\ g_z g_t \end{pmatrix}.

Given the values of g on a regular grid, we can approximate M(x) and b(x) at the centers of grid cells using central differences.

   The simple local algorithm described above would typically fail to discover motions on a scale larger than the grid step size. Typically, a coarse-to-fine approach would give better results, by first
estimating the flow on the coarse level and then using this solution as a prior for the finer level. BMSDOF handles such prior information by pre-warping. This can be described as follows: suppose a prior flow estimate u is known, and one would like to improve that estimate given the actual data g. We express the updated flow v as the sum of u and the unknown residual w, so that v(x, t) \stackrel{\mathrm{def}}{=} u(x, t) + w(x, t). It can be shown that the fine scale residual w satisfies the following constraint:

    \frac{\partial}{\partial t} W[u, g, t](x, s)
    + (w \cdot \nabla) W[u, g, t](x, s) = 0,                       (3)

where W[u, g, t](x, s) represents the field g warped from time t to time s using the flow u:

    W[u, g, t](x, s) \stackrel{\mathrm{def}}{=} g(x + (t - s)u, t). (4)

Equation (3) has the same form as (1), except that a pre-warped version of the distance field g is used. Hence, an equation similar to (2) can be used to solve for the residual correction w.

   Applying this warp-update procedure in a coarse-to-fine fashion results in the BMSDOF for image sequences described in [Sim93]. The following section describes its realization in the adaptive estimation of surface flow.

3.2   Adaptive motion estimation algorithm

Our adaptive motion estimation algorithm has the following three steps:
  1. Compute the ASDV for three consecutive geometry frames M(t − 1), M(t), and M(t + 1). The result of this procedure is the adaptive volumetric approximation of the signed distance function for these three frames: g(·, t − 1), g(·, t), and g(·, t + 1). For details, see Section 3.3.
  2. Run an approximate medial axis detection procedure and tag the corresponding cells of the ASDV of the frame t. For details, see Section 3.4.
  3. Run the adaptive version of the flow extraction procedure starting from some level L0. (All the computations are performed within the ASDV structure of the frame t.) This requires the following five substeps:
      (a) Interpolate the distance fields from frames t − 1 and t + 1 and assign them to the appropriate “previous” and “next” fields of the frame t ASDV data structure. Thus, the values of three consecutive frames g(·, t − 1), g(·, t), g(·, t + 1) are sampled at the same spatial locations and stored within a single adaptive octree.
      (b) Compute the flow v^{L0}(x, t) at all the vertices of the cells at level L0 and below. See Section 3.4.
      (c) Linearly interpolate the flow v^{l−1}(x, t) onto the vertices of the next finer level l. This produces the prior flow u^l(x, t).
      (d) Warp the distance fields from frames t − 1 and t + 1 using (4) and assign them to the appropriate “previous” and “next” fields of the frame t ASDV data structure. Use the prior flow u^l(x, t) for warping.
      (e) Repeat the estimation step for the residual flow w^l(x, t) at all the cells of level l and below using (3) (see also Section 3.4). Update the current flow with the result v^l = u^l + w^l. If not done, go to step (c).

3.3   Adaptive Signed Distance Volume Construction

We use an adaptive signed distance volume representation to store our distance and flow field data. Given a mesh for frame t, we extract an adaptive octree whose leaves at the finest level contain the surface, and whose structure satisfies the 26-restriction criterion ensuring that none of the 26 possible neighbors of a cell is at a level that differs by more than one from the cell's own level. This restriction is important for a more gradual transition from the refined surface region to the far field. Our adaptive structure will typically have more refinement than the ADF from [FPRJ00] because we need a fine grid near all the surface locations for accurate approximation of the distance field derivatives that are used in the flow estimation procedure.

   Our ASDV extraction proceeds as follows: first, a scan conversion procedure is performed at a predefined finest level of the hierarchy; each intersected cell is inserted into the octree, causing it to refine appropriately. Then the fine-to-coarse restriction enforcing procedure is performed, resulting in the final octree structure of the ASDV (see the pictures on the left). The inside/outside testing is performed
that assigns an inside/outside flag to all the vertices of the ASDV; additionally, the vertices of the leaf cells intersected by the surface are assigned signed distance values. At this point a fast marching method [Set99] is applied to assign signed distance values to all the remaining vertices of the ASDV structure.

3.4   Restricted Flow Estimation

The flow field at time t is used to propagate the mesh vertices that lie on the surface; hence we need the full precision flow to be computed near the surface, while away from the surface we can use coarser, less accurate approximations. Therefore, we use the structure of the surface ASDV at time t to compute and store the flow at time t.

   The BMSDOF algorithm can be easily adapted to work on the adaptive volume representation. The only difference is that the local evaluation of the distance field may require traversing the octrees in order to compute a needed distance value. Within a single level, the procedure starts by computing the gradients of all the cells in the four-by-four cell stencil around a given vertex. Some of the cells covered by the stencil may not be subdivided. In such cases the value of the gradient from the coarser level is used. Because of the restriction criterion used to build our ASDV structure, we can only have a single level difference between the cells of the stencil, which is beneficial for the quality of the derivative estimation.

   In order to diminish the effects two close but differently moving surfaces can have on each other, we compute an approximate medial axis by looking at the gradient of the signed distance function within each cell (for an overview of approximate medial axis algorithms see [MFV98]). In the cells away from the medial axis the absolute value of the gradient vector should be close to one (up to approximation error); thus we tag all the cells with the absolute value of the gradient below some predefined threshold as intersecting the medial axis. In our experiments we set this threshold to 0.9; the resulting leaf medial axis cells of the octree are shown in the figure on the left (only the leaves at level seven and up are shown). One can clearly see the three main components of the approximate medial axis separating the two legs from each other and separating the arms from the body. We only use the outside portion of the medial axis, so that the motion coherence is preserved inside the body, while the unneeded interference through the “thin air” is diminished.

   We use this information in our flow estimation routine by excluding the gradient values in cells that are “behind the medial axis” from the averaging in equation (2). For an illustration, consider the two-dimensional case for the four-by-four averaging mask. The following simple criteria are used to determine whether a cell's gradient value is to be included in the averaging: a red cell is included only if it is not on the medial axis. A blue cell is included only if it is not on the medial axis and the red cell separating it from the center of the stencil is not on the medial axis. In the special case when all the red cells are on the medial axis, we take the averaging only on the central two-by-two stencil. Once the inclusion is determined, we renormalize the mask coefficients to sum up to one.

4   Mesh propagation

Our flow-based surface propagation extracts the shape evolution; however, in flat surface regions there exists an inherent uncertainty in estimating the tangential component of the surface motion. In order to improve the parametric coherence of the extracted mesh sequence we employ a variant of the mesh smoothing technique of [DMSB99], modified so that only the tangential smoothing component is used, as described in [WDSB00]. We compute the coefficients of the smoothing operator based on the geometry of the mesh in the first frame. The application of the parametric smoothing improves the mesh quality in flat regions of the propagated mesh and tries to preserve the sampling pattern of the mesh from the first frame.

   Special care should be taken when two surfaces are close to each other. In order to eliminate spurious jumping of vertices onto the wrong side, we compare the normal vectors computed from the
mesh with the normal derived from the ASDV. If those vectors point in opposite directions, we reposition the offending vertex to the point predicted by its immediate mesh neighbors. This solves the problem for the underarm regions of the human shape data.

5   Results and discussion

We have implemented a library of adaptive motion estimation for evolving meshes. The typical timings for a 2GHz Pentium PC processing the human motion data (mesh sizes around 16K vertices per frame) are: level 9 ASDV construction for a single frame, approximately 20 seconds; the flow computation on all the levels, approximately 2 seconds. The surface fitting step typically takes less than a second. We computed the flow starting with level six. The typical surface reconstruction L2 errors as reported by Metro [CRS98] were about 3 × 10^{−4} relative to the diagonal of the bounding box of the mesh (that is, the reported L2 error is divided by the length of the bounding box diagonal of the first frame).

Figure 2: Shaded version of the propagated template for the human sequence.

   We applied our animated mesh extraction algorithm to several datasets, ranging from simple test cases like the jumping cactus sequence to the captured deforming human skin data from [SMP03]. We analyzed the performance of the algorithms using several evaluation criteria, including visual quality, mesh distortion, and error with respect to the original data. Our experiments take the mesh from the first frame and propagate it through long sequences of motions given in the original data (the human motion sequence we processed contained 685 frames). A typical result of the resulting animated mesh is given in the supplementary archive. Figure 5 shows the mesh after 200 time steps of propagation. Note that no texture or marker information is made available to the algorithm; thus one cannot expect to completely preserve the sampling patterns of the original mesh template.

   We used two versions of the human template: the original first frame of the human skin data with 16K vertices and a refined version with 60K vertices. For the coarser template, we had problems with the smoothing of the extremities of the shape (hands and feet). We improved the performance of the algorithm in those regions by selecting them in the first frame and decreasing the amount of parametric smoothing there by a factor of ten. For the refined version of the template, there were no problems on the extremities. Examples of both propagated templates are provided with the supplementary archive.

   We have checked the performance of our animated mesh extraction algorithm with the test example of a jumping cactus. Figure 3 shows the trajectories of the original animated model (which had the same connectivity) and the model extracted from the corresponding signed distance data using our motion estimation algorithm. We see that while each individual trajectory may not be close, the overall character of motion and the spatial distribution of particles are similar. Figure 4 shows that the relative L2 error stays within 0.04%.

Figure 3: Comparison of the trajectories of the vertices of the original animated cactus (left) and the cactus extracted with our algorithm (right). Both models are rendered at the end of the jump sequence.
                                                          666
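Metro [CRS98] measures a surface-to-surface distance; since the extracted cactus shares the connectivity of the original animated model, a simplified vertex-wise version of the bounding-box-normalized relative L2 error can be sketched as follows. This is an illustrative NumPy sketch, not the paper's evaluation code; the function names and the toy cube data are made up for the example.

```python
import numpy as np

def bbox_diagonal(verts):
    """Length of the axis-aligned bounding-box diagonal of an (n, 3) vertex array."""
    return float(np.linalg.norm(verts.max(axis=0) - verts.min(axis=0)))

def relative_l2_error(ref_verts, test_verts):
    """Vertex-wise RMS distance between two same-connectivity meshes,
    normalized by the bounding-box diagonal of the reference mesh."""
    d = np.linalg.norm(ref_verts - test_verts, axis=1)  # per-vertex distances
    rms = np.sqrt(np.mean(d ** 2))                      # L2 (RMS) error
    return rms / bbox_diagonal(ref_verts)

# Toy example: the eight corners of the unit cube, with the test mesh
# offset by 0.01 along x.  Each vertex distance is 0.01, the bounding-box
# diagonal is sqrt(3), so the relative error is 0.01/sqrt(3).
ref = np.array([[x, y, z] for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)])
test = ref + np.array([0.01, 0.0, 0.0])
err = relative_l2_error(ref, test)  # = 0.01/sqrt(3) ≈ 0.0058
```

Normalizing by the bounding-box diagonal makes the reported error scale-invariant, so the human and cactus sequences can be compared on the same footing.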
   Figure 5 shows four consecutive frames of the human data to give an idea of the amount of motion between frames that the algorithm is able to handle satisfactorily. Figure 4 shows the plot of the relative L2 error (w.r.t. the bounding box diagonal of the first frame) for the human and cactus sequences. The supplementary video shows the animation of the extracted mesh textured in the first frame and animated using the publicly available rendering package Jot [Jot]. Some frames of the animation are shown in Figure 6. While there is a certain amount of texture sliding within the surface, the overall result is reasonably stable. Note that the area near the neck is noisy due to the chaotic motion of the neck region in the original data. A shaded version of the mesh without texture is also provided with the supplementary material, illustrating good approximation of the original surface data; several frames of that sequence are shown in Figure 2.

Figure 4: Left: plot of the L2 error for the first 500 frames of the human sequence. We observe that most of the error comes from the neck area. Right: L2 error for the cactus jump. The error is measured with the Windows version of the Metro package [CRS98]; the reported relative L2 error is normalized with respect to the length of the bounding box diagonal of the first frame.

Limitations of the algorithm   Our approach is currently limited to closed surfaces without boundaries, which is a reasonable assumption when the processed surface data corresponds to the "skin" of a real deforming object. For such closed surface data it is natural to use signed distance volumes. An easy extension of the algorithm can employ non-signed (positive) distance volumes to handle open surfaces with smooth and temporally coherent boundaries. We have experimented with simple animated meshes with boundaries, and used additional distance fields corresponding to the boundary curves in order to guide the flow estimation procedure so that it prefers to map boundaries onto boundaries in consecutive frames. From the practical perspective, however, it is much more important to handle surfaces with noisy boundaries, such as the ones coming from real-time structured light acquisition [ZCS03]. Noisy boundaries should not be mapped onto each other from frame to frame; in fact, their effect on the flow should be diminished as much as possible. This would only be possible if additional color information is provided, or if the processed surfaces have significant geometric detail. For the acquired data, the first option seems quite feasible.


6   Conclusions and future work

We introduced an animated mesh extraction algorithm that can handle long sequences of evolving closed shapes and produce a sequence of meshes of the same connectivity approximating the input surface motion data. The resulting meshes can serve as approximations to the original shape motion, with the advantage of having the same connectivity and temporal coherence. Such meshes can be represented very compactly with recent animated mesh compression approaches.
   There is much work remaining for handling shapes of changing topology, which would lead to dynamically changing topology of the mesh being fitted. Another important extension would involve the handling of meshes with boundaries (often produced by shape acquisition algorithms).
   Currently we propagate a single-resolution version of the template through the whole sequence of frames. This assumes that all the significant features of the shape are present in its first frame. While this is true for the human motion data we considered, different data may require adaptation of the mesh template to the geometry of all the shapes appearing in a sequence. In particular, animated mesh sequence simplification is a very relevant problem to be addressed in the future.


Acknowledgments   This work was supported in part by NSF (CCR-0133554). The authors would like to thank Lee Markosian for useful comments and for the original cactus data. The original human jump data was provided by Peter Sand and Jovan Popovic.


References

[ACP02]   Allen B., Curless B., Popović Z.: Articulated body deformation from range scan data. In Proceedings of SIGGRAPH 2002 (2002).
[ACP03]   Allen B., Curless B., Popović Z.: The space of human body shapes: reconstruction and parameterization from range scans. In Proceedings of SIGGRAPH (2003), pp. 587–594.
[BB81]    Horn B. K. P., Schunck B. G.: Determining optical flow. Artificial Intelligence 17 (1981), 185–203.
[BP98]    Bao H., Peng Q.: Interactive 3D morphing. Computer Graphics Forum (Eurographics 98 Proceedings) 17, 3 (1998), C23–C30.
[BSM∗03]  Briceño H. M., Sander P. V., McMillan L., Gortler S., Hoppe H.: Geometry videos: a new representation for 3D animations. In Proc. of the 2003 ACM SIGGRAPH/EG Symp. on Comp. Animation (2003), pp. 136–146.
[CRS98]   Cignoni P., Rocchini C., Scopigno R.: Metro: measuring error on simplified surfaces. Computer Graphics Forum 17, 2 (1998), 167–174.
[DMSB99]  Desbrun M., Meyer M., Schröder P., Barr A. H.: Implicit fairing of irregular meshes using diffusion and curvature flow. In Proceedings of SIGGRAPH 1999 (1999), pp. 317–324.
[FPRJ00]  Frisken S. F., Perry R. N., Rockwood A. P., Jones T. R.: Adaptively sampled distance fields: a general representation of shape for computer graphics. In Proceedings of SIGGRAPH 2000 (2000).
[IR03]    Ibarria L., Rossignac J.: Dynapack: space-time compression of the 3D animations of triangle meshes with fixed connectivity. In Proc. of the 2003 ACM SIGGRAPH/EG Symp. on Comp. Animation (2003), pp. 126–135.
[Jot]     Jot home page: http://jot.cs.princeton.edu.
[KG]      Karni Z., Gotsman C.: Compression of soft-body animation sequences. To appear in Computers and Graphics, 2003.
[MFV98]   Malandain G., Fernandez-Vidal S.: Euclidean skeletons. Image and Vision Computing 16, 5 (1998), 317–327.
[PSS01]   Praun E., Sweldens W., Schröder P.: Consistent mesh parameterizations. In Proceedings of SIGGRAPH 2001 (2001).
[Set99]   Sethian J. A.: Level Set Methods and Fast Marching Methods. Cambridge University Press, 1999.
[Sim93]   Simoncelli E.: Bayesian multi-scale differential optical flow. In Handbook of Computer Vision and Applications (1993), pp. 128–129.
[SMP03]   Sand P., McMillan L., Popović J.: Continuous capture of skin deformation. ACM Transactions on Graphics 22, 3 (2003), 578–586.
[SS96]    Szeliski R., Shum H.-Y.: Motion estimation with quadtree splines. IEEE Transactions on Pattern Analysis and Machine Intelligence 18, 12 (1996), 1199–1210.
[VBSK00]  Vedula S., Baker S., Seitz S., Kanade T.: Shape and motion carving in 6D. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00) (2000).
[WB98]    Whitaker R. T., Breen D. E.: Level-set models for the deformation of solid objects. In Implicit Surfaces 98 Proceedings (1998), Eurographics/ACM Workshop, pp. 19–35.
[WDSB00]  Wood Z., Desbrun M., Schröder P., Breen D.: Semi-regular mesh extraction from volumes. In Proceedings of Visualization 2000 (2000), pp. 275–282.
[ZCS03]   Zhang L., Curless B., Seitz S. M.: Spacetime stereo: shape recovery for dynamic scenes. In Proc. Computer Vision and Pattern Recognition (2003).
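The limitations discussion distinguishes signed distance volumes (natural for closed surfaces, where inside and outside are well defined) from non-signed (positive) distance volumes (usable for open surfaces). The sketch below illustrates the distinction on a regular grid sampling an analytic sphere; it does not reproduce the paper's adaptive signed distance volumes, and the grid resolution, sphere test shape, and function names are assumptions made for the example.

```python
import numpy as np

def sphere_sdf_grid(n=32, radius=0.3):
    """Signed distance to a sphere of the given radius centred in the unit
    cube, sampled on an n^3 regular grid (negative inside, positive outside)."""
    axis = np.linspace(0.0, 1.0, n)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2) - radius

def trilinear(grid, p):
    """Trilinearly interpolate a scalar grid at a point p in [0, 1]^3."""
    n = grid.shape[0]
    q = np.clip(np.asarray(p, dtype=float) * (n - 1), 0.0, n - 1 - 1e-9)
    i0 = q.astype(int)          # lower grid corner
    f = q - i0                  # fractional position within the cell
    i1 = i0 + 1                 # upper grid corner
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - f[0]) if dx == 0 else f[0]) * \
                    ((1 - f[1]) if dy == 0 else f[1]) * \
                    ((1 - f[2]) if dz == 0 else f[2])
                value += w * grid[i1[0] if dx else i0[0],
                                  i1[1] if dy else i0[1],
                                  i1[2] if dz else i0[2]]
    return value

g = sphere_sdf_grid()
inside = trilinear(g, (0.5, 0.5, 0.5))    # negative: the centre lies inside
outside = trilinear(g, (0.95, 0.5, 0.5))  # positive: this point lies outside
unsigned = abs(inside)                    # unsigned variant loses orientation
```

Dropping the sign (last line) gives the positive-distance variant suitable for open surfaces, at the cost of the inside/outside information that a flow estimated on closed-surface data can otherwise exploit.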
Figure 5: Overview of the process. Top: original meshes M(t−1), M(t), M(t+1), M(t+2); second row: one plane from the ASDVs at level eight, g(t−1), g(t), g(t+1), g(t+2); third row: the flows v(t) and v(t+1), interpolated onto the corresponding surfaces; bottom: the propagated template mesh at frames t and t+1.

Figure 6: Frames from the accompanying video showing a mesh textured in the first frame, and rendered in the consecutive frames of the extracted animated mesh with the same texture assignment. The overall positioning of the texture is similar to the first frame. The neck area undergoes chaotic undulations in the original data, causing excessive stretching on the top.

				