                               Cartographic Feature Extraction
                                   George Vosselman

                               Delft University of Technology
                       Faculty of Civil Engineering and Geosciences
                               Thijsseweg 11, 2629 JA Delft
                                      The Netherlands
                           e-mail: g.vosselman@geo.tudelft.nl

1. Introduction

Of all tasks in photogrammetry, the extraction of cartographic features is the most
time-consuming. Since the introduction of digital photogrammetry, much attention has
therefore been paid to the development of tools for a more efficient acquisition of cartographic
features. Fully automatic acquisition of features like roads and buildings, however,
appears to be very difficult and may even be impossible. The extraction of cartographic
features from digital aerial imagery requires interpretation of this imagery. The knowledge
one needs about the topographic objects and their appearances in aerial images in order
to recognise these objects and extract the relevant object outlines is difficult to model and
to implement in computer algorithms. Therefore, only limited success has been obtained in
developing automatic cartographic feature extraction procedures. Human operators
appear to be indispensable for a reliable interpretation of aerial images.

Still, computer algorithms can contribute significantly to the improvement of the efficiency
of feature extraction from aerial imagery. Whereas human operators are better at
interpretation, computer algorithms often outperform operators at specific
measurement tasks. So-called semi-automatic procedures therefore combine the
interpretation skills of the operator with the measurement speed of a computer.

This paper reviews the most common strategies for semi-automatic cartographic feature
extraction from aerial imagery. In several strategies, knowledge about the features to be
extracted can easily be incorporated into the measurement part performed by a computer
algorithm. Some examples of the usage of such knowledge will be described in the
discussion at the end of this paper.


2. Semi-automatic feature extraction

Semi-automatic feature extraction is an interactive process between an operator and one
or more computer algorithms. To initiate the process, the operator interprets the image
and decides which features are to be measured and which algorithms are to be used for
this task. By positioning the mouse cursor the approximate location of a feature is pointed
out to the algorithm. If required the operator also may tune some of the algorithm’s
parameters and select an object model for the current feature. Semi-automatic feature
extraction algorithms have been developed for measuring primitive features such as
points, lines and regions, but also for more complex, often parameterized, objects.

2.1 Extraction of points

Semi-automatic measurement of points is used for measuring height points as well as for
measuring specific object corners. The first case is usually known as a “cursor on the
ground” utility, which is available in several commercial digital photogrammetric
workstations. Here, the operator positions the cursor at some XY-position in a
stereoscopic model, whereas the terrain height at this position is determined
automatically by matching patches of the stereo image pair. After this determination the
3D cursor snaps to the local terrain surface. Thus, the operator is relieved from a precise
stereoscopic measurement and can therefore increase the speed of data acquisition.
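The core of such a utility is a patch match between the two images of the stereo pair. The
following Python sketch illustrates the idea under the simplifying assumption of
epipolar-rectified images, so that the search reduces to finding the disparity with the
highest normalised cross-correlation; all names and parameter values are illustrative, and
the conversion from disparity to terrain height (which requires the orientation data of the
stereo model) is omitted.

```python
import numpy as np

def normalised_cross_correlation(a, b):
    # Correlation coefficient of two equally sized grey value patches
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def snap_cursor_to_ground(left, right, row, col, half=7, max_disp=40):
    """Find the disparity of the best matching patch along the epipolar row.

    Assumes epipolar-rectified images and a patch lying inside both images.
    The returned disparity would subsequently be converted to a terrain
    height using the stereo model orientation (not shown here).
    """
    template = left[row - half:row + half + 1, col - half:col + half + 1]
    best_disp, best_score = 0, -1.0
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        candidate = right[row - half:row + half + 1, c - half:c + half + 1]
        score = normalised_cross_correlation(template, candidate)
        if score > best_score:
            best_disp, best_score = d, score
    return best_disp, best_score
```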

The second type of point measurement algorithm is used to make the cursor snap to a
specific object corner. These algorithms can be used for monoplotting as well as for
stereoplotting. For monoplotting the operator approximately indicates the location of an
object corner to be measured. The image patch around this approximate point will usually
contain grey value gradients caused by the edges of the object. By applying an interest
operator (see e.g. [Förstner and Gülch, 1987]) to this patch the location of the object
corner can be determined. Thus, such utilities can make the cursor snap to the nearest
point of interest. When using the same principle for stereoplotting, the operator has to
supply an approximate 3D position of the object corner. The interest operator can then be
applied to both stereo images, whereas the estimated 3D corner position will be
constrained by the known epipolar geometry. For the measurement of house roof
corners, this procedure was reported to double the speed of data acquisition and reduce
operator fatigue [Firestone et al., 1996].
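The sketch below illustrates how such a snapping utility might be realised for the
monoplotting case: a Förstner/Harris-style interest measure, computed from the structure
tensor of the grey value gradients, is evaluated around the approximate cursor position and
the cursor is moved to its maximum. The window sizes and the interest measure are
illustrative choices, not the exact operator of [Förstner and Gülch, 1987].

```python
import numpy as np

def snap_to_interest_point(image, row, col, half=10, win=2):
    """Move the cursor from (row, col) to the strongest corner nearby.

    Assumes the search patch lies well inside the image. The interest
    measure (determinant over trace of the structure tensor) is a
    Förstner-like weight; window sizes are illustrative.
    """
    gy, gx = np.gradient(image.astype(float))
    best, best_score = (row, col), -np.inf
    for r in range(row - half, row + half + 1):
        for c in range(col - half, col + half + 1):
            # Structure tensor over a small window around (r, c)
            wx = gx[r - win:r + win + 1, c - win:c + win + 1]
            wy = gy[r - win:r + win + 1, c - win:c + win + 1]
            sxx, syy, sxy = (wx * wx).sum(), (wy * wy).sum(), (wx * wy).sum()
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            score = det / trace if trace > 0 else 0.0
            if score > best_score:
                best, best_score = (r, c), score
    return best
```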




Figure 1: Left: Approximate house corners outlined by an operator.
          Right: House corners after snapping to the nearest point of interest.

2.2 Extraction of lines

The extraction of lines from digital images has been a topic of research for many years in
the area of computer vision [Rosenfeld, 1969, Hueckel, 1971, Davis, 1975, Canny, 1986].
First attempts to extract linear features from digital aerial and space imagery were
reported in [Bajcsy and Tavakoli, 1976, Nagao and Matsuyama, 1980]. Semi-automatic
algorithms have been developed for the extraction of roads. These algorithms can be
classified into two categories: algorithms using deformable templates and road trackers.

2.2.1 Deformable templates

Before starting an algorithm using deformable templates, the operator needs to provide the
approximate outline of the road. This initial template of the road is usually represented by
a polygon with a few nodes near the road to be measured. The task of the algorithm is
to refine the initial template to a new polygon with many more nodes that accurately
outline the road edges or the road centre (depending on the road model used). This is
achieved by deforming the template such that a combination of two criteria is optimised:
the template should coincide with image pixels with high grey value gradients and the
shape of the template should be relatively smooth. The latter criterion is often
accomplished by constraining the (first and) second derivatives of the template. This
constraint is needed for regularisation, but it also leads to more likely outlines,
since road shapes are generally quite smooth.

Most algorithms of this kind are based on so-called snakes [Kass et al., 1988]. The
snakes approach uses an energy function in which the two optimisation objectives are
combined. After computing the energy gradients due to changes in the positions of the
polygon nodes the optimal direction for the template deformation can be found by solving
a set of differential equations. In an iterative process the polygon nodes are shifted in this
optimal direction. The resulting behaviour of the template looks like that of a moving
snake, hence the name.
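The sketch below shows one explicit iteration of such a template deformation in the spirit of
[Kass et al., 1988]: each polygon node is pulled towards ridges of the gradient magnitude
image (external term), while a second-difference term keeps the template smooth (internal
term). The weights and the step size are illustrative; actual snake implementations solve
the corresponding system of equations implicitly.

```python
import numpy as np

def snake_step(nodes, grad_mag, alpha=0.2, step=1.0):
    """One explicit deformation step of a polygonal snake.

    nodes:    (N, 2) array of (row, col) polygon nodes
    grad_mag: grey value gradient magnitude image
    """
    # Pulling the nodes uphill on the gradient magnitude image attracts
    # the template to edges (external energy term).
    gy, gx = np.gradient(grad_mag.astype(float))
    new_nodes = nodes.astype(float).copy()
    for i in range(1, len(nodes) - 1):          # end nodes are kept fixed
        r, c = np.round(nodes[i]).astype(int)
        external = np.array([gy[r, c], gx[r, c]])
        # Second difference of the neighbouring nodes: moves the node towards
        # the midpoint of its neighbours and thus smooths the template.
        curvature = nodes[i - 1] - 2 * nodes[i] + nodes[i + 1]
        new_nodes[i] += step * external + alpha * curvature
    return new_nodes
```

Repeating this step until the nodes no longer move yields the fitted template.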

Whereas snakes were initially formulated for optimally outlining linear features in a single
image, they can also be used to outline a feature in 3D object space by combining grey
value gradients from multiple images together with the exterior orientation of these
images [Trinder and Li, 1995, Neuenschwander et al., 1995]. This snakes approach has
also been extended to outline both sides of a road simultaneously. More research is being
conducted to further improve the efficiency of mapping with snakes by reducing the
requirements on the precision of the initial template provided by the operator and by
incorporating scene knowledge into the template deformation process [Neuenschwander
et al., 1995, Fua, 1996].

Figure 2: Outlining of road sides by a snake algorithm. The images at the left side contain
the initial template as given by an operator. During the iterations (images to the right) the
template is fitted to the road sides. (Adapted from [Neuenschwander et al., 1995].)

2.2.2 Road trackers

In the case of snakes, the operator needs to provide a rough outline of the complete road
to be measured. In contrast, the input for road trackers only consists of a small road
segment outlined by the operator. The purpose of the road tracker is then to find the
adjacent parts of the road.

Most road trackers are based on matching grey value profiles [McKeown and Denlinger,
1988, Quam and Strat, 1991, Vosselman and Knecht, 1995]. Based on the initial road
segment outlined by the operator, a characteristic grey value profile of the road is derived.
Furthermore, the local direction and curvature of the road are estimated. This estimate is
used to predict the position of the road at some step size beyond the initial road segment.
At this position, a grey value profile is extracted from the image perpendicular to the
predicted road direction. By matching this profile with the characteristic road profile,
a shift between the two profiles can be determined. Based on this shift, an
estimate for the road position along the extracted profile is obtained. By incorporating
previously estimated positions, other road parameters like the road direction and the road
curvature can also be updated. The updated road parameters can then be used to make
a next prediction of the road position at some step size further along the road. This
recursive process of prediction, measurement by profile matching and updating the road
parameters can be implemented elegantly in a Kalman filter [Vosselman and Knecht,
1995].
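A simplified version of this prediction/measurement/update cycle is sketched below. For
brevity, the full Kalman filter of [Vosselman and Knecht, 1995] is replaced by a fixed gain,
the state is reduced to position and direction (without curvature), and the profile
extraction, step size and correlation threshold are illustrative.

```python
import numpy as np

def extract_profile(image, centre, normal, half_len=20):
    # Sample grey values along a line through 'centre' in direction 'normal'
    offsets = np.arange(-half_len, half_len + 1)
    rows = np.clip(np.round(centre[0] + offsets * normal[0]).astype(int),
                   0, image.shape[0] - 1)
    cols = np.clip(np.round(centre[1] + offsets * normal[1]).astype(int),
                   0, image.shape[1] - 1)
    return image[rows, cols].astype(float)

def match_profiles(reference, observed):
    # Shift (in pixels) that maximises the correlation between the reference
    # profile and a window of the (longer) observed profile
    half = (len(observed) - len(reference)) // 2
    best_shift, best_score = 0, -np.inf
    for s in range(-half, half + 1):
        window = observed[half + s: half + s + len(reference)]
        score = np.corrcoef(reference, window)[0, 1]
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift, best_score

def track_road(image, position, direction, reference,
               step=10.0, gain=0.5, n_steps=100, min_corr=0.7):
    """position: (row, col) of the last measured road point;
    direction: unit vector (drow, dcol) of the local road direction;
    reference: characteristic grey value profile (shorter than the extracted
    profiles, so that it can be shifted within them)."""
    path = [np.array(position, float)]
    direction = np.array(direction, float)
    for _ in range(n_steps):
        predicted = path[-1] + step * direction               # prediction
        normal = np.array([-direction[1], direction[0]])      # road normal
        observed = extract_profile(image, predicted, normal)
        shift, corr = match_profiles(reference, observed)     # measurement
        if corr < min_corr:
            break                                             # matching failed
        corrected = predicted + gain * shift * normal         # update position
        direction = corrected - path[-1]
        direction /= np.linalg.norm(direction)                # update direction
        path.append(corrected)
    return np.array(path)
```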

The road tracking continues until the profile matching fails at several consecutive
predicted positions, i.e. it stops when several consecutive extracted profiles show little
correspondence with the characteristic grey value profile. Some characteristic results are
shown in figure 3. Matching failures can often be explained by trees along the road or
road crossings and junctions. Due to these objects the grey value profiles extracted at
those positions deviate substantially from the characteristic profile. By making predictions
with increasing step sizes, the road tracker is often able to jump over these kinds of
obstacles and continue the outlining of the road.




Figure 3: Left: Original image. Right: Image overlaid with results of a road tracker. Black
road segments are indicated by an operator. White road segments have been measured
successfully. Grey road segments are based on prediction only since the measurement
failed at those positions.

2.3 Extraction of areas

Due to the lack of modelled knowledge about objects, the computer supported extraction
of area features is more or less limited to areas that are homogeneous with respect to
some attribute. Of course, in images the most common attributes to look at are the pixel’s
grey value, colour and texture. The extraction of objects like water areas and
house roofs can be facilitated by algorithms that extract homogeneous grey value areas.
The most common approach is to let the operator indicate a point on the homogeneous
object surface and let an algorithm find the outlines of that surface.
An example can be seen in figure 4. It is clear that the results of such an algorithm still
require some editing by an operator. Overhanging trees at the left side of the river and
trees that cast dark shadows at the right side of the river cause differences between the
bounds of the homogeneous area and the river borders as they should be mapped.
Similar differences will also arise when using these techniques to extract building roofs.
Most objects are not homogeneous enough to allow a perfect delineation. Still, the
majority of the lines to be mapped may be at the correct place. Thus, editing the results of
such an area feature extraction will often be faster than a complete manual mapping
process. Firestone et al. [1996] report the use of this technique for mapping lake shores.
Especially for small scale mapping this can be very efficient since the water surface
generally appears homogeneous and the disturbing effects of trees along the shoreline,
as in the example, may be negligible at small scale.

The algorithms used to find the boundaries of a homogeneous area are usually based on
the region growing algorithm [Haralick and Shapiro, 1992]. Starting at the pixel indicated
by the operator, this algorithm checks whether an adjacent pixel has similar attributes
(e.g. grey value). If the difference is below some threshold, the two pixels are merged into
one area. Next, the attributes of another pixel adjacent to this area are examined, and this
pixel is also merged with the area if the attribute differences are small. In this way a
homogeneous area is grown pixel by pixel. This process is repeated until all pixels that
are adjacent to the grown area have significantly different attributes.
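A minimal region growing sketch along these lines is given below; the grey value threshold
and the 4-connected neighbourhood are illustrative choices.

```python
from collections import deque
import numpy as np

def grow_region(image, seed, threshold=10.0):
    """Grow a homogeneous region from the pixel (row, col) indicated by the operator."""
    rows, cols = image.shape
    region = np.zeros((rows, cols), dtype=bool)
    region[seed] = True
    total, count = float(image[seed]), 1          # running mean of the region
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]:
                # Merge the neighbour if its grey value is close to the region mean
                if abs(float(image[nr, nc]) - total / count) <= threshold:
                    region[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return region
```

The boundary of the returned binary mask corresponds to the outline of the homogeneous area.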




Figure 4.: Left: River surface manually indicated by a single point. Right: River borders
found by region growing (and a little mathematical morphology for closing small gaps).

2.4 Extraction of complex objects

As requirements on geographical data are shifting from 2D to 3D and from vector data to
object-oriented data, the acquisition of these data with digital photogrammetry is also
increasingly three-dimensional and object-based. In particular for the acquisition of 3D
objects like buildings and other highly structured objects, the usage of object models can
be beneficial. These models contain the topology and the internal geometrical
constraints of the object. The usage of these models relieves the operator from specifying
these data within the measurement process and will improve the robustness and
precision of the data acquisition.

A common interactive approach is illustrated in figure 5. After the selection of an
appropriate object model by an operator, the operator approximately aligns the object
model with the image (left image). In a second step a fitting algorithm is used to find the
best correspondence between the edges of the object model and the location of high
gradients in the image (middle image). Especially in the presence of neighbouring edges with
high contrast (like the windows on the house front in the example), the resulting fit often
does not correspond to the desired result and therefore requires one or more additional
corrective measurements by the operator (right image).

Different approaches are being used to find the optimal alignment of the object model to
the image. Fua [1996] extended the snake algorithm described above to the fitting of object
models. The energy function is defined as the sum of the grey value
gradients along the model edges. Derivatives of this energy function with respect to
changes in the co-ordinates of the object corners determine the optimal direction for
changes in these co-ordinates, whereas constraints on the co-ordinates ensure that a
valid building model with parallel and rectangular edges is maintained.
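The sketch below illustrates the principle with a deliberately simple model: a rectangle
described by four parameters, an energy defined as the mean gradient magnitude sampled along
the model edges, and a numerical gradient ascent on the parameters. The 3D projection and
the geometric constraints of a real building model are omitted; all names, the sampling
density and the step sizes are illustrative.

```python
import numpy as np

def rectangle_edge_points(params, n=25):
    """params = (row, col, half_height, half_width); returns sampled edge points."""
    r, c, hh, hw = params
    t = np.linspace(-1.0, 1.0, n)
    top    = np.stack([np.full(n, r - hh), c + t * hw], axis=1)
    bottom = np.stack([np.full(n, r + hh), c + t * hw], axis=1)
    left   = np.stack([r + t * hh, np.full(n, c - hw)], axis=1)
    right  = np.stack([r + t * hh, np.full(n, c + hw)], axis=1)
    return np.concatenate([top, bottom, left, right])

def edge_energy(grad_mag, params):
    # Mean gradient magnitude along the projected model edges
    pts = np.round(rectangle_edge_points(params)).astype(int)
    pts[:, 0] = np.clip(pts[:, 0], 0, grad_mag.shape[0] - 1)
    pts[:, 1] = np.clip(pts[:, 1], 0, grad_mag.shape[1] - 1)
    return grad_mag[pts[:, 0], pts[:, 1]].mean()

def fit_model(grad_mag, params, step=0.5, eps=1.0, n_iter=50):
    """Refine the approximate parameters by gradient ascent on the edge energy."""
    params = np.array(params, float)
    for _ in range(n_iter):
        # Numerical derivative of the energy with respect to each parameter
        grad = np.zeros_like(params)
        for i in range(len(params)):
            plus, minus = params.copy(), params.copy()
            plus[i] += eps
            minus[i] -= eps
            grad[i] = (edge_energy(grad_mag, plus)
                       - edge_energy(grad_mag, minus)) / (2 * eps)
        params += step * grad
    return params
```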

Lowe [1991] and Lang and Schickler [1993] use parametric object descriptions and
determine the optimal parameter values by fitting the object edges to edge pixels (pixels
with high grey value gradients) and extracted linear edges respectively. Veldhuis [1998]
analysed the approaches of Fua [1996] and Lowe [1991] with respect to suitability for
mapping.




Figure 5.: Semi-automatic model based feature extraction. Left: Object model
approximately positioned by an operator. Middle: Result of a fitting algorithm.
Right: Fitting result corrected by a manual measurement.

3. Discussion

Semi-automatic measurement techniques as reviewed in this paper surely improve the
efficiency of cartographic feature extraction. In most cases there is a clear interaction
between the human operator and one or more measurement algorithms. Prior to the
measurement the task of the operator is to identify the object to be measured, to select
the correct object model and algorithm and to provide approximate values. After the
measurement by the computer the operator needs to correct part of the measurements,
since the delineation resulting from the objective of the measurement algorithm often
does not correspond with the desired object boundaries.

Robustness as well as precision of the semi-automatic measurements can be improved
by incorporating knowledge about the topographic features into the measurement
process. A clear example of this was already shown for the case of complex object
measurement. Further knowledge can be added in the form of constraints between
neighbouring houses and roads. Hwang et al. [1986], for example, use the fact
that most houses are parallel to a road and that houses are often connected to a
road by a driveway.

In the case of linear features many more heuristics can be used to guide the feature
extraction. Cleynenbreugel et al. [1990] note that roads usually have no steep slopes
and that, therefore, digital elevation models can be useful for road extraction in
mountainous areas. Furthermore, they note that road patterns are often typical of
the type of landscape (mountainous, flat rural, urban). Soft bounds on the usually low
curvature of principal roads are used in the road tracker described in [Vosselman and
Knecht, 1995].

Useful properties of water surfaces are related to height. Fua [1996] extracts rivers as 3D
linear features and imposes the constraint that the height of the river decreases
monotonically. Furthermore, when lakes are extracted as 3D surfaces they can often be
assumed to be horizontal. The latter constraint can be used to automatically detect
delineation errors caused by occluding trees along the lake shore.
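A minimal sketch of such an automatic check is given below: boundary points whose measured
height deviates noticeably from the median water level are flagged as likely delineation
errors. The tolerance is an illustrative value.

```python
import numpy as np

def flag_shoreline_errors(boundary_heights, tolerance=0.5):
    """boundary_heights: 1D array of measured heights along the extracted shoreline.

    Returns a boolean array marking points that violate the assumption of a
    horizontal water surface and should be checked by the operator.
    """
    water_level = np.median(boundary_heights)
    return np.abs(boundary_heights - water_level) > tolerance
```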

To obtain a higher degree of automation, in which the aerial image is interpreted by computer
algorithms, much more knowledge needs to be modelled. Knowledge-based interpretation of
aerial images and the usage of existing GIS databases within this process is a topic of
current research efforts [Kraus and Waldhäusl, 1996, Gruen et al., 1997].


References

Bajcsy, R. and M. Tavakoli [1976]: Computer Recognition of Roads from Satellite
Pictures. IEEE Transactions on Systems, Man and Cybernetics, vol. 6, no. 9, pp. 623-
637.
Canny, J.F. [1986]: A Computational Approach to Edge Detection. IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-698.

Cleynenbreugel, J. van, F. Fierens, P. Suetens, and A. Oosterlink [1990]: Delineating
Road Structures on Satellite Imagery by a GIS-Guided Technique. Photogrammetric
Engineering and Remote Sensing, vol. 56, no. 6, pp. 893-898.

Davis, L.S. [1975]: A Survey of Edge Detection Techniques. Computer Graphics and
Image Processing, vol. 4, pp. 248-270.

Firestone, L., S. Rupert, J. Olson, and W. Mueller [1996]: Automated Feature Extraction:
The Key to Future Productivity, Photogrammetric Engineering and Remote Sensing, vol.
62, no 6, pp. 671-674.

Förstner, W. and E. Gülch [1987]: A Fast Operator for Detection and Precise Location of
Distinct Points, Corners and Centers of Circular Features. In: Proceedings ISPRS Inter-
commission Workshop on "Fast Processing of Photogrammetric Data", Interlaken, June
1987.

Fua, P. [1996]: Model-based Optimization: Accurate and Consistent Site Modeling,
International Archives of Photogrammetry and Remote Sensing, vol. 31, part B3, pp.
222-233.

Gruen, A., E.P. Baltsavias and O. Henricsson (Eds.) [1997]: Automatic Extraction of Man-
Made Objects from Aerial and Space Images (II), Birkhäuser Verlag.

Haralick, R.M. and L.G. Shapiro [1992]: Computer and Robot Vision. Vol. 1, Addison-
Wesley, Reading, MA.

Hueckel, M.H. [1971]: An Operator Which Locates Edges in Digitized Pictures. Journal of
the Association for Computing Machinery, vol. 18, no. 1, pp. 113-125.

Hwang, V.S.-S., L.S. Davis, and T. Matsuyama [1986]: Hypothesis Integration in Image
Understanding Systems. Computer Vision, Graphics and Image Processing, vol. 36, pp.
321-371.

Kass, M., A. Witkin, and D. Terzopoulos [1988]: Snakes: Active Contour Models.
International Journal of Computer Vision, vol. 1, pp. 321-331.

Kraus, K. and P. Waldhäusl (Eds.) [1996]: International Archives of Photogrammetry and
Remote Sensing, vol. 31, part B3.

Lang, F. and W. Förstner [1996]: 3D-City Modelling with a Digital One-Eye Stereo
System. International Archives of Photogrammetry and Remote Sensing, vol. 31, part B3,
pp. 415-420.
Lang, F. and W. Schickler [1993]: Semi-automatische 3D-Gebäudeerfassung aus
digitalen Bildern. Zeitschrift für Photogrammetrie und Fernerkundung, vol. 61, no. 5,
pp.193-200.

Lowe, D. [1991]: Fitting Parameterized Three-Dimensional Models to Images. IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 5, pp. 441-450.

McKeown, D.M. and J.L. Denlinger [1988]: Cooperative Methods for Road Tracing in
Aerial Imagery. IEEE Proceedings on Computer Vision and Pattern Recognition, Ann
Arbor, MI, pp. 662-672.

Nagao, M. and T. Matsuyama [1980]: A Structural Analysis of Complex Aerial
Photographs. In: Nadler, M. (Ed.), Advanced Applications in Pattern Recognition, Plenum
Press, New York, vol. 1, pp. 1-199.

Neuenschwander, W., P. Fua, G. Székely and O. Kübler [1995]: From Ziplock Snakes to
Velcro™ Surfaces. Ascona Workshop on Automatic Extraction of Man-Made Objects from
Aerial and Space Images, Birkhäuser Verlag, pp. 105-114.

Quam, L.H. and T.M. Strat [1991]: SRI Image Understanding Research in Cartographic
Feature Extraction. In: Ebner et al. (Ed.), Digital Photogrammetric Systems, Wichmann
Verlag, Karlsruhe, pp. 111-122.

Rosenfeld, A. [1969]: Picture Processing by Computer. ACM Computing Surveys, vol. 1, no.
3, pp. 147-176.

Trinder, J. and H. Li [1995]: Semi-Automatic Feature Extraction by Snakes. Ascona
Workshop on Automatic Extraction of Man-Made Objects from Aerial and Space Images,
Birkhäuser Verlag, pp. 95-104.

Veldhuis, H. [1998]: Performance Analysis of Two Fitting Algorithms for the Measurement
of Parameterised Objects. International Archives of Photogrammetry and Remote
Sensing, vol. 32, part B3, submitted.

Vosselman, G. and J. de Knecht [1995]: Road Tracing by Profile Matching and Kalman
Filtering. Ascona Workshop on Automatic Extraction of Man-Made Objects from Aerial
and Space Images, Birkhäuser Verlag, pp. 265-274.

				