Using Genetic Algorithms for Real-Time Level
of Detail Approximation
May 7, 2003
In this paper I examine the possibility of using Genetic Algorithms to create a real-time continuous Level of Detail system. This system simplifies polygonal meshes using a floating cell decimation technique and then uses a form of evolutionary programming to 'learn' the best policy for displaying the mesh. The resulting mesh can then be displayed in real time.
1 Introduction

Real-time graphics are becoming increasingly prevalent in our world. Computer games, training simulations and medical imaging all rely on interactive
graphics. For the most part, complex geometric models comprising meshes of triangles are the backbone of such systems. Such models allow us to display arbitrary model geometry in real time, but there is a significant rendering cost in drawing all those triangles. Reducing the number of triangles in our models would allow us to render scenes faster and to render bigger and
more complex scenes interactively. For this project I attempted to create a system using Level-of-Detail approximation and Genetic Algorithms that would allow me to create low detail models interactively without significant loss of image fidelity. For the purposes of this paper I will call my system GA/LOD.
2 About LOD Systems
Level of detail (LOD) approximation aims to reduce the rendering time of
scenes containing triangle meshes by drawing complex meshes for models
that are close to the viewpoint and simple ones for models in the distance.
By adjusting the polygon count based on distance or screen space, we can
ensure that the loss of model detail will not result in a loss of image ﬁdelity.
LOD systems can be discrete or continuous. Discrete LOD is the simpler
of the two schemes. A discrete LOD system stores a number of meshes of
increasing complexity for each model. When a model needs to be rendered,
the system simply picks the appropriate mesh and draws it. Discrete systems
are easy to implement, but they have their drawbacks. The major drawback
is the 'popping' effect that occurs when a model crosses the threshold between two detail levels and the model switches noticeably from one mesh to another. The difference between the meshes causes a noticeable visual artifact that
can be quite distracting. Continuous LOD systems get around this problem
by dynamically creating models at render time. This allows meshes with an arbitrary number of triangles to be created, meaning that there is never a jump from one mesh to another, and consequently no 'popping'. Continuous
LOD systems can also be view-dependent, meaning that the reduction of
detail is based on model orientation as well as distance. For this project I
created a continuous view-independent LOD system.
2.1 Model Decimation
Most research in the ﬁeld of LOD systems revolves around the problem of
creating low detail models to display. These can be created by hand, but
it is far more desirable to have these models created automatically by some
algorithm. These algorithms generally take a mesh of high complexity and
reduce or ’decimate’ the mesh to create a mesh with fewer polygons that
retains as much of the original model’s shape as possible. The problem of
reducing models is an important area of research in computer graphics and
for this project, I was able to take advantage of some of this research. I chose
the Floating Cell Clustering technique as described by Low and Tan as the
basis for my decimation scheme.
Cell clustering classifies each vertex in the mesh, assigning a weight that represents that vertex's importance to the mesh shape. The vertices are then sorted by weight, most important to least. The most important vertex is removed from the list and every other vertex within a given distance of it is found. The vertices within that distance make up the contents of a cell. A weighted average vertex is calculated for the cell; that new vertex is added to the mesh while the original vertices in the cell are removed. This reduces the number of vertices in the mesh and thus the number of polygons to be drawn. This process is repeated until all of the classified vertices have been removed and replaced by their representative vertices.
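The clustering pass described above can be sketched as follows. This is an illustrative reconstruction in Python, not the original implementation; the vertex weighting is assumed to be given, and the function and parameter names are my own.

```python
import numpy as np

def cluster_vertices(vertices, weights, radius):
    """One floating-cell clustering pass: repeatedly take the most
    important unprocessed vertex, gather all remaining vertices
    within `radius` of it into a cell, and replace the cell with
    its weighted average vertex."""
    order = sorted(range(len(vertices)), key=lambda i: -weights[i])
    remaining = set(order)
    new_vertices = []          # representative vertices of the cells
    remap = {}                 # old vertex index -> new vertex index
    for seed in order:
        if seed not in remaining:
            continue           # already absorbed into an earlier cell
        cell = [i for i in remaining
                if np.linalg.norm(vertices[i] - vertices[seed]) <= radius]
        w = [weights[i] for i in cell]
        rep = np.average([vertices[i] for i in cell], axis=0, weights=w)
        remap.update({i: len(new_vertices) for i in cell})
        remaining.difference_update(cell)
        new_vertices.append(rep)
    return np.array(new_vertices), remap
```

Running this once on a mesh's vertex list gives one level of the decimation; any face whose three vertices map to the same cell can be dropped.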
2.2 Creating the Mesh Hierarchies
The basic unit in my GA/LOD system is the mesh hierarchy. This object
is essentially a tree of vertices. The leaves of the tree represent the vertices of the original mesh, and their parents represent the average vertex for each cell. In order to create a deeper tree, we run the cell clustering
process on the decimated mesh and repeat until we are satisﬁed. When we
draw our model, we run through each of the polygons in the model and for
each vertex in a given polygon, we traverse up the tree from the bottom to
a suitable level to determine the vertex to use. To determine the level to
which we must traverse, we must decide for each vertex whether we should collapse the vertex or not. If we decide to collapse, we move up to the vertex's parent node and decide whether or not to collapse that vertex. We repeat this process until we have found the right level and draw the triangle using that vertex. If all three vertices of a triangle collapse to the same parent vertex, we do not need to draw that triangle.
We must decide at each node whether to collapse a vertex or not. To do this I assign a number, or weight, to each node that represents the priority of that vertex. At each rendering step we multiply the model's distance by the weight. If the result of this calculation is greater than a certain threshold, we collapse the vertex; otherwise we do not. This leaves
the problem of how to assign these weights.
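Under the scheme above, the per-vertex collapse test is a one-line comparison and the render-time traversal is a short loop. The class and function names below are illustrative, not from the original system:

```python
class HNode:
    """A node in the mesh hierarchy: a vertex position, a learned
    weight (the 'gene' for this node), and a parent cell vertex."""
    def __init__(self, position, weight, parent=None):
        self.position = position
        self.weight = weight
        self.parent = parent   # None at the top of the hierarchy

def resolve_vertex(leaf, distance, threshold=1.0):
    """Walk upward from a leaf, collapsing to the parent while
    weight * distance exceeds the threshold."""
    node = leaf
    while node.parent is not None and node.weight * distance > threshold:
        node = node.parent
    return node
```

A triangle is skipped whenever `resolve_vertex` returns the same node for all three of its corners.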
3 About Genetic Algorithms
I decided to use a genetic algorithm (GA) approach to assigning weights.
GAs are a passive reinforcement learning technique. They are ideally suited to optimizing a set of values, making them a natural choice for this application.
My GA/LOD system attempts to ’learn’ the correct—or at least acceptable—
values for our mesh.
Genetic Algorithms (GA) and Evolutionary Computation (EC) in general
attempt to solve problems using the Darwinian theory of species evolution.
They borrow the basic ideas of populations, individuals, breeding, and natural
selection from the ﬁeld of biology in order to try and mimic the success of
the evolutionary process.
EC techniques are a useful way of finding a solution in a problem domain that is not fully understood, or that is too complicated to be accounted for directly. As Schwefel notes, "EC should be taken into consideration if and only if classical methods for the problem at hand don't exist, are not applicable, or obviously fail." Because they are slow, and not guaranteed to converge, they are not appropriate in any domain for which there is a more analytical approach.
The first work in GA was done in 1958 by Friedberg, but it was not until the 1960s and 1970s that these techniques were applied to practical problems. Though
the terminology of this ﬁeld has been somewhat interchanged over the years,
Schwefel lists the three basic types of EC:
Genetic Algorithms GAs are used for optimization and use a string of binary digits to represent an individual's genes. Recombination involves copying one half of the binary string from one parent and one half from the other, split at one or more crossover points. This approach was pioneered by J. Holland in the U.S. in the 1960s and 1970s.
Evolution Strategies ESs are used for optimum ﬁnding and use real num-
ber genes. They were developed in Germany in the 1960s by pioneers
such as Schwefel and Rechenberg. 
Evolutionary Programming EPs were originally used to solve prediction
problems, and are now mainly used for numerical optimization. Al-
though EP developed separately from ES, the two techniques are in-
credibly similar. EP was originally developed by L. Fogel in the 1960s,
but they were mostly ignored until they were revived in the 1990s. 
The algorithm I used bears the closest resemblance to GAs but uses real numbers for the gene set.
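For concreteness, the classic GA recombination described above looks like this on bit strings (a generic textbook sketch, not code from this project):

```python
import random

def one_point_crossover(a, b, point=None):
    """Split two equal-length gene strings at a single crossover
    point and join a's prefix to b's suffix."""
    assert len(a) == len(b)
    if point is None:
        point = random.randrange(1, len(a))
    return a[:point] + b[point:]
```

For example, crossing `"11110000"` and `"00001111"` at point 4 yields `"11111111"`.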
3.1 The Algorithm
Genetic Algorithms attempt to learn a function by emulating the process of
natural selection that guides evolution in the natural world. They do this by
creating a population of individuals, each with its own set of ’genes’. The
population is then tested to ﬁnd the ﬁtness of each individual. The ﬁttest
individuals are then selected for breeding while the least ﬁt ones are deleted.
The next population is then made of the children of the ﬁttest individuals.
Breeding two individuals means recombining their genes. This can be done
in any number of ways, but basically the goal is to split the set of genes
somehow and give the child half a set of genes from one parent and half from
the other. If we do only this, we risk having the population converge to a single suboptimal solution. In order to discover new and interesting gene combinations, we introduce random mutation into our breeding process, as happens in nature.
The genes in an individual can represent any policy, behavior or attribute that you wish to optimize. For my project I used the vertex node weights as
my set of genes. To breed my individuals, I went through the list of weights
and for each one I randomly chose one of the two parents to pass that gene
to the child. I added mutation by adding a random number to each weight
such that smaller changes are more likely than large changes.
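That breeding step can be sketched as follows; the `mutation_scale` parameter is an assumed tuning knob, and Gaussian noise is one way to make small changes likelier than large ones:

```python
import random

def breed(parent_a, parent_b, mutation_scale=0.1):
    """Uniform crossover over real-valued weight genes: each gene is
    copied from a randomly chosen parent, then perturbed with
    zero-mean Gaussian noise."""
    child = []
    for wa, wb in zip(parent_a, parent_b):
        gene = wa if random.random() < 0.5 else wb
        child.append(gene + random.gauss(0.0, mutation_scale))
    return child
```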
3.2 The Evaluation Function
In order to select individuals, we need a function to determine their fitness. Since the results of our system are visual, we need a way to compare the images produced by the different mesh display functions. To test an individual,
the mesh is drawn at a number of distances and the pictures are compared
to the undecimated mesh drawn at the same distance. There are a number
of methods of comparing two images. For ease and eﬃciency I chose to use
the root mean square error to compare these images. This method simply measures the per-pixel 'distance' between the two images. There are more
complicated and more eﬀective image comparison techniques that take into
account some of the complexities of visual perception. For this project, the
root mean square comparison was suﬃcient.
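The root mean square comparison amounts to a few lines given two equal-sized images as arrays (a sketch; the project's actual image capture path is not shown):

```python
import numpy as np

def rms_error(img_a, img_b):
    """Root-mean-square per-pixel difference between two images
    of the same shape."""
    diff = img_a.astype(float) - img_b.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```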
If we just use image error to determine an individual’s performance, we
run the risk of converging on a solution whereby no vertices are removed
at any distance. This solution is the optimal one given an image based
evaluation, as it would produce images that are virtually identical to the
original images. It would not be of much use, however, as we would not be
reducing the number of polygons drawn. In order to ensure that our genetic algorithm learns a policy that minimizes both the number of faces drawn and the image error, we need to factor the number of faces drawn into our evaluation function. I employed a weighted average of these two factors as
my GA ﬁtness function. By adjusting the weights in this sum, I can tune this
function to create meshes that favor fewer polygons (known as budget-based
LOD) or meshes that favor higher image quality (ﬁdelity-based LOD).
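A minimal sketch of such a fitness function, assuming the image error is already normalized and with `alpha` as a hypothetical tuning weight:

```python
def fitness(image_error, faces_drawn, max_faces, alpha=0.8):
    """Weighted combination of image error and polygon count; lower
    is better.  alpha near 1 favors fidelity-based LOD, alpha near
    0 favors budget-based LOD."""
    return alpha * image_error + (1 - alpha) * (faces_drawn / max_faces)
```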
4 Results

Figure 1 shows some results of running the algorithm over 1000 generations.
As you can see, the diﬀerences in these images are perceptible, but not overly
problematic. It is also apparent that the number of triangles used to render
the image decreases as the model recedes into the distance. This is the
expected and indeed the desired behavior for a LOD system.
It should be noted that while the LOD model does reduce the number of polygons drawn, it does not do so dramatically. This is because the fitness function ratio I used for this experiment valued visual fidelity much more than polygon budget. If I were more interested in speeding up the rendering at the expense of image quality, I could adjust the evaluation function accordingly.
5 Problems and Issues
While my experiments produced successful results, there were some problems
with this system.
5.1 Execution Time
The most signiﬁcant drawback to this approach is the expense of running
the genetic algorithm. For my simple cup model, running 100 generations
took approximately 10 minutes, and experimentation shows that over 500
generations are needed to gain acceptable results for the cup model. Figure
2 shows the learning curve for the GA/LOD system. Running the algorithm
for 1000 epochs took well over an hour, making it one of the slowest mesh
decimation schemes available. Since all of the expensive processing is done offline, this may be an acceptable cost, especially if this technique can be shown to provide better results, or allow more flexibility, than faster methods.

Figure 1: The decimated cup at various distances. The images on the left are the unsimplified model.

Figure 2: The learning curve for the cup model using GA/LOD.
5.2 Mesh Artifacts
Though the image quality produced by the GA/LOD system is decent, there are some noticeable artifacts in the output models. This is mainly due to
the decimation scheme I used to create the mesh hierarchies. The ﬂoating
cell decimation method produces good results when the vertex weighting is
tuned for the model being decimated, but the quality diminishes signiﬁcantly
when it is applied to an already decimated mesh. The learning process minimizes these artifacts by finding decimation policies that de-emphasize these errors, but it cannot get rid of them altogether. Figure 3 shows some of the artifacts created by this technique.

Figure 3: Some artifacts created by GA/LOD.
5.3 Directionality of the Output
The output of this experiment optimized display of the front of the cup model,
but as Figure 4 shows, the other side of the cup looks extremely bad. This
is a problem in my demo, but is in fact a feature of the GA/LOD system.
My demo takes pictures of the cup from just one angle when performing its
evaluation. To make the cup viewable from all angles, one simply needs to
evaluate each individual from all angles. This will slow down the evaluation
function and will likely extend the time needed to converge, but does not
add signiﬁcantly to the complexity of the program.
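Averaging the per-view scores is all that is needed; a minimal sketch, assuming `evaluate` renders and scores the model at one angle:

```python
def fitness_over_angles(evaluate, angles):
    """Average a single-view fitness function over several camera
    angles so no side of the model is neglected."""
    return sum(evaluate(a) for a in angles) / len(angles)
```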
6 Future Work
The results of this simple experiment are promising, but there is a lot more
that can be done to improve the GA/LOD system.
6.1 Better Decimation
As I mentioned before, the decimation scheme I used to implement this sys-
tem produces less than optimal results when applied to a mesh more than
once. Some work needs to be done to improve this step of the process.

Figure 4: The rear view of the decimated cup model.

One way to do this might be to use another discrete mesh decimation scheme in
the same manner as the ﬂoating cell scheme, but this is unlikely to produce
signiﬁcantly better results. There are a number of continuous detail mesh
representation schemes that would let us create a mesh hierarchy that meets
our needs without the unpleasant artifacts. Hoppe's Progressive Meshes are a prime candidate for exploration, as are Schmalstieg and Schaufler's Smooth LODs. Improving the decimation scheme would not only improve the results of my system, but would also probably allow it to converge sooner.
6.2 View Dependent LOD
The GA/LOD system is an inherently view-dependent system. This fact can be seen in the cup model, which has been optimized for only one viewing angle. The view-dependency can be ignored by evaluating models from multiple angles and averaging the results, or, with a little modification, it can
be taken advantage of. View-dependent systems, though more complex than
view-independent ones, can oﬀer many advantages. Some of these advan-
tages include improved silhouette rendering, and removal of hidden faces. To
adapt the GA/LOD system to be view-dependent I would need to modify
the mesh hierarchies to take into account the orientation of the model as
well as its distance from the camera. This means that each ’gene’ would
need to be a vector of values rather than just a single one, and our simple
threshold function would become somewhat more complex. This would add significantly to the computational complexity of this solution, but should be fairly easy to implement.
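One possible form of that vector-valued test, sketched under the assumption that each gene holds one weight per viewing sector around the model (the sector layout is my own invention, not part of the original system):

```python
import math

def collapse_view_dependent(weights_by_sector, distance, view_angle,
                            threshold=1.0):
    """Pick the weight for the sector containing the current viewing
    angle (in radians), then apply the usual distance test."""
    sectors = len(weights_by_sector)
    idx = int((view_angle % (2 * math.pi)) / (2 * math.pi) * sectors)
    idx = min(idx, sectors - 1)   # guard against rounding at 2*pi
    return weights_by_sector[idx] * distance > threshold
```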
6.3 Textures and Other Perceptual Factors
Although the cup model used in this experiment was untextured, there is no
reason that it could not have had some sort of pattern or texture mapped
onto it. Surface textures can have an eﬀect on the perception of a 3D model,
and certain patterns can make the error in low detail models less obvious.
 This eﬀect, known as visual masking, can allow even greater reduction
of mesh complexity in LOD systems. Because the GA/LOD system uses a
visual evaluation function, the masking eﬀects of textures can be accounted
for without any modiﬁcation to the system. In fact, this system will take
advantage of any perceptual factor that can be picked up by the image com-
parison method. One of the major advantages of the GA/LOD method is
that it will capitalize on eﬀects such as texture masking without the need to
explicitly account for them. Using a more sophisticated comparison routine would help to improve the visual accuracy of the output, but the expense of such algorithms makes this an impractical improvement.
7 Conclusion

Although this experiment is by no means conclusive, it does provide some
promising results. Though clearly not the most efficient algorithm in the world, the flexibility of GA/LOD makes it a potentially useful system for real-time graphics systems. The domain-independent nature of genetic algorithms makes this a perfect solution in situations where the factors affecting image fidelity are not fully understood or not easily enumerated. With a little work and some more experimentation, GA/LOD may become a practical alternative to existing level of detail systems.
References

John R. Koza. Genetic programming. In James G. Williams and Allen Kent, editors, Encyclopedia of Computer Science and Technology, volume 39, pages 29–43. Marcel-Dekker, 1998.

Hugues Hoppe. Progressive meshes. Computer Graphics, 30(Annual Conference Series):99–108, 1996.

R. M. Friedberg. A learning machine: Part I, 1958.

D. Schmalstieg and G. Schaufler. Smooth levels of detail, 1997.

H. Schwefel. The evolution of evolutionary computation.

T. Back, F. Hoffmeister, and H. Schwefel. A survey of evolution strategies.

Kok-Lim Low and Tiow-Seng Tan. Model simplification using vertex-clustering. In Proceedings of 1997 Symposium on Interactive 3D Graphics, pages 75–81, 1997.

Darrell Whitley. An overview of evolutionary algorithms: Practical issues and common pitfalls.

T. Back, G. Rudolph, and H. Schwefel. Evolutionary programming and evolution strategies: Similarities and differences.

James A. Ferwerda, Sumanta N. Pattanaik, Peter Shirley, and Donald P. Greenberg. A model of visual masking for computer graphics. Computer Graphics, 31(Annual Conference Series):143–152, 1997.