
Procedural Terrain Generation and Object Propagation

Skyler York

Advisor: Dr. Norman Badler

April 10, 2007

Computer and console video games have grown in size and complexity since the
early days of Pong™ and Asteroids™, both in terms of the amount of technology
present and the amount of content required to produce a successful modern title.
However, this increase in size and complexity has not been met with a comparable
increase in development time. As a result, artists and designers face the challenge
of producing an ever-increasing amount of content in a relatively fixed amount of
time.

One potential solution to this problem is procedural content generation. This
approach allows certain types of content to be generated algorithmically at runtime
(or, as in my project, in an automated fashion at development time) using a minimal
set of inputs, so that artists and designers can spend the majority of their time on
content that requires human ingenuity and creativity to be produced at an acceptable
level of quality. My project focuses on ways in which suitable terrain can be
generated for potential 3D worlds based on artist-supplied control settings, as well
as how this terrain can be populated with scenery and game objects using a set of
rules, restrictions, and object relationships supplied by the designers to create
believable environments.

1. Related Work

Procedural generation as a process has been used in video games to a certain extent for years,
due to the limited amount of memory available on early game consoles and PCs. Early titles
such as The Sentinel™ claimed to have tens of thousands of unique levels stored in only a few
dozen kilobytes [3]. More recent endeavors into procedurally generated content, such as
.theprodukkt™ and .kkrieger™, are extreme examples of the power of procedural generation:
each is capable of generating entire 3D worlds, musical scores, textures, lighting, and
even limited first-person shooter gameplay mechanics, while requiring only 64 and 96 KB
of disk storage, respectively [4]. The best example of procedural processes being used in a
commercial game title is Spore™, where characters are animated procedurally based on their
anatomy, which is created by the player [5]. Unfortunately, not much more information is
available on how prevalent procedural processes will be in Spore™, as it is still in
development at the time of this writing.

For the most part, however, there hasn’t been widespread application of procedural processes to
games, due to the difficulty of developing procedural models and algorithms that generate
realistic, quality content. Many procedural methods rely heavily on pseudo-random number
generation, such as texture generation [1], or take advantage of patterns and self-similarity in
nature, such as L-system grammars [6]. There are even procedural methods for terrain
generation [2]. But all of these approaches lack the level of artist and designer control required
for application in games, where the goal is to balance content quality against development time.

The procedural methodology used in my project to generate terrain and propagate objects is
based heavily on work done by Amburn et al. [7]. Their procedural generation system consists of
multiple procedural methods running simultaneously in a lock-stepped fashion, each one
generating data. Each process can communicate with other processes using special
communication channels, which allows them to affect each other and change their behavior
based on information received from other processes. This further allows each process to
participate in mutual constraint satisfaction whenever there is a conflict. Communication paths
can be pruned and added so that each procedural process only knows about other processes that
could potentially be in direct conflict.

A prominent theme in [7] is the notion of subdivision. Amburn et al. went so far as to abstract
the notion of subdivision beyond the obvious applications, such as terrain and other geometric data,
into higher-level concepts that could be applied to any procedural generation process.
However, for my purposes, subdivision only plays a role in the generation of terrain, and I chose
to stick to the conventional approach to subdivision that doesn’t involve abstract notions of
scripts, states, or transitions to different representations. Game objects are simply solid objects
in the world that don’t have multiple levels of detail or subdivision. They are generated in full
using stochastic processes and participate in constraint satisfaction with other objects and the
terrain.
Level of subdivision also plays an important role in how the procedural processes interact with
each other, especially with regards to which processes have more control over the outcome of
constraint satisfaction, a characteristic Amburn et al. refer to as “dominance”. For instance, an
object placed in the world would have to sit directly on top of the terrain so that it doesn’t appear
to float in the air or sink below ground level. Thus at a lower level of subdivision, the terrain
would be considered the dominant procedural process so that objects follow the lay of the land.
However at higher levels of subdivision, it’s important that the terrain doesn’t intersect the
object, especially at visible seams where the object meets the terrain (such as the bottom edges of
a cube). Thus at higher levels of subdivision, the objects dominate the terrain in order to
eliminate visual artifacts and provide more realistic transitions at higher levels of detail.

The solution I propose gives the artists and designers a set of comprehensive controls over how
terrain and objects are generated, while making use of pseudo-random values to add variety to
the final generated world. Ideally, each time the entire process is repeated with the same input
data, a different yet desirable result should be produced.

2. Technical Approach

One of the main goals of my design project, in addition to implementing a procedural terrain
generator and object propagator, was to design an overall framework for the entire generation
process. In other words, I tried to abstract the details of my particular implementation into a
general design that could possibly be implemented in other ways. For instance, I chose to
generate the terrain by stochastically subdividing triangles based on two-dimensional probability
density functions. However the real significance of this operation is that I can procedurally add
detail to the terrain while controlling where it occurs. Therefore any operation that can do as
much would be considered consistent with the overall framework.

To this end I have established a series of functional units that are responsible for different aspects
of the generation process, as well as the interfaces between these units that determine how they
interact. These units are the terrain generator, the object generator, the object-object constraint
satisfier, the object-terrain constraint satisfier, and the overall generation process controller.
Each of the following subsections will put forth both the design of each unit as well as the
particular implementation I chose and any challenges or difficulties I encountered. Finally I will
discuss the issue of user control, which is what allows the designer or artist to influence the
results of the process.

2.1. Terrain Generation

2.1.1 Design

The terrain generation unit is responsible for subdividing the terrain, which essentially adds
detail and resolution where it is most “needed.” In the complete absence of any objects on the
terrain, all areas of the terrain are considered equally “needy” for detail, and thus the
subdivision procedure can choose any location on the terrain at random. However, if there are
objects on the terrain, then the regions in which the objects exist require more detail and
resolution, since objects modify the terrain in order to produce conditions that are most suitable
for that particular object (i.e. flat ground for buildings). A higher terrain resolution around
objects allows them to affect the terrain more locally, since smaller triangles have smaller area.
The terrain subdivision procedure should therefore prioritize terrain near objects for subdivision.
A quick note on terminology – a “free” or “unconstrained” vertex is one that can be moved up
and down while maintaining C0 continuity. Conversely, a “locked” or “constrained” vertex
would introduce holes in the terrain were it to be moved.

It is also possible for the terrain to interact with and modify objects. This is allowed so that
neither just the objects nor just the terrain dominates the generation process. The idea here is that
after a sufficient number of iterations in which the objects are “molding” the terrain and the
terrain is adjusting the objects, both will converge locally towards a somewhat steady state, in
which further iterations have negligible effects on the solution.

2.1.2 Implementation

The basic terrain generation process I implemented consists of stochastic subdivision of terrain
triangles. There are three primary features of the terrain, or any triangle mesh in general –
triangles, edges, and vertices. At the lowest level of subdivision and resolution, the terrain
consists of two large triangles forming a quad that spans the entire terrain field. It takes five
edges and four vertices to form these two triangles. As subdivision occurs, these numbers
increase. We consider vertices to be the most important feature however, since it’s the vertices
that form the edges, which form the triangles that make up the terrain. By moving vertices, you
can modify the terrain. You can also move entire edges or triangles; however that is just a result
of moving multiple vertices uniformly. What all this means is that the introduction of new,
unconstrained vertices is how we achieve the higher levels of subdivision and detail necessary to
change the appearance and shape of the terrain. However, given that vertices also form edges and
triangles, it’s important to choose a stochastic subdivision process that doesn’t just add vertices
haphazardly at the expense of these other features.

First I will discuss my initial, flawed subdivision process (which has since been replaced), and
then describe how it led me to my current, more correct process. A random triangle was chosen
using relative projected triangle areas to determine relative selection probabilities, i.e. larger
triangles had a greater probability of being selected in direct proportion to their projected area
onto the XY plane (mathematically, the plane Z = 0). To do this, a location was chosen at
random over the entire terrain field, and the triangle underneath this random location was
selected. Then, a random vertex inside the selected triangle was generated, and new edges were
created connecting this new vertex to each of the existing vertices as shown in figure 2.1:

                                             Figure 2.1
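This first scheme can be sketched as follows (a minimal Python illustration with vertices as coordinate tuples; the function names and the barycentric sampling are my own shorthand, not the project’s actual code):

```python
import random

def random_interior_point(a, b, c):
    """Uniform random point inside triangle (a, b, c), via barycentric
    sampling (the square root keeps the distribution uniform)."""
    r1, r2 = random.random(), random.random()
    s = r1 ** 0.5
    u, v = 1.0 - s, s * (1.0 - r2)
    w = 1.0 - u - v
    return tuple(u * pa + v * pb + w * pc for pa, pb, pc in zip(a, b, c))

def fan_subdivide(tri):
    """Replace one triangle with three by connecting a random interior
    vertex to each existing corner (the scheme of figure 2.1)."""
    a, b, c = tri
    p = random_interior_point(a, b, c)
    return [(a, b, p), (b, c, p), (c, a, p)]
```

Every call adds exactly one free vertex, which is what made the scheme attractive; the artifacts described below stem from the three new edges always terminating at the old corners.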

This subdivision procedure was chosen due to its relative simplicity and the fact that a single free
vertex is added during each stage of subdivision, providing a direct correlation between the
amount of subdivision and the amount of terrain “flexibility” (which we can quantify as the
number of free vertices). However, this process had a major flaw. Even though one new vertex,
three new edges, and three new triangles were being created during each subdivision step, there
was too much dependence on existing features, namely the existing vertices and edges of the
subdivided triangle. As a result, unsightly artifacts invariably appeared: seaming due to a
proliferation of long, thin triangles that all shared common vertices, unintended fractal patterns
due to too many common edges, and so on.

A better method was chosen after closely examining the structure of a typical terrain mesh,
shown in figure 2.2. One can either view it as a regular grid in which each cell has been
subdivided to form triangles, or as a hierarchical structure of nested triangles. The latter is
more apparent in figure 2.3, where internal triangles have been removed:

                     Figure 2.2                                    Figure 2.3

This second interpretation is what allowed me to create a more suitable stochastic subdivision
process based on the properties of an ideal terrain mesh. A random triangle is still selected as
before, however now it is subdivided as shown in figure 2.4:

                                            Figure 2.4

Three random points are generated somewhere along each green portion of an edge, causing a
split in the edge. The three “split vertices” are then connected to form a new triangle. As you
can imagine, it’s possible for an edge to already have been split by another subdivision (since an
edge can have a triangle on both sides). Thus if an edge has already been split then the existing
split point is used to simplify the maintenance of geometric validity.
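This edge-splitting scheme can be sketched as follows (Python; vertices are coordinate tuples, and the shared dictionary of edge splits stands in for the real mesh data structure, which the report does not detail — the split-point range is likewise an assumption):

```python
import random

def split_edge(edge_splits, v0, v1):
    """Split edge (v0, v1) at a random interior point. If the triangle on
    the other side already split this edge, reuse its split vertex so the
    mesh stays geometrically valid."""
    key = frozenset((v0, v1))              # same key for either direction
    if key not in edge_splits:
        t = random.uniform(0.25, 0.75)     # assumed range; stays off the endpoints
        edge_splits[key] = tuple(p0 + t * (p1 - p0) for p0, p1 in zip(v0, v1))
    return edge_splits[key]

def nested_subdivide(tri, edge_splits):
    """Split all three edges and connect the split vertices, turning one
    triangle into four nested ones (the scheme of figure 2.4)."""
    a, b, c = tri
    ab = split_edge(edge_splits, a, b)
    bc = split_edge(edge_splits, b, c)
    ca = split_edge(edge_splits, c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```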

Earlier I mentioned the notion of constrained and unconstrained vertices. Constrained vertices
cannot be manually moved by either the procedural terrain process or object interaction. Doing
so would violate the continuity of the terrain mesh, and as such they exist only to support the
other two features (triangles and edges). Unconstrained vertices on the other hand can be moved
up and down to change their height. Figure 2.5 demonstrates the difference:

                                             Figure 2.5

Vertices A and B are examples of free/unconstrained vertices, since they are connected to six
edges and six triangles, which allows them to move freely while not violating any important
properties of the mesh. Vertex C is a constrained vertex, since moving it around would introduce
a hole in the terrain and violate the planarity of the lower-left triangle. The subdivision of edges
produces what I like to call “dependent” vertices. In figure 2.5, the large edge down the middle
of the quad is split by vertex A into two smaller edges. One of these smaller edges is split by
vertex C. We consider vertex C to be dependent on vertex A, because while vertex C is
technically a constrained vertex that cannot move on its own, if vertex A were to move, both
smaller edges would move and it would require vertex C to be adjusted so that it remains on the
edge and surface continuity is maintained. If vertex C were to move, then any of its dependent
vertices (those which split the even smaller edges on both sides of vertex C) would have to be
adjusted. Vertex B has no dependent vertices. All this forms what I refer to as an “edge
hierarchy”, which is essentially a binary tree of edges split by vertices.

It is possible for vertices that were once constrained to become unconstrained. For instance,
vertex C will become unconstrained when the lower-left triangle is subdivided. The terrain
generation process needs to do something with these vertices as they become unconstrained;
otherwise the terrain is subdivided without actually changing. What I do is generate a Perlin
noise heightmap over the entire terrain field that acts as a height target for unconstrained vertices
– that is, these vertices will attempt to conform to the height map once freed. During the
interaction process, when objects are attempting to mold the terrain, the terrain will attempt to
conform to the height map.
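A minimal sketch of this conforming step (Python; the Vertex layout and the relaxation rate are assumptions for illustration, since the report doesn’t specify how quickly vertices approach their targets):

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    x: float
    y: float
    z: float
    constrained: bool

def relax_toward_target(vertices, target_height, rate=0.1):
    """Nudge each unconstrained vertex a fraction of the way toward the
    heightmap value at its (x, y); constrained vertices stay put."""
    for v in vertices:
        if not v.constrained:
            v.z += rate * (target_height(v.x, v.y) - v.z)
```

Here `target_height` would be the Perlin noise heightmap sampled at the vertex’s location.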

Finally, the location of where a new subdivision should occur is based on a discrete, two-
dimensional probability density function (PDF) that spans the entire terrain. This function can
be modified by the object system in order to determine the relative probability of subdivision in
different areas of the terrain.
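Choosing a subdivision location from such a discrete 2D PDF amounts to a weighted choice over the grid cells, for example (an illustrative sketch, not the project’s code):

```python
import random

def sample_cell(pdf):
    """Sample a (row, col) cell from a discrete 2D PDF given as a grid of
    non-negative weights; the weights need not sum to one."""
    cells = [(r, c) for r in range(len(pdf)) for c in range(len(pdf[0]))]
    weights = [pdf[r][c] for r, c in cells]
    return random.choices(cells, weights=weights, k=1)[0]
```

The object system raises the weights of cells near objects, so subdivision concentrates where detail is needed.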

2.1.3 Challenges

The principal challenge I faced while designing the terrain generation unit was choosing a
method of subdivision that simultaneously increased the resolution and flexibility of the terrain
over successive iterations, controlled where this increase in resolution occurred, allowed the
terrain to converge toward a reasonable solution (such as a Perlin noise map), and minimized
the overall amount of artifacts. As previously mentioned, the first subdivision method I
employed was rather simple to comprehend and implement, but it introduced an unacceptable
amount of artifacts. The second method I chose was based on observations I made of a
uniformly subdivided quad; however, because I needed to make a distinction between locked
and unlocked vertices to maintain surface continuity, the flexibility of the terrain suffered,
since only a portion of the total vertices could be adjusted at any given time. In addition to the
subdivision method itself, I also had to design an efficient data structure for finding the terrain
triangle that contains a given point. A linear-time search through all the triangles was out of
the question for hundreds of thousands of triangles, especially given that we would need to perform
a relatively expensive point-in-triangle check for each triangle. Instead, I settled on a modified
BSP implementation, which allows roughly logarithmic-time searching through the triangles
via a series of cheap point-plane tests.
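The descent through such a tree might look like the following (a sketch only; the report doesn’t give the actual node layout, so the splitting-line representation here is an assumption):

```python
class BSPNode:
    """Interior node of a 2D BSP tree over the terrain: a splitting line
    a*x + b*y = c. Leaves are plain lists of candidate triangles, so the
    expensive point-in-triangle checks run over only a handful of them."""
    def __init__(self, a, b, c, front, back):
        self.a, self.b, self.c = a, b, c
        self.front, self.back = front, back

    def locate(self, x, y):
        """Descend with cheap point-line tests in roughly log time."""
        child = self.front if self.a * x + self.b * y >= self.c else self.back
        return child.locate(x, y) if isinstance(child, BSPNode) else child
```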

2.2 Object Generation

2.2.1 Design

The object generation unit is responsible for generating objects on the terrain. There are a
number of different object types (think of C++ classes), each of which describes the behavior
and characteristics of all instances of that type. Each type of object can conceivably use a
different procedural process to determine how, when, and where it is created on the terrain
(member functions). Each object instance has a common set of properties, such as location, size,
and orientation that exist regardless of type (member variables).

2.2.2 Implementation

For my project, the geometry of objects generated is limited to rectangular prisms in order to
simplify and unify the interaction between all objects and the terrain. Each type of object will be
generated using separate procedural processes that can communicate with each other and the
terrain process. The object generation process consists of maintaining a two-dimensional PDF
for each object type, which determines where the object is created. The desired minimum,
maximum, and average number of instances (over the course of the entire generation procedure)
of each object type is also maintained, and a Poisson distribution is used to determine if an
instance of a particular object type should be created during each iteration of the generation
process. The objects communicate by being able to modify each other’s PDF values. In this
way, objects can modify how likely it is that certain other object types are generated at different
locations on the map.
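The per-iteration creation decision can be sketched like this (Python’s standard library has no Poisson sampler, so Knuth’s classic algorithm stands in, and the clamping policy is my own guess at how the min/max bounds would be enforced):

```python
import math
import random

def poisson_sample(lam):
    """Knuth's algorithm: number of events in one interval of a Poisson
    process with mean lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def instances_to_spawn(avg_total, total_iterations, current, max_total):
    """Instances of one object type to create this iteration: Poisson with
    per-iteration rate avg_total / total_iterations, clamped so the
    running total never exceeds max_total."""
    n = poisson_sample(avg_total / total_iterations)
    return min(n, max_total - current)
```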

To this end, a type information system needed to be implemented, since the object generation
unit is responsible for creating instances of objects based on such type information. Different
object types are registered with the system, along with the corresponding generation properties
(maximum instances, average instances, etc.). It is also possible to specify subtypes to form an
object hierarchy. For example, a “house” type might be considered a subtype of a “city” type, if
we only allow houses to appear in cities. In this way an object hierarchy is formed, and the
generation process knows to create instances of subtypes only if their corresponding super-types
exist (i.e. don’t create houses unless cities exist to put them in).

The actual object types used in my implementation are “city”, “house”, “tree”, “business”, and
“hut”. Both houses and businesses are subtypes of cities. Huts are root-level objects like cities,
since they exist on their own.

2.2.3 Challenges

The biggest challenge was designing a system that could both store information about types
(metadata about objects), while also being able to generate instances of these types. This was
further complicated by the fact that types could have subtypes and super-types that not only
complicate the type information, but also the generation procedure. Also, instances of objects
needed to be aware of what type they were an instance of, and each type needed to be aware of all
its instances for quick lookup. I ultimately settled on a system that allowed a name string to be
registered and assigned a unique integer ID. Along with the name, a factory method was
supplied. Thus I was able to use a factory design pattern to allow the type system to generate
and identify instances (and conversely, for instances to identify their type). The name allows
types to be easily and uniquely identified throughout all the units.
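A sketch of such a registry using the factory pattern (illustrative Python; names like `TypeRegistry` are mine, not taken from the actual implementation):

```python
class TypeRegistry:
    """Maps object type names to integer IDs and factory callables.
    Instances record their type ID, and each type tracks its instances
    for quick lookup."""
    def __init__(self):
        self._types = {}

    def register(self, name, factory, supertype=None):
        entry = {"id": len(self._types), "factory": factory,
                 "super": supertype, "instances": []}
        self._types[name] = entry
        return entry["id"]

    def create(self, name, *args, **kwargs):
        entry = self._types[name]
        obj = entry["factory"](*args, **kwargs)
        obj.type_id = entry["id"]          # instance knows its type
        entry["instances"].append(obj)     # type knows its instances
        return obj

    def instances_of(self, name):
        return list(self._types[name]["instances"])
```

The `supertype` field is where the object hierarchy (houses under cities, etc.) would hang off the registry.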

2.3 Object-to-Object Satisfaction

2.3.1 Design

The object-to-object satisfaction unit is responsible for satisfying the constraints between objects
with regard to their relationships. A “relationship” between two objects establishes a constraint
to which the data of these objects must adhere. One possible example is an “attractive”
relationship, where two objects are attracted to one another; the implied constraint is that the
distance between the two objects be minimized. A “repulsive” relationship would push objects
apart, establishing a constraint that maximizes distance. An “alignment” relationship would
establish the constraint that two objects should have the same alignment (orientation).
Relationships between objects can be created at the type level or the instance level. In other
words, I can either say that all objects of a certain type have a particular relationship with all
objects of another type, or that only two particular instances share a certain relationship.

The satisfaction process attempts to satisfy the constraints by applying the relationships.
However, just how “strongly” the relationships are applied is determined by a notion known as
dominance. If one object were absolutely attracted to another object, then nothing would stop
them from eventually colliding and converging toward the same location. However, if we vary
the dominance as, say, a linear function of distance, then the farther apart the objects are, the
faster one moves toward the other. Once their distance falls below a certain threshold, we set
the dominance to zero, which keeps the objects from colliding.

It’s important to note that a relationship is one-way. If I say object A is attracted to object B, that
does not imply that object B is attracted to object A.

2.3.2 Implementation

For this project I only implemented three types of object relationships, which are the three
mentioned as examples in the design (attraction, repulsion, and alignment). During each iteration
of the generation process, the satisfaction unit iterates over all supplied object constraints and applies each
one sequentially. If a relationship is between two types, then the algorithm iterates all instances
of one type, and applies the relationship to all instances of the other type. Otherwise, it applies
the relationship to the supplied pairs of instances.

The attraction relationship moves one object towards another in the 2D plane (since height
modification is handled by terrain interaction). Dominance varies as a linear function of
distance. If the distance between the two objects drops below a certain threshold, dominance is
set to zero so that the objects don’t collide. The dominance also maxes out if the objects are
greater than a certain distance apart, to keep the attracted one from moving too quickly. The
repulsion relationship works the same as the attraction relationship, except that it moves one object
away from the other, and dominance is set to zero once their distance becomes greater than some
threshold.
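A sketch of the attraction step under these rules (Python; the thresholds and rate are placeholder values, since the report doesn’t give its actual constants):

```python
import math

def attraction_step(mover, target, min_dist=2.0, max_dist=20.0, rate=0.1):
    """One-way attraction in the 2D plane: dominance varies linearly with
    distance, is zero inside min_dist (so objects never collide), and is
    capped at max_dist (so distant objects don't move too quickly)."""
    dx, dy = target[0] - mover[0], target[1] - mover[1]
    dist = math.hypot(dx, dy)
    if dist <= min_dist:
        return mover                       # dominance is zero: don't collide
    step = rate * min(dist, max_dist)      # capped linear dominance
    return (mover[0] + step * dx / dist, mover[1] + step * dy / dist)
```

Repulsion is the mirror image: negate the direction, and zero the dominance above a distance threshold instead of below one.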

2.4 Object-to-Terrain Satisfaction

2.4.1 Design

The object-to-terrain satisfaction unit determines how objects interact with the terrain. At the
most basic level, the only thing the objects can do to the terrain is raise or lower unconstrained
vertices, and the only thing the terrain can do to objects is change their orientation or move them
up and down (we assume lateral movement of objects is handled by the object-to-object
satisfaction unit). Objects will attempt to modify the terrain underneath them to make it
“compatible” with the object (i.e. flat terrain under a flat object surface), while the terrain will
attempt to modify the object in order for it to adhere to general terrain features (i.e. same slope as
terrain, doesn’t intersect the ground). The concept of dominance plays a role here as well. When
the terrain modifies an object, larger triangles have a more dominant effect on the objects since
they occupy more area. Smaller triangles thus have a lower dominance, since they occupy a
smaller area, and it’s possible that dozens of small triangles could exist under a single object.
Along the same lines, the more triangles underneath an object, the more dominance the object
has over each triangle. By simultaneously allowing objects to subdivide terrain and modify
smaller triangles, and allowing larger triangles more influence over the height and slope of an
object, both should hopefully converge toward a local solution. Objects can interact with the
terrain first and the terrain with the objects second, or the two steps can be performed in the
reverse order.

2.4.2 Implementation

I implemented a single method for how objects could interact with the terrain. Since objects are
rectangular prisms, they will have a flat bottom surface which could be considered the “bottom”
of the object. Each object has the ability to move free vertices of triangles that are located
underneath it, towards its bottom surface. Therefore, after sufficient iterations, the triangles

underneath the object will align with the bottom surface of the object. Of course, the object itself
can be moving, in which case these triangles could return to their previous positions. In this
way, moving objects don’t have a “lasting” effect on terrain they don’t eventually settle on.

Terrain triangles affect objects a little differently. Each object maintains a list of triangles
underneath it, which is updated as the object moves. These triangles determine their relative
dominance over an object based on how much surface area of the object’s bottom surface they
occupy (based on the area projected onto the XY-plane). When these triangles attempt to move
toward their target heights (such as those in a Perlin map), the dominance will determine how
much they move, in addition to how much they move the object up or down. Clearly, larger
triangles will have greater dominance and encounter less resistance, whereas smaller triangles
will barely move, since the objects dominate them.
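One way to sketch this dominance-weighted interaction (an illustration of the idea only, with the triangle bookkeeping reduced to dicts; the exact update rule used in the project isn’t specified):

```python
def object_terrain_step(obj_z, triangles, rate=0.2):
    """One interaction step. Each triangle dict holds its current height
    'z', its 'target_z' (e.g. from the Perlin map), and 'area' (projected
    overlap with the object footprint). Dominance is the covered area
    fraction: dominant triangles move freely toward their targets and
    pull the object with them; small ones yield to the object."""
    total = sum(t["area"] for t in triangles) or 1.0
    shift = 0.0
    for t in triangles:
        dom = t["area"] / total
        t["z"] += rate * dom * (t["target_z"] - t["z"])
        shift += dom * (t["target_z"] - obj_z)
    return obj_z + rate * shift            # new object height
```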

2.4.3 Challenges

When I designed the object-to-terrain satisfaction unit, I wanted to stay as abstract as possible,
which is why the design seems a bit vague. The reason is that the way in which objects and
terrain interact depends heavily on how the terrain is created and subdivided; hence it wouldn’t
make sense to include in my design anything that relies on a particular terrain implementation.

It was also a challenge coming up with acceptable means by which the object and terrain could
affect one another. The obvious solution for objects that need to affect the terrain is to simply
move vertices upwards towards a surface of the object, which is what I have implemented.
However, I also attempted to implement an algorithm that “rotated” a triangle towards a target
normal, subject to the constraint that only the free vertices of the triangle could move, and only
along the Z-axis. However, this proved more difficult than it first seemed, and I eventually had
to drop this interaction because it didn’t work.

2.5 Overall Generation Control

2.5.1 Design

The overall generation control unit is responsible for tying all the other units together. It
determines when and how often each unit is allowed to perform a single iteration. For instance,
in a single iteration the terrain generation unit could generate a few hundred subdivisions, the
object generation unit could generate a few objects, the object-to-object constraint satisfier could
apply relationships to existing objects, and the object-to-terrain system could allow the terrain
and the objects to modify each other. It’s important that each iteration only have a minor effect
on both terrain and objects, since the idea is that after many iterations we will approach an
acceptable solution. Key to the design is that both terrain and objects are generated in lock-step.
To generate all the terrain at once, and then all the objects at once, would completely miss the
point of the whole process.

2.5.2 Implementation

My implementation is rather straightforward. For each iteration of the generation control, it runs
several hundred iterations of the terrain generation unit (where each iteration generates a single
subdivision), followed by a single iteration of each other unit sequentially, in this order – object
generation, object-to-object satisfaction, object-to-terrain satisfaction.
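The control loop can be sketched as follows (the unit interfaces are invented for illustration; the report only fixes the ordering and the many-subdivisions-per-iteration ratio):

```python
def generate(terrain, objects, obj_obj, obj_terrain,
             iterations=100, subdivisions_per_iter=300):
    """Lock-stepped control loop: each outer iteration runs many small
    terrain subdivisions, then one step of object generation and one step
    of each constraint satisfier, so terrain and objects evolve together
    rather than being generated wholesale one after the other."""
    for _ in range(iterations):
        for _ in range(subdivisions_per_iter):
            terrain.subdivide_once()
        objects.generate_once()
        obj_obj.satisfy_once()
        obj_terrain.satisfy_once()
```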

2.6 User Control

The goal of my project from the start was to design a procedural framework for generating
terrain and propagating objects, and as such designing and implementing the core framework
was a priority. User control wasn’t implemented in this prototype, in the sense that a user
couldn’t actually approach the application and enter values that affect the results. However, this
is not to say that these inputs into the procedural process don’t exist. For instance, in my
implementation I opted to use a Perlin noise map as the terrain target for vertices. This map
doesn’t have to be procedurally generated; it could very well be topography designed by an
artist. Second, the user has complete control over the types of objects that exist, what their
properties are, and what their relationships are to other objects. In this prototype there were only
three relationships, but it’s not impossible to imagine that many more relationships could be
added to produce a rather comprehensive means of expressing object relationships. The same
applies to adding additional object-terrain interactions.

4. Conclusion

From the very beginning of the project, there were no guarantees that my design would lead to an
implementation that generated good or acceptable results. There was a lot of background
information on procedural terrain generation, and on procedural methods in general, but [7] was
the only reference I could find that even hinted at a means of incorporating an object
propagation layer that actually interacted with the terrain. Right away I knew that I
had a challenge on my hands, but that it probably wasn’t beyond what I could handle.

That being said, the results of my implementation weren’t what I expected. My design is based
on the simple notion that if you take a bunch of more or less random processes and add several
layers of controls and constraints, you will eventually reach an acceptable solution. While
I still believe that this is true given enough constraints, my results clearly indicated that I had
underestimated just how difficult it is to rein in a collection of random processes and get them to
produce something that would be considered acceptable to a human with even the most basic
aesthetic sensibilities.

Each unit performed as it was designed and implemented. The terrain generation unit correctly
subdivided the terrain more often closer to objects, and correctly marked vertices as constrained
or unconstrained (aside from a few edge-case bugs with really small triangles that I have yet to
locate). The object generation unit correctly generated objects according to how and where they
should be generated. The object-to-object constraint satisfier correctly applied relationships to
objects. The object-to-terrain constraint satisfier correctly allowed the objects to modify terrain,
and for the terrain to modify objects. However, combining these units did not have the
synergistic effect I had thought it would.

For instance, I first observed that the terrain generation system generated extremely spiky terrain.
My initial response was that this was normal, and that after a sufficient level of subdivision the
spikes would flatten out and the terrain would converge toward the smooth Perlin map.
However, it turns out that, in the absence of an extremely high level of subdivision, this spiky
appearance doesn’t vanish. I resigned myself to this fact, since that could always be fixed by
smoothing in a post-process pass. I also figured that the terrain interaction process would
smooth terrain around objects, so I wouldn’t have to worry there. However, due to the
irregularity of the terrain triangles (considering they’re all different sizes and shapes with
practically no axis-alignment), the triangles that were selected by the object to smooth formed a
highly irregular outline around the object. Thus while the terrain underneath the object was
smoothed, so was a very jagged outline around the object. Again, in the absence of extremely
high subdivision this produced a noticeable artifact.

Another problem I noticed was that in an effort to locally satisfy each constraint sequentially, I
was ignoring the possibility of conflicting constraints that wouldn’t converge. For example, if
object A is attracted to object B, but object B is repulsed from object A, then this would lead to a
cat-and-mouse situation that would cause both objects to run off toward infinity.

In light of all this, I believe that what I have achieved is a stepping stone for further work and
study on the subject. My current prototype didn’t generate the results I had expected it would,
but I believe that this is simply due to there being not enough constraints, and an incorrect
application of existing constraints. The first thing I would do is switch to a uniformly
pre-subdivided terrain akin to figure 2.2, since the procedural terrain seemed to have caused
many of the aesthetic issues. The second thing I would do is add more global awareness to the
design. Right now, each unit blindly pursues its own goals in the belief that together they will
somehow produce a good result. However, the units ignore any negative impacts they have on
other units, and they also ignore any negative impacts they have on themselves (i.e. the
cat-and-mouse object scenario). Thus I would add extra constraints to control the overall global
solution as it forms.

I’ve come to conclude that it takes a great deal of time and effort to produce a system in which
multiple procedural processes can be combined and constrained to produce acceptable results;
however, with enough constraints and controls, it should still be possible.

5. References
[1] Dean Macri, Kim Pallister, Procedural 3D Content Generation – Part 1 of 2, 2006.

[2] Dean Macri, Kim Pallister, Procedural 3D Content Generation – Part 2 of 2, 2006.

[3] Wikipedia, Procedural Generation, 2006.

[4] .theprodukkt, .kkrieger, 2004.

[5] Wikipedia, Spore, 2006.

[6] Wikipedia, L-system, 2006.

[7] Phil Amburn, Eric Grant, Turner Whitted, “Managing Geometric Complexity with Enhanced
Procedural Models”, ACM SIGGRAPH Computer Graphics, v.20 n.4, p.189–195, Aug. 1986.

[8] Perlin Noise

