        AI to Reduce Micro-Management in
             Real-Time Strategy Games
                       Author: Keith Rogers
                      Student No: 15119563
                    Supervisor: Andrew Skabar
     Degree: Bachelor of Computer Science in Games Technology
                           with Honours
                           October 2009
Contents

Table of Figures
1 Introduction
   1.1 Thesis Objective
   1.2 Contribution to Games AI
   1.3 Thesis Structure
2 Real Time Strategy Games and AI
   2.1 Playing a RTS Game
   2.2 Current state of RTS AI research
      2.2.1 Commercial Games
      2.2.2 Research Based/Open Source
      2.2.3 Military Training
      2.2.4 Learning in RTSs
      2.2.5 Conclusions from the Latest Research
   2.3 Micromanagement and Applying AI
3 The Group AI System
   3.1 Relationship Between Player, AI and Game
   3.2 The Group AI Entity
   3.3 Tasks and Bidding Functions
      3.3.1 Regroup Bid
      3.3.2 Attack Bid
      3.3.3 Flee Bid
      3.3.4 Heal Bid
      3.3.5 Player Controlled Bid
      3.3.6 Balancing
4 Implementation in the Spring Game Engine
   4.1 Spring and PURE
   4.2 Lua scripting and widgets
      4.2.1 Call-Ins and Call-Outs
      4.2.2 The User Interface
      4.2.3 Lua Table
   4.3 Implementation
      4.3.1 Task Definitions
      4.3.2 Putting It Together
   4.4 Implementation of Tasks
      4.4.1 Group Formation
      4.4.2 Attack Unit
      4.4.3 Heal Unit
      4.4.4 Flee
      4.4.5 User Task
6 Results
   6.1 Efficiency Testing
   6.2 Reduction of Micromanagement
   6.3 Aesthetics
7 Discussion and Future Work
   7.1 Reduction in Micromanagement
   7.2 Optimizations
   7.3 Behavioural Improvements
   7.4 AI personalities and learning
8 Conclusions
References

Table of Figures

Figure 2.1 Player controls squads that are managed by AI
Figure 2.2 AI for groups managed by player
Figure 2.3 AI Contract Net Task Distribution Scheme
Figure 3.1 Top level diagram of the interactions between player, Group AI and the Game World.
Figure 3.2 Group AI component interactions
Figure 3.3 Regroup Bid in relation to distance from the group: movement speed held constant.
Figure 3.4 Attack Bid in relation to the distance from the target: member's power, target's threat and range held constant.
Figure 3.5 Flee Bid in relation to the bidding member's current health: target's threat held constant.
Figure 3.6 Heal Bid in relation to distance from a target at 50% health.
Figure 3.7 Heal Bid in relation to health of a target in range.
Figure 3.8 Comparison of Player Control Bid to other bids.
Figure 3.9 Regroup, Heal and Attack Bids with weights applied.
Figure 4.1 PURE Graphic User Interface with the addition of the Group AI
Figure 4.2 Interactions between elements of the Group AI system
Figure 6.1 Relationship between Time Spent in AI Calculations and Approximate Number of Bids
Figure 6.2 Relationship between Number of Clicks per minute and Elapsed Game Time without Group AI
Figure 6.3 Relationship between Number of Clicks per minute and Elapsed Game Time with Group AI

1 Introduction

       Artificial Intelligence in computer games (Games AI) is undergoing a revolution, driven by increasing computing power, gamers demanding better AI and a fast-growing appreciation within the academic community of the potential for research in Games AI. It is, however, often looked down upon in academic circles for its lack of elegance and its persistent dependence on finite state machines and rule-based systems. The distinction between Games AI and academic AI stems from fundamentally different objectives. While academic research is interested in furthering the field, finding the best solution and publishing work, the art of Games AI lies in creating solutions that require a minimum of computation and memory while ultimately improving the experience of the people who play the games (Baekkelund, 2006b).

       As computing power continues to rise, more elaborate and computationally complex AI becomes feasible (Thomas, 2004). Much of the focus of improving games in recent years has been on graphics and physics, and together these elements consume large amounts of memory and processing power. However, the ever increasing power of graphics cards, the recent introduction of PhysX (Nvidia, 2009) to run physics calculations on the graphics card, and multi-core processors are all contributing to an increase in the resources that can be allocated to the AI. This, combined with a push for more realistic game environments, has raised the importance of AI in games immensely. As solutions become more complex, in answer to this demand for more complex environments, the tactical and strategic decision making systems in games are becoming recognized as an important field of research and development by both games companies and academic circles (Ponsen, Spronck, Muñoz-Avila, & Aha, 2007).

       First Person Shooter (FPS) games and Role Playing Games (RPG) have had a lot of attention, predominantly due to the mod-friendly tools and environments that are shipped with games in these genres (Van Lent, Carpenter, McAlinden, & Tan, 2004). 'Modding' refers to the modification of game content, or the addition of new game content, by users of games after they are released. These modifications, which can be added to any copy of the original game, are referred to as 'mods'. For FPS and RPG games these tools have become very powerful. For example, the Unreal Tournament series comes with a map editor, UnrealEd, and allows users to access, edit and add to the game content, which is all written in UnrealScript (Epic Games, 2008). This makes it easy for AI researchers to plug in their AI solutions and test them under game conditions (Van Lent, Carpenter, McAlinden, & Tan, 2004).

       Unfortunately the modding tools for RTS games tend to be less powerful and are generally restricted to the addition and editing of maps/scenarios, campaigns (sequences of scenarios connected by a story) and unit attributes. This has led to slower progress in RTS research. The development of open source RTS games such as Wargus, ORTS and Spring has accelerated research in recent years by providing a platform for RTS AI development. Wargus is an open source clone of WarCraft II (The Wargus Team, 2007). ORTS is an "Open Real Time Strategy" game developed specifically for AI research, led by Michael Buro at the University of Alberta (Buro, 2009). Spring is a large open source RTS project with many groups of contributors (The Spring Community, 2009).

       The challenge of creating an efficient and effective AI for controlling computer players in RTS games has attracted most of the attention of RTS AI research from both commercial and academic circles. RTS games call for AI to control many elements in an environment with infinitely many possible states, uncertain results from actions, and imperfect information about the world (Buro & Furtak, 2004). Commercial requirements add to the complication because the AI is not created to be as good as it can be; it must be moderated to be fun to play against and operate on multiple levels of effectiveness to represent the different difficulty levels that players have come to expect in games. The problem of RTS computer player AI has historically been addressed by games companies by developing brute force solutions such as scripted sequences of actions (Spronck, Ponsen, Sprinkhuizen-Kuyper, & Postma, 2006). More complex RTS worlds and a desire for genuinely smarter computer players are leading big RTS titles to utilise planning and goal driven AI systems (Cerpa & Belleiro, 2008), and many other interesting solutions are being investigated in academic research (e.g., (Buro & Furtak, 2004), (Hagelbäck & Johansson, 2008), (Sailer, Lancot, & Buro, 2008) and (Spronck & Ponsen, 2008)).

       Amidst all the advances in AI for computer players there has been little fundamental change in the way humans interface with and play RTS games. As processing and graphics power improve, RTS games have become able to support more and more individual objects in a richer, more complex world. In a modern RTS it is not uncommon to be in command of hundreds of units and structures in a world that contains many different terrain types and features. Terrain types can affect movement, line of sight and cover, and features can be things like civilian structures that units can occupy and defend.


1.1 Thesis Objective

       Micromanagement is a term known by all RTS players in the gaming world. It refers to the tedious need to give many low level orders to individual units or small groups in quick succession. Micromanagement is necessary to achieve the most effective behaviour from units when executing objectives. As the number of units in the world increases, the player must eventually choose whether to focus on micromanaging particular regions of the map or accept the inefficient default behaviour of their units and focus on higher level strategy. Players of RTS games have seen little improvement in this sector.

       The aim of this thesis is to investigate an AI solution to the problem of micromanagement. The solution revolves around coordinating small groups of units by employing an AI that devises tasks and assigns units to those tasks based on bids that represent each unit's effectiveness and desire to do the task. The goal is to provide a system that allows a suitable level of autonomy for the group, while still allowing the player to take control and micromanage the situation if they decide it is necessary.
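
       As an illustrative sketch of this bidding scheme, the following Lua fragment (Lua being the scripting language used for the implementation in chapter 4) assigns each task to the unassigned unit that bids highest for it. The task and unit fields here are hypothetical placeholders, not the actual system described in later chapters.

    -- A minimal sketch of bid-based task assignment (illustrative only;
    -- the field names are hypothetical, not the system of chapter 3).
    local function assignTasks(tasks, units)
       for _, task in ipairs(tasks) do
          local bestUnit, bestBid = nil, 0
          for _, unit in ipairs(units) do
             if not unit.busy then
                -- each task type supplies a bidding function scoring the
                -- unit's effectiveness and desire to perform the task
                local bid = task.bidFunction(unit, task)
                if bid > bestBid then
                   bestUnit, bestBid = unit, bid
                end
             end
          end
          if bestUnit then
             bestUnit.busy, bestUnit.currentTask = true, task
          end
       end
    end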


1.2 Contribution to Games AI

       The contribution of this thesis is to the field of "games AI". As with much of the current
work on games AI, the contribution comes from optimizing and adjusting current AI structures
and showing that they can operate under the tight constraints of a real-time game and
potentially improve the experience of playing games.

       This thesis is focused on adopting the concept of bidding for tasks, similar to the multi-agent task distribution scheme known as Contract Net (Smith, 1977). It is shown that the concept is a viable solution for providing a comfortable level of autonomy for groups of units under the command of a player while not denying the player the option of giving low level orders. By creating an AI system flexible enough to allow players to construct groups of any assortment of units and structures and to assign and remove AI control as they wish, this system overcomes one of the major hurdles in reducing micromanagement. This hurdle arises from players not approving of the choices the AI makes. If the player is restricted in their control of a unit and the AI is not doing as the player would wish, this detracts from the enjoyment of playing the game. If the player can choose to use the AI in situations where they feel the AI does an acceptable job, and is able to override AI control at any time simply by giving orders directly to units, this frustration should not arise.


1.3 Thesis Structure

       The structure of this thesis is as follows. Chapter 2 provides an introduction to RTS games for those who are not familiar with the genre, and covers a general description of the type of RTS game the AI for this thesis is focused on improving. This chapter also reviews the current literature, shows the lack of AI for micromanagement and draws on related RTS AI research to build the concept of the Group AI assistant for human players. Chapter 3 covers the proposed AI system in a general form that could potentially be applied to any RTS game. This chapter describes the theory behind the AI system and what is involved in defining tasks for the AI. This leads into chapter 4, which describes the successful implementation of the system in the Spring engine (The Spring Community, 2009). This chapter covers the necessary details about Spring, and about Lua (Ierusalimschy, Celes, & De Figueiredo, 2008), the scripting language used by Spring. Chapter 6 covers the results from tests run on the AI system to evaluate its efficiency and effectiveness, and also assesses the appearance of the behaviour of units under the AI system. Chapter 7 discusses the results and the possible future work that needs to be explored to gauge the full potential of the AI system to improve RTS playability. Finally, chapter 8 summarizes the conclusions and findings drawn from the implementation of this AI system.




2 Real Time Strategy Games and AI

        What constitutes the first RTS game is difficult to define because the genre evolved through the release of many games. RTS games evolved out of their turn-based predecessors between 1980 and 1990, in which players took turns making moves in a similar fashion to chess. One of the first games to display many of the features now recognized as core to the genre was Herzog Zwei (published by Technosoft, 1989). It is commonly agreed, however, that Brett Sperry's Dune 2 (published by Westwood Studios, 1992) is the game that kick-started the genre. Brett Sperry was also responsible for the naming of the genre when he marketed his game as a "Real Time Strategy Game". Dune 2 established the key elements of the genre, such as placing the player in command of an army from a bird's eye view of the battlefield, gathering resources and building structures (Whizzer, 2002). Other major titles are credited with shaping the genre into what it is today. Warcraft: Orcs & Humans (published by Blizzard Entertainment, 1994) introduced the concept of building an entire society rather than purely military units and structures. Command & Conquer (published by Westwood Studios, 1995) is credited with being the first popular RTS to fully harness a competitive multiplayer mode. The original Total Annihilation (published by Cavedog Entertainment, 1997) introduced 3D terrain and unit models and allowed huge battles; its huge number of units and streamlined interface influenced many RTS games after its release. Since the genre's inception there have been many games released under the general heading of RTS that deviate from this general form in many ways.

        This chapter presents a description of the specific breed of RTS games most relevant to this thesis. This covers what is involved in playing them, a review of RTS AI research and the current state of micromanagement. This thesis is focused on the application of AI to the problem of micromanagement in these RTS games. However, the current focus of RTS AI research is the development of AI for computer players (AI capable of taking the place of a human player). While there is no work specifically focused on AI for micromanagement, this chapter will review the current research in RTS AI. The aim of this review is to explore three things: the conditions the AI must operate within, the elements of these player AIs that may be relevant to this thesis, and the common concerns surrounding AI in RTS games. This is followed by a brief summary of the isolated strategies that have been employed by some RTS games to alleviate the stress of micromanagement, finally concluding with how all these considerations have led to the AI solution that will be detailed in chapter 3.


2.1 Playing a RTS Game

        The RTS games of primary interest to this thesis are games where the player controls an army of units and an assortment of structures and leads them in military campaigns over the game world. There are a number of RTS games that differ from this general structure to provide a slightly different game experience, but they will not be covered here. This section provides a description of the major elements in an RTS game of this style, the interface that players use to play them, and a run-through of typical gameplay.

        Objects under the control of a player generally fall into one of two categories, "units" or "structures". "Units" are mobile objects that can travel by land, water and/or air and usually have a set of abilities such as attacking, construction, gathering resources and spell casting. "Combat unit" generally refers to those with attack capabilities or other offensive abilities, while "workers", or "economic units", specifies those with construction and resource gathering abilities. "Structures" are generally immobile and are used to produce units and research technologies, and some have the ability to attack. Unit producing structures are referred to as "factories", and structures that can attack are known as "defensive structures", as they are generally used to defend bases and hold critical regions of the world. "Technologies" take time to "research" and can affect the game in many ways: allowing the construction of new units or structures, increasing the power and effectiveness of existing units or structures, providing the player with useful abilities, and allowing further research. All this construction, training and research requires time and resources that the player must collect. Games differ widely on the types of resources and the method of collection. These include: using workers and/or structures to gather gold, gas, wood, oil, etc. from deposits around the map; capturing ground; completing objectives; and more. Players oversee the battlefield and manage their forces across the world. Usually, the player starts with an initial assortment of units and structures and musters their strength by collecting resources, training an army, and researching technologies. Then they must defeat all opposing teams in whatever fashion the scenario dictates. Victory conditions can include vanquishing your enemy, capturing and holding critical locations or artefacts, and a range of other challenges.

       A human player typically views the world from a bird’s eye (or god’s eye) view and can
move the viewing camera around the world to manage different areas of the battle field. The
main tool for giving orders and controlling the game is the mouse. Left clicking on an object in
the world selects that object, and the available set of orders that can be given to that object are
presented as buttons that appear in a 2D panel that rests in front of the view of the world.
Sometimes these orders are assigned to hot-keys that the player can press to select them or they
can be selected by left clicking the button. For some instructions, such as move or attack, the
player needs to select a target. In these cases right clicking over a position or object in the world
selects the position or target for that order. Left-click and hold-and-drag creates a selection box:
all units and structures under this box are selected simultaneously, and orders may be given to
all of them at once.
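
        Since later chapters implement the user interface as a Spring Lua widget, the following minimal sketch illustrates this click scheme using standard Spring widget call-ins and call-outs (MousePress, TraceScreenRay, SelectUnitArray, GiveOrder). It is a simplification for illustration only; a real interface also handles drag selection boxes and order buttons, and this is not the thesis implementation.

    -- Sketch: left click selects a unit, right click on ground moves the
    -- current selection there. Simplified illustration only.
    function widget:GetInfo()
       return {
          name    = "Click Order Sketch",   -- hypothetical widget name
          desc    = "Illustrates select/order mouse handling",
          layer   = 0,
          enabled = false,
       }
    end

    function widget:MousePress(mx, my, button)
       local kind, target = Spring.TraceScreenRay(mx, my)
       if button == 1 and kind == "unit" then
          Spring.SelectUnitArray({ target })   -- left click selects
          return true                          -- consume the click
       elseif button == 3 and kind == "ground" then
          -- right click: order all selected units to move to the point
          Spring.GiveOrder(CMD.MOVE, { target[1], target[2], target[3] }, {})
          return true
       end
       return false
    end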

        In a typical scenario, where the victory condition is to destroy all opposing teams, all players start the game with at least a worker or a factory that produces workers. From these humble beginnings the objective is to collect and stockpile the available resources, typically those closest to your start location first. As your stockpile increases you must spend it on creating more workers to gather resources faster, and set them building structures that will start to increase your military strength. As your base grows, you train more combat units to attack your foes, get more workers to increase production, and build defensive structures to fend off attackers. In these early stages, you probably only have one base with 10 to 15 structures, no more than 30 or 40 combat units and about 15 workers, i.e. 55 to 70 individual objects, and you would at this stage probably have only two fronts: defending your base and your current offensive action. This is perfectly manageable for a moderately experienced player. However, things do not remain this simple.

        It becomes necessary for you to start occupying other regions of the map for a range of reasons. Building more bases to gain access to more resources is critical to maintaining or increasing your rate of production, producing more expensive units, and researching technologies to make them more effective. Holding and defending choke points can also be strategically beneficial by limiting where you can be attacked from or denying your enemy access to resources. Finally, as your strength increases there is often simply not enough room at the starting location to extend your first base, and you must find a second and third site to build more factories and other structures. On top of managing your own expansion, the game develops as your opponents start researching high level technologies, and the situation can become far more complex and require you to open multiple offensive fronts in very different regions of the map. For example, while you are in the middle of attacking one base you discover the enemy is preparing to launch a "nuclear missile" in another. You cannot afford to stop attacking the first base, because it must not recover or all your efforts will be wasted, but if you don't deal with the missile silo you're doomed!


2.2 Current state of RTS AI research

        The major role of AI in RTS games is to take the place of a human player and control one of the forces in the game world. Computer players, as they are referred to, can be included in a game to provide entertaining single player gaming or as additional players in a multiplayer scenario. Developing AI capable of taking on this immensely complex task is clearly a challenge. This challenge and its associated problems are the focus of most of the current research in the field. At present there are a number of conditions in addition to intelligence that must be considered for games AI. These include computation time, development time, testing, reliability and the overall gain in player enjoyment.

        At present in the games field there is a strong focus on graphics and physics. As a result the amount of CPU time devoted to AI is very restricted in most genres, so the AI must be very efficient (Spronck, Ponsen, Sprinkhuizen-Kuyper, & Postma, 2006). This situation is particularly critical in the RTS field, as the physics and graphics of many objects on the battlefield are a massive burden that needs to be handled every frame. Simple tasks such as collision detection and avoidance and path finding become serious problems at the scale of an RTS game and, even with extreme optimization, effective solutions still use much of the precious available in-game computing power. This leaves very little for the AI to work with. The problem of RTS AI has historically been faced by games companies by simply developing brute force solutions such as scripted sequences of actions (Spronck, Ponsen, Sprinkhuizen-Kuyper, & Postma, 2006). With increases in computing power and modern RTSs becoming far more complicated, such methods are becoming unsuitable and more interesting AI models are being implemented to meet the challenge.

        There are a number of different parties contributing to the body of RTS AI research, all with different objectives and constraints that have led to variations in their respective focuses. The first and most obvious party consists of commercial games companies that develop AI for their games. The second is the fast growing academic real-time AI community, predominantly aiming to further the field of real-time AI. Third is the military, striving to develop training games that provide realistic training simulations for officers, and to automate tactical terrain analysis. The motivations of each major party and the latest work in the field will be summarized using references to a number of games and the papers that put forward the AI developed for those games.

        While computer learning is of great interest to general AI research, there are many factors that make learning unsuitable for games. Learning in RTS AI is starting to be explored, but continues to remain research based. The reasons why learning is not taking hold in RTS games are discussed, with reference to the latest work, to show why it was not regarded as important for this thesis.


2.2.1 Commercial Games

        In the past there was little games AI research being done, and what was done was kept guarded by games companies. In recent years the academic and modding communities have pushed for companies to publish their research, and some are now beginning to do so. This insight into games AI research has made a great contribution to healing the divide between games AI and academic research. Commercial games AI is driven by the cost of implementation and the level to which it increases the player's enjoyment. The 'real intelligence' of the AI is of minor concern so long as it provides an entertaining and absorbing environment. In the past this has led to commercial games relying on scripted tactical sequences that describe every action the AI will take over the course of a battle. Typically three or four variations of AI would be developed by domain experts (the company's AI programmer(s)). These AIs would take different tactical paths (i.e. defensive focus, offensive focus, early attacks or late attacks). These AI opponents are good to start with, but players quickly learn to exploit weaknesses in the AI (Buro & Furtak, 2004) and they become less challenging and less fun.

        Much of the focus of research in this area is on improving the AI by making it more dynamic. This is done by employing planning algorithms and game world evaluation that can develop strategies that respond to situations in the game. The other focus is to make the AI as general as possible and loosely coupled to the core game mechanics (Cerpa, 2008) (Van Lent, Carpenter, McAlinden, & Tan, 2004). This allows the AI to be utilized across many titles, and as a result companies can afford to develop richer, more complex AI solutions. While the techniques used in these game AIs are not new to anyone in academic AI, the challenge in applying them to games lies in making them efficient enough to operate within the minimum desirable frame rate of a game. Only with developments in time slicing, other optimization work and improvements in computer hardware has RTS AI been able to break away from using scripted sequences.

        War Leaders: Clash of Nations is one of the major RTS titles that has released a detailed publication on its successful AI. It employed an AI that utilized levels of abstraction comparable to a military chain of command (Cerpa, 2008). After an analysis of the game it was decided that three levels were needed: units, groups/formations and, finally, the entire army.

Cerpa (2008) identifies four important traits of the lower level AI:

   -  Must respond to orders;
   -  Able to act autonomously in certain situations;
   -  Might temporarily suspend current actions in favor of higher priority actions;
   -  Should be notified by the game logic about events in the world.


        The group and unit AI architectures developed for this game were devised so that agents receive orders that are interpreted into goals, which the agent does its best to satisfy. Based on game events and orders, goals are pushed onto a stack and popped when they are finished or invalidated. This allows the agent to keep track of past goals, such as moving to a point, when it is interrupted by the need to avoid a grenade (Cerpa, 2008).
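
        The following Lua fragment sketches this goal-stack behaviour, assuming goals are tables exposing execute, isFinished and isInvalid functions. These names are invented for illustration and are not Cerpa's implementation.

    -- Sketch of a goal stack: interrupting goals are pushed on top; when
    -- they finish, the previous goal resumes automatically.
    local GoalStack = { goals = {} }

    function GoalStack:push(goal) table.insert(self.goals, goal) end
    function GoalStack:pop() return table.remove(self.goals) end

    function GoalStack:update(agent)
       local goal = self.goals[#self.goals]
       if not goal then return end
       if goal.isFinished(agent) or goal.isInvalid(agent) then
          self:pop()          -- e.g. AvoidGrenade done, MoveTo resumes
       else
          goal.execute(agent)
       end
    end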

        The requirements of the high level AI (in charge of the entire army) were deemed to differ enough to require a different architectural approach. A planning architecture was applied that borrows elements from:

   -  STRIPS (STanford Research Institute Problem Solver) (Nilsson & Fikes, 1970);
   -  GOAP (Goal Oriented Action Planning) (Orkin, 2004);
   -  HTN (Hierarchical Task Networks) (Erol, Hendler, & Nau, 1996).


        STRIPS is a planning architecture that assesses the world state and constructs a list of operations that will transform the world state to satisfy some goal (Nilsson & Fikes, 1970). GOAP is similar to STRIPS, but incorporates other functions such as plan re-evaluation in situations where events beyond the control of the agent invalidate the current plan (Orkin, 2004). Finally, HTN extends the planning process by introducing complex tasks. These tasks cannot be executed directly and need to be decomposed into simple tasks (Erol, Hendler, & Nau, 1996). The identification of these complex tasks reduces the task search space and makes more elaborate plans feasible.
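
        A toy decomposition routine makes the HTN idea concrete. This sketch assumes each task either is primitive or can produce subtasks via a methods function (an invented name); real HTN planners also choose between alternative methods and can backtrack.

    -- Toy HTN-style decomposition: complex tasks expand into subtasks
    -- until only primitive (directly executable) tasks remain.
    local function decompose(task, plan)
       plan = plan or {}
       if task.primitive then
          table.insert(plan, task)       -- executable as-is
       else
          for _, subtask in ipairs(task.methods(task)) do
             decompose(subtask, plan)    -- recursively expand
          end
       end
       return plan
    end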

       This AI is driven by ‘motivations’. These are scripts that assess the game state and devise
complex goals that will fulfill that motivation. These complex goals are broken down into simple
goals then tasks that can be put into action by the lower two levels of the AI described earlier
(Cerpa & Belleiro, 2008).

        The structure of this AI shows many of the common trends in commercial RTS AI. Planning does not scale well, so it is only used in the high levels of the AI (Buro & Furtak, 2004). Lower levels are more reactive in nature, because this approach displays satisfactory results and is far more efficient. It is impossible to foresee and test every possible situation the AI may have to deal with. Therefore, emergent behaviour is seen as a positive quality in games, as AI that shows emergent behaviour tends to handle unexpected situations with more grace (Cerpa & Belleiro, 2008). Many researchers and games companies stress the importance of loose coupling between the AI engine and the game logic (Cerpa, 2008) (Van Lent, Carpenter, McAlinden, & Tan, 2004). If the AI engine is reusable enough, it is worthwhile devoting more time and effort to its development. For War Leaders: Clash of Nations a complex architecture was developed, but the time it saved made the investment worth it (Cerpa & Belleiro, 2008).


2.2.2 Research Based/Open Source

        Open source and research based RTS AI has much more freedom to explore challenging and risky solutions to the AI problem, because it is not dependent on the commercial success of the game itself. This has led to many interesting solutions, including genetic algorithms (Spronck & Ponsen, 2008), simulation of opponents (Sailer, Lancot, & Buro, 2008) and multi-agent potential fields (Hagelbäck & Johansson, 2008). There is no common thread between the research in this area except for the aim to develop effective and efficient AI solutions for computer players.

        A multi-agent potential fields approach to opponent AI proved to be a viable alternative to more traditional script based approaches (Hagelbäck & Johansson, 2008). Different game objects (cliffs, enemies, friends, sheep, etc.) were assigned "charges". These charges were then used to make a number of gray-scale maps that represent the potential fields generated by the charges. These maps are then used by units to move and attack the enemy. While Hagelbäck and Johansson admit there is room for improving the way multi-agent potential fields are used, their initial trial in the 2007 ORTS competition showed a promising 30% success rate in both the 'tank-battle' and 'tactical combat' competitions.
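
        The core of the idea can be sketched in a few lines of Lua: sum each charge's contribution over a grid, then have units step towards higher potential. The charge format and distance falloff here are invented for illustration and do not reproduce Hagelbäck and Johansson's field design.

    -- Build a potential field by summing charge contributions; units
    -- would then move to the neighbouring cell with the highest value.
    local function buildField(width, height, charges)
       local field = {}
       for x = 1, width do
          field[x] = {}
          for y = 1, height do
             local p = 0
             for _, c in ipairs(charges) do
                local d = math.sqrt((x - c.x)^2 + (y - c.y)^2)
                p = p + c.strength / (1 + d)   -- simple distance falloff
             end
             field[x][y] = p
          end
       end
       return field
    end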

        Research into the development of genetic algorithms for automated high level strategy generation has proven successful in the open source clone of WarCraft II known as Wargus (Spronck & Ponsen, 2008) (Ponsen, Spronck, Muñoz-Avila, & Aha, 2007). In this research the game state was represented by the currently constructed buildings, and state transitions were denoted by the additional buildings that can potentially be constructed while in that state. By applying the genetic algorithm only to high level strategic planning and assuming that lower level tasks will be handled by other parts of the game, a sequence of high level strategic decisions is used to construct the genetic code. These genetic codes were then assessed for their fitness and combined under strict rules to ensure the next generation was meaningful. Together with a chance of mutation, the genetic algorithm created a variety of counter strategies when pitted against two static pre-defined opponent AI scripts. As discussed in the articles, there is no guarantee that these strategies are entertaining to play against, nor do they adapt at run time to the game state.
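
        A sketch of the chromosome handling might look as follows, where a genome is simply a list of high level build decisions. The one-point crossover and mutation shown here are naive; the cited work constrains both so that offspring remain meaningful, and randomBuildDecision is a hypothetical helper returning one legal decision.

    -- A genome as a list of high level build decisions; one-point
    -- crossover plus a small mutation chance (illustrative only).
    local function crossover(parentA, parentB)
       local cut = math.random(1, math.min(#parentA, #parentB))
       local child = {}
       for i = 1, cut do child[i] = parentA[i] end
       for i = cut + 1, #parentB do child[i] = parentB[i] end
       if math.random() < 0.05 then
          child[math.random(#child)] = randomBuildDecision()
       end
       return child
    end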

        Work exploring the possibility of opponent simulation in a simplified RTS environment has shown that it is possible to run and evaluate simulations that allow the AI to choose counter strategies that undermine the opponent's strategy (Sailer, Lancot, & Buro, 2008). First, a number of strategies were defined and evaluated to find which strategies are effective against others. Then, when playing a game, the AI takes a snapshot of the game state and, at some later point in the game, takes another. The AI then runs simulations of its opponent, from the first point of observation, using its known set of strategies. Based on a comparison of the final game states reached by the simulations and the second point of observation, the AI detects which strategy best fits its opponent's actions. Then it is simply a matter of implementing the pre-defined counter strategy.
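
        The recognition step can be sketched as follows, with simulate, stateDistance and counterStrategyFor as hypothetical stand-ins for the paper's machinery.

    -- Replay each known strategy from the first snapshot and keep the one
    -- whose simulated end state is closest to what was actually observed.
    local function recognizeAndCounter(snapshot, observed, strategies)
       local best, bestError = nil, math.huge
       for _, strategy in ipairs(strategies) do
          local finalState = simulate(snapshot, strategy)
          local err = stateDistance(finalState, observed)
          if err < bestError then
             best, bestError = strategy, err
          end
       end
       return counterStrategyFor(best)   -- pre-defined counter strategy
    end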

        Stene (2006) developed a single AI opponent for TA Spring (the RTS engine now referred to as Spring or The Spring Project (The Spring Community, 2009)) that presented a challenge to experienced players in all available mods of TA Spring. The mods of TA Spring significantly alter the style of game play, and the AI was successful in devising challenging behaviours in more than twenty of them, failing to operate adequately in only one. Stene tackled the problem of opponent AI by providing tailored solutions to all the common aspects of an RTS game. These included: situational awareness, economic decisions, military decisions, path planning, threat evaluation and combat micromanagement. Complex aspects such as economic and military decisions were further decomposed into simpler components. Agent-like AI elements were then developed to handle the tasks and game world evaluations involved in managing these aspects of the opponent strategy development (Stene, 2006). While this AI proved to be a good solution in the TA Spring environment, a lot of time was spent implementing it, and it is unclear how reusable the structure would be in other forms of RTS games.


2.2.3 Military Training

        The military has taken an interest in the field of RTS AI for its obvious application to battlefield tactical training for officers and automated tactical terrain evaluation. Fewer restrictions on the budget and a strong focus on realism, correctness and explainability have led to a strong AI solution for the RTS training game Full Spectrum Commander (FSC) (Van Lent, Fish, & Mancuso).

        FSC was developed by the USC Institute for Creative Technologies and Quicksilver Software as a training aid for the U.S. Army (Van Lent, Fish, & Mancuso) (Van Lent, Carpenter, McAlinden, & Tan, 2004). The emphasis on a realistic training environment means that, unlike commercial games, the AI in FSC takes 60% of CPU cycles and runs the high level AI on a separate processing thread. FSC is played in three stages per simulation: the planning stage, the execution stage and the after action review stage. In the planning stage the player creates a high-level battle plan that will be followed during the execution stage. During the execution stage the player sees the world from a first person view through the eyes of the commander avatar. The player must evaluate the battle's progress based on communication with the AI and give "on the fly" orders if the initial plan needs alterations. This interface is the opposite extreme to the micromanagement approach of other games. The aim is to provide a realistic in-the-field simulation, and this has led to the development of a very complex AI that acts in accordance with U.S. Army protocol and interfaces with the human player (Van Lent, Fish, & Mancuso). Other efforts are focused on providing an AI interface for FSC that facilitates the testing of different AI architectures in the game. The hope is that this will encourage more research into AI for RTSs (Van Lent, Carpenter, McAlinden, & Tan, 2004).


2.2.4 Learning in RTSs

        The drive for more realistic environments and believable AI has led many developers to adopt planning architectures that make the AI more dynamic and responsive to situations in the game world. Learning would seem the logical next step, but there are a number of concerns that make learning less desirable in games. While learning is making its way into some genres, it is still predominantly research based in RTS games. While it shows much potential, it is not yet solid and reliable enough to be widely accepted by commercial games companies. The problem with current learning algorithms and techniques is that they all fail one or more of the crucial requirements of RTS AI: computation time, reliability, development time, the ability to be tuned and debugged, and developing enjoyable strategies. The most serious difficulty is developing fun strategies. Even when learning algorithms are shown to be capable of learning effective strategies, researchers are still unable to make a learning algorithm that learns strategies that enhance the fun of the game.

2.2.4.1 Problems with Learning in RTS Games
        Reinforcement Learning (RL) has been successfully applied to FPS death match games (McPartland, 2008). It is noted in the article, however, that the difficulty in implementing RL is developing a state representation that is rich enough for the "bots" (AI controlled characters) to learn complex behaviour yet simple enough for the learning process to be quick. In an environment as complex as a standard RTS, the difficulty of finding this state representation is prohibitive. In addition to this, designing rewards and punishments requires a detailed understanding of desirable and undesirable behaviours, which may be hard to define.

        Artificial Neural Networks (ANNs) have received much attention in applications to FPS games (Baekkelund, 2006a). ANNs, however, do not deal well with missing or uncertain inputs and operate best in situations where there are fewer inputs (Baekkelund, 2006a). In an RTS environment that is fair, where the AI operates on the same restricted information available to human players, the AI will usually have to work with missing or uncertain values. Decisions in RTS games are also usually based on many details of the current situation. For these reasons ANNs are also not suitable. In addition, neuroevolution algorithms tend to be very complex, so if one is to be applied to AI that assists the player, and the player wants to have control over the evolution, they need to have a complete understanding of the game and the fitness function (Cornelius, Stanley, & Miikkulainen, 2006), which would be very complex for an RTS.

        In addition to these concerns, which relate to individual learning models, complex learning algorithms require too much computing time to be practical for in-game learning. Also of concern is that if an undesirable behaviour is discovered it is difficult to isolate the cause. Even if the cause is found, making changes to such an AI may have unexpected or unknown results, making it difficult to produce a bug free AI. If this kind of behaviour is discovered close to the release of the game it may not be possible to fix it in time. This is one of the biggest concerns for developers. An ongoing problem with learning in games is that it is very difficult to tune the AI so that it is guaranteed to act in a reasonable and entertaining way after the game is released. Even if the AI is able to learn strong strategies, these strategies may be overpowering for the player or may simply be no fun to play against. Producing an AI that learns to be fun is far more difficult than producing one that simply learns winning strategies. Many researchers are still working on how they will assess and incorporate entertainment values into their AI (Tan & Cheng, 2008) (Ponsen, Spronck, Muñoz-Avila, & Aha, 2007).

2.2.4.2 Applications of Learning in RTS Games
        Learning in RTS games has been shown to be potentially very effective in lightening the burden on developers scripting opponents (Spronck & Ponsen, 2008). The use of learning algorithms such as genetic algorithms falls short of the requirements of commercial games because, while they may develop effective strategies, whether those strategies are fun to play against still needs to be tested after they are created. The approach is still effective at creating static strategies, and although it requires many battles to develop effective responses it may still be easier than developing those strategies by hand. This, however, makes the learning AI more of a development tool than an addition to the player experience. Often there are simpler ways to solve the problem that are sufficient. While not displaying the finesse of more complex solutions, simple solutions can often present a much better effort to enjoyment ratio (Gares, 2006).

        Learning algorithms can also be used to aid 'game balancing' (Spronck, Ponsen, Sprinkhuizen-Kuyper, & Postma, 2006) (Spronck, 2006), which has historically required thousands of hours of play testing. Because learning algorithms converge on successful strategies, if many of these strategies involve a particular unit then it is likely that the unit is unfairly powerful or too easy to acquire.

        Preliminary studies by Tan & Cheng (2008) on their multi-agent approach to learning group tactics show promise, but have only been tested on very simple scenarios at present. Tan & Cheng (2008) propose that a weakness of current architectures is that there has been much research on 'tactics' selection ("shoot", "attack" or "heal") and on 'strategy' selection ("flank" or "defend"), but no work on a unified model. The agents in their architecture communicate strategies up and down a command hierarchy, and these are translated into tactics by each agent. Separating the learning from the action selection process makes it possible to implement reinforcement learning, because while the results of selections are recorded as they happen, the expensive learning algorithms can be run either during times in the game when there are fewer calculations to handle or, for RTS games, after a battle is over. By evolving 'agent personalities', which are sets of weights applied to the chance of selecting strategies, agents learn to adopt strong strategies. Tan and Cheng admit that they are yet to evaluate the entertainment value of their architecture and hope to be able to do this in the future. It is still not certain whether the architecture will operate well when presented with the complex variety of units found in most games.
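
        As a sketch of how such personality weights might bias selection, the following Lua fragment performs a roulette-wheel choice over strategies; the representation is invented for illustration and is not Tan and Cheng's architecture.

    -- Roulette-wheel strategy selection biased by personality weights;
    -- after a battle, learning would nudge the weights of strategies
    -- that performed well.
    local function chooseStrategy(personality, strategies)
       local total = 0
       for _, s in ipairs(strategies) do
          total = total + personality[s.name]
       end
       local r = math.random() * total
       for _, s in ipairs(strategies) do
          r = r - personality[s.name]
          if r <= 0 then return s end
       end
       return strategies[#strategies]   -- numerical safety net
    end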




2.2.5 Conclusions from the Latest Research

        As has been shown, almost all the work in the field is focused on producing AI for computer players. While computer players get smarter and smarter, in recognition that RTS games are getting more and more complex, human players are having to play these more complex RTS games with very little improvement in their interface. Full Spectrum Commander (FSC) is somewhat unique in its efforts to provide a good user interface to intelligent squads for the training of officers (Van Lent, Fish, & Mancuso). The FSC project is attempting to provide a more tactically driven gaming experience by putting complex AI at the hands of the user. While FSC is not driven by entertainment, it is still entertaining and proves that it is possible to have humans play at a tactical level with the aid of AI to deal with micromanagement. The challenge of this thesis is to take a less extreme approach and still allow micromanagement of units while they are under AI control. Also worth noting is that most RTS games will not be willing to devote 60% of the processing power to AI.

        There are a number of considerations that can be taken from the experience of RTS AI research that relate to this thesis. The four important traits of the lower level AI (responding to orders, acting autonomously in certain situations, suspending current actions in favour of higher priority actions, and being notified by the game logic about events in the world) (Cerpa, 2008) are all relevant to the AI for assisting with micromanagement, because it has a similar purpose to the group/formation level of Cerpa's player AI. As the AI for this thesis will operate at a squad level, it should focus on reacting to the current situation; lower level AI that is reactive in nature displays satisfactory results and is far more efficient (Cerpa, 2008). Emergent behaviour is seen as a positive quality (Cerpa & Belleiro, 2008), as is loose coupling with the core game logic (Cerpa, 2008) (Van Lent, Carpenter, McAlinden, & Tan, 2004), so both of these should be considered. While the exploration of real-time learning is a fascinating combination of optimization and intelligence, it is yet to be applied to commercial games. This is because it is unpredictable, takes a long time to produce good results, and it is very hard to fix small bugs in behaviour without running the learning process again. Due to these complications learning does not feature in this thesis, but could be a focus of further study.




2.3 Micromanagement and Applying AI

        The issue of micromanagement in RTS games has existed since the beginning of the genre and has not improved as the complexity and scale of RTS games have increased. Many RTS games have implemented small, isolated solutions to particularly frustrating and predictable micromanagement situations that exist within the particular game. The Dawn of War series (published by Relic Entertainment in conjunction with Games Workshop, 2006) reduced micromanagement by putting the player in control of squads instead of individual units. While these are excellent RTS games, the squad interface is sometimes frustrating, because individual members of the squad do things the player does not want and there is no way to correct an individual unit's behaviour. In Age of Empires 2 (published by Microsoft, 1999) workers assigned to constructing resource gathering buildings automatically commence gathering nearby resources when they are done constructing. Age of Empires 2 also introduced an excellent array of unit formations, in which players have the option to make units take formations when selected as a group. These formations can be used to put weaker or long range units in the center or at the back of the formation, or used to spread out units so the group takes less damage from attacks with a wide area of effect. Most RTS games now allow units to autonomously attack units that come into their line of sight. These are all title specific, and only serve to alleviate very specific cases of micromanagement.




                     Figure 2.1 Player controls squads that are managed by AI

        One of the biggest concerns restricting the development of AI assistants for human
players is that players tend to get frustrated when the AI does things the player does not
want done. In other genres, every improvement in the AI of units or teams allied with the
player has been welcomed. This is due, in part, to the lack of control the user has had over these
aspects of the game in the past. For example, Non-Player Characters (NPCs) in Role Playing
Games (RPGs) have historically acted very autonomously, so any improvement in the NPCs' AI is
welcomed because it makes them better game companions. The difficulty presented in RTS
games is that the player has always had direct control over the actions of all the units and
buildings under their control and has been able to micromanage to get exactly the behaviour
they want.

        As a result of the difficulties involved in making helper AI in RTS games, most developers
have fallen back on the micromanagement approach. In War Leaders: Clash of Nations (Cerpa,
2008), though the AI was developed in well-defined layers (high level strategy selection, group/
formation control and individual unit AI), only a restricted subset of the unit level AI behaviours
and autonomy was made available to the human player. This was due to the belief that
micromanagement would create less frustration (Cerpa, 2008). Most commercial RTS games opt
not to take the risk of implementing AI that assists the player and resort to making the player
control all the actions their units take. The limited autonomy given to units puts more strain on
the player, but the understandable and predictable interpretation of orders is believed to make
for more enjoyable game play.

        In answer to these challenges, the group AI proposed in this thesis is designed not to
remove the ability of players to micromanage but to allow the player the option of assistance if
they want it. Players of TA Spring were impressed by the combat micromanagement system of
the opponent AI implemented by Stene (2006), so it would follow that players may enjoy having
similar AI functionality available to them, if it is flexible enough not to cause frustration. In this
proposal, to increase the level of control the user has over the AI, the user is responsible for
selecting groups of units and assigning an AI entity to each group. This, combined with the ability
to directly control unit actions when the player wants the units to behave in a specific way,
should minimise player frustration.




                  Figure 2.2 AI for groups managed by player

        To improve the chance of these proposed AI helpers being accepted by players, they need
to satisfy a number of criteria. When the player gives the AI orders they need some assurance
that similar orders in similar situations will be carried out in a similar fashion. Otherwise the
player cannot plan strategies, as they are never quite sure how the AI will interpret their orders.
In the inevitable situation that the AI is without exact orders from the player, it needs to act
intelligently within a limited scope of autonomy. For instance, the player will get frustrated if
groups of units start wandering across the map simply because they finished attacking the base
and the AI decides it would be good to explore the map.

        The idea of prioritizing actions is explored by Cerpa (2008), and methods for evaluating
how effectively units can complete tasks can be found in the robotics literature. Earl and
D'Andrea (2007) successfully applied such a method to the problem of coordinating a group of
vehicles in a game of ‘RoboFlag’, but it has not been applied to RTS games. The focus of Earl and
D'Andrea's study was the evaluation involved in task assignment: given a set of tasks and an
evaluation of how effectively members can complete those tasks, tasks are assigned so that
overall effectiveness is maximized. Applying this to an RTS should allow the group AI to utilize
the strengths and abilities of the members of the group and avoid their weaknesses.




                   [Diagram: several agents placing bids with a manager agent.]
                        Figure 2.3 AI Contract Net Task Distribution Scheme

        The bidding system in the AI for this thesis has some similarities to the multi-agent task
distribution scheme known as Contract Net (CNET) (Smith, 1977). In CNET, an agent broadcasts
tasks to other agents because it “feels” that it is not capable of completing the task itself. This
agent becomes the manager of that task. Other agents then place bids on the task; each bid
contains all the relevant information the manager needs to select the best agent or agents for
the task. Having established a “contract”, these agents complete the tasks and report back
to the manager when they are done or encounter a problem.

        The main difference between the proposed AI and CNET stems from the need for
optimisation, which favours a centralised bidding engine that calculates all bids from the unit
type definitions, each individual's current state and the relevant details of the task, over a strict
adherence to isolated agents. Also, there are few instances in an RTS where it is undesirable to
assign as many units as possible to a single task, so if there is only one task, all members of the
group capable of taking part should be assigned to it. This means that in the proposed AI, the
responsibility for choosing which task to take falls on the bidder rather than the task creator:
members of the group are assigned to the task they currently bid most highly for, regardless of
whether another unit has placed a higher bid on that task. Also, due to the dynamic nature of
the RTS environment, bids frequently have to be recalculated to take changes into account, and
members of the group re-assigned accordingly. While it can be conceptually convenient to think
of the units in this AI as agents, the system was developed in isolation from CNET and does not
follow many of the rules of that existing structure.




3 The Group AI System

        The proposed group AI system revolves around tasks and bidding. Tasks are created
by the AI in response to user orders to groups and to specific game events, and added to a list of
current tasks within each group AI entity. Units in the group have bids calculated for each
task. Units are then assigned to the task with their highest bid and take action to complete
it. The key is that if the user chooses to bypass the group command interface and give
orders to units directly, those orders are executed as a top priority, and only when these
instructions have been completed is control of the units handed back to the AI. This is
achieved by responding to the event of the player giving orders and creating an ‘under user
control’ task that overrides any other task. This allows the user the choice of micromanaging
or having the units controlled by the AI. These AI entities receive high-level orders from the
user and handle the details of executing each order. The human player can then give
orders such as defend ground, move to location, etc., and have the AI achieve the objective. This
AI architecture is flexible enough to handle any combination of unit types and any number of
individuals, and will help the human player to focus on higher level strategy.

       This chapter describes the proposed AI system. It begins by establishing the intended
relationship between player, AI and the game. Then the interactions between the internal
components that make up each AI entity are covered. This leads to an explanation of how this
system facilitates cooperative dual control of members of the group by both player and AI.
Finally it will provide a detailed look at some example ‘tasks’ as used by the AI and how their
bidding functions can be used to achieve the desired behaviour.


3.1 Relationship Between Player, AI and Game

        The human player is responsible for the creation, modification and removal of group AI
entities. This is achieved by providing a user interface where the user selects a group of units
and assigns a group AI entity to take control of those units. Removing a group AI entity places all
the units back under the sole control of the player. This gives the player the power to use the AI
as much or as little as they choose. As all players are different, some may decide the AI is good
for handling certain situations while preferring to manage other situations themselves.


                 Figure 3.1 Top level diagram of the interactions between player,
                 Group AI and The Game World.

        The user gives orders to the group through the user interface. They may also give direct
orders to units in the world in the traditional fashion. Orders to the group are interpreted into
tasks that the members of the group bid on. These bids are a representation of each member's
ability and desire to complete each task. Once members are assigned by the AI to the task they
bid most highly for, the group AI issues the appropriate orders to all the members of the group in
the game world. The game notifies the group AI of events such as deaths, an enemy entering
line of sight, a member taking damage, a member becoming idle, etc. These events are
interpreted into new tasks or, if the event was relevant to an existing task, that task is updated.


3.2 The Group AI Entity

        The internal workings of the AI can be conceptualized as six functional elements, two data
elements and an update cycle. The functional elements are the event handler, task creator, task
updater, bidding handler, task re-assignment and the issuing of orders to units. The data
elements are the list of all current tasks and the list of all members of the group with their
relevant information. The update cycle simply manages the continual cycle of updating task
information, calculating bids, reassigning members and giving orders, as sketched below.

        Player orders and game world event notifications are received as interrupts by the event
handler, which must compare the details of the interrupt with the current tasks to see if the
event or order should be sent to the ‘Create Task’ or ‘Update Task’ component. For example, in
the event that a member of the group takes damage, the first occurrence would create a heal
unit task but each occurrence thereafter would simply update the existing task. An example of
a player order is moving the group to a new location: this order could create or update the move
to location task.
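
        A hedged sketch of that create-or-update decision for the damage event, where
findHealTask and createHealTask are hypothetical helpers:

        function onMemberDamaged(group, unitID)
                local task = findHealTask(group.tasks, unitID)
                if task then
                        task.update(task)              -- later hits refresh the existing task
                else
                        createHealTask(group, unitID)  -- first hit creates the heal unit task
                end
        end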


                   [Diagram: the player and the game world both feed the Group AI's event
                   handler, which drives the Create Task and Update Task components; tasks,
                   member details, bidding, re-assignment and order issuing are linked by the
                   update cycle.]
                    Figure 3.2 Group AI component interactions

        The ‘Create Task’ component interprets events and orders into tasks, compiles the list of
group members that are capable of doing the task and initializes the task details based on the
information in the event (e.g., the target or position relevant to the task). The list of members
that can do the task is compiled by comparing the abilities of each member with those needed
for the task. At the start of the next update cycle the ‘Update’ component uses the definitions
inside the tasks to update the task details. ‘Update’ is responsible for checking relevant details in
the game world and updating these in the task. If this leads to the task being deemed
complete or invalid, the task is removed from the list and all members currently assigned to it
are freed. When the task is created, and as details in the task are updated, the
bids for each member that can do the task need to be calculated and recalculated accordingly,
so each update is followed by bid calculation.

        Finally, the ‘Re-assign’ component assigns each member to the task with their highest bid,
and ‘Give Orders to Units’ iterates through each task and issues orders to the currently assigned
members in accordance with the description in the task. The ‘Update Cycle’ runs these
components as often as possible and keeps everything up to date with the current state of the
game. Meanwhile, event notifications and player orders continually add tasks to, and update
tasks in, the current list held in the group.
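
        The re-assignment rule is simple enough to sketch directly. The field names below follow
the task structure described in Chapter 4; clearing of previous assignments is omitted for brevity:

        function reassign(group)
                for unitID in pairs(group.members) do
                        local bestTask, bestBid = nil, 0
                        for _, task in pairs(group.tasks) do
                                local bid = task.unitsThatCanDo[unitID]  -- nil if the unit cannot do it
                                if bid and bid > bestBid then
                                        bestTask, bestBid = task, bid
                                end
                        end
                        if bestTask then
                                bestTask.unitsCurrentlyAssigned[unitID] = true
                        end
                end
        end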

        All tasks and their relevant details are stored in a list of task objects within the group.
Relevant details include:

        -  Task type;

        -  A definition of the abilities needed to complete the task;

        -  The list of members that have all the necessary abilities;

        -  The current bids for all those members;

        -  The list of members currently assigned to the task;

        -  The bidding function: the function that calculates a bid based on all the relevant
           elements of the member's status and the task's status;

        -  The handling function: the instructions for members assigned to the task;

        -  The update function: the task's method of updating itself and checking whether it is
           still valid or finished;

        -  Other task specific information (such as a target game object or location).

        The behaviour of the AI is very much dependent on the definition of these task objects.
This is because, while it is convenient to explain the AI by referring to the update, bid, reassign
and give orders components, these are little more than loops over each task that call the
relevant function in the task objects. For example, the update loop (here written in Lua, the
implementation language) would be as follows:

        for _, task in pairs(taskTable) do
                task.update(task)
        end


        Within this structure it is simple to facilitate dual control of members by both the player
and the AI. This is achieved by interpreting the event of the user issuing orders to a unit as the
creation of a special task representing that the unit is under instruction from the player. This
task has an inflated weighting in its bidding function, but the only member placed in its list of
capable members is the one that was issued the order by the player. As a result this member,
and only this member, will immediately be assigned to this task. The task specifies that, while
assigned to it, the unit should not be given any orders, so the unit follows the orders of the
player as normal. The task is deemed finished when the unit completes all orders from the
player, and it is then removed from the list of tasks, after which the unit can again be assigned
to the other, normal tasks.
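
        A minimal sketch of this special task, with illustrative names; only the ordered unit
appears in its capable-members list, and its bid is effectively unbeatable:

        function onDirectPlayerOrder(group, unitID, playerOrders)
                group.tasks["playerControl" .. unitID] = {
                        unitsThatCanDo = { [unitID] = math.huge },  -- inflated bid
                        unitsCurrentlyAssigned = {},
                        playerOrders = playerOrders,  -- remembered so they can be reissued
                        giveOrders = function() end,  -- the AI issues no orders of its own here
                }
        end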


3.3 Tasks and Bidding Functions

        The list of tasks included in the AI would differ for each RTS environment, as every game
contains unique elements that set it apart from others in the genre. It is possible, however, to
construct a core list that would be shared across most, if not all, RTS games, including tasks such
as regroup, attack unit, heal unit and flee. The following sections show how the bidding
functions for these tasks can be established so that units behave in an intelligent, coordinated
fashion.




3.3.1 Regroup Bid

                If the member is within the bounds of the group then:

                       Regroup = 0

                Else:

                       Regroup = (distance from boundary)² / (member's movement speed)




Figure 3.3 Regroup Bid in relation to distance from the group: movement speed held constant.


                The group's boundaries should be based on the group's position and large enough to
comfortably hold the entire group. Distance refers to the distance between the group boundary
and the member's location; when the member is inside the boundary this is 0. The bid for the
task increases quadratically as the unit strays farther from the group, to represent that members
should not go chasing after tasks that are too far from the group's location. To make this the
dominant task, all other tasks should have generally linear bid functions. Members with a
greater movement speed have less fear of moving away from the group, which is represented by
dividing by movement speed.
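
        As a Lua sketch, assuming a distanceFromBoundary helper and a moveSpeed field on the
member:

        local function regroupBid(member, group)
                local d = distanceFromBoundary(member, group)  -- 0 or less when inside the bounds
                if d <= 0 then
                        return 0
                end
                return (d * d) / member.moveSpeed  -- quadratic growth; faster units bid lower
        end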




3.3.2 Attack Bid

                Attack = (member's power) x (target's threat) x 1.5, if the target is in range, or

                Attack = (member's power) x (target's threat) / ( (distance beyond range) / (range) ),
                         otherwise



 Figure 3.4 Attack Bid in relation to the distance from the target: member's power, target's threat
 and range held constant.

                The member's attack power is a value derived from the amount of damage the
member can do to the target. This value is a multiplier because it is desirable to shoot at the
target the unit will damage most. The threat of the target could be the amount of damage it is
expected to cause; as this increases the target becomes a higher priority, so this also
increases the bid. It is generally better to shoot things that are closer, because they are
most probably shooting you already or will be soon, and time spent moving instead of attacking
is a waste. To reflect this, the bid is reduced as the target gets further out of range. To further
encourage units to shoot targets they need not move to reach, the bid is multiplied by 1.5 if the
target is in range.
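
        Sketched in the same style, with distanceBeyondRange and the unit fields assumed:

        local function attackBid(member, target)
                local base = member.power * target.threat
                local d = distanceBeyondRange(member, target)
                if d <= 0 then
                        return base * 1.5                -- target already in range: boosted bid
                end
                return base / (d / member.range)         -- bid falls as the target gets farther
        end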


3.3.3 Flee Bid

                If the member is beyond 1.2 x the target's range:

                           Flee = 0

                Else:

                           Flee = (target's threat to member) x (health lost) / (max health)




Figure 3.5 Flee Bid in relation to the bidding member's current health: target's threat held constant.



                If the target is farther than its maximum range from the member then there is no need
to flee, so the bid should be 0. To allow some margin it is wise to consider fleeing slightly before
the member is in range, hence 1.2 x the target's range instead of 1.0. The threat to the member
will usually be measured as the amount of damage the target can do to the member. This
damage becomes more of a concern as the member's health is reduced, so it makes sense to
increase the bid as health decreases: health lost divided by the member's maximum health
increases from 0 to 1 as the member loses health.
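
        A sketch, again with the helper and field names assumed:

        local function fleeBid(member, target)
                if distance(member, target) > 1.2 * target.range then
                        return 0                                     -- safely out of reach
                end
                local healthLost = member.maxHealth - member.health  -- rises from 0 to maxHealth
                return target.threat * healthLost / member.maxHealth
        end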




3.3.4 Heal Bid

                    Heal = 1.5 x (target's health lost) / (target's max health), if the target is in range, or

                    Heal = (target's max health) / (target's current health) - (distance beyond range) /
                              (distance factor), otherwise




Figure 3.6 Heal Bid in relation to distance from a target at 50% health.





Figure 3.7 Heal Bid in relation to health of a target in range.



                Heal is an ability that worker units and some structures have. It represents the ability to
replenish the health or “hit-points” of other units and structures. In this task the target refers to
the unit that is in need of healing. The task becomes more urgent as the target's health
dwindles, hence the same ‘(target's health lost) / (target's max health)’ term from the flee task is
used. The healer must be within a minimum range of the target to perform the ability. As with
the attack bid, it is better to heal the units that are within range first and those closest next, as
moving takes time. To achieve this, the bid is reduced as the distance beyond range increases.
By dividing this distance by a constant, the distance at which the bid becomes less than 0 can be
controlled; at this distance, heal tasks will not be considered.
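
        A sketch, where DISTANCE_FACTOR is the tunable constant just described and the other
names are assumed; a bid at or below zero is treated as "do not consider":

        local DISTANCE_FACTOR = 20

        local function healBid(healer, target)
                local d = distanceBeyondRange(healer, target)
                if d <= 0 then
                        return 1.5 * (target.maxHealth - target.health) / target.maxHealth
                end
                local bid = target.maxHealth / target.health - d / DISTANCE_FACTOR
                return math.max(bid, 0)  -- zero bids are never taken
        end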


3.3.5 Player Controlled Bid

           Player Controlled = ∞ or (a very high value)


          [Chart: the Player Controlling bid plotted against the Regroup, Attack, Flee and
          Heal bids.]

     Figure 3.8 Comparison of Player Control Bid to other bids.



          To make player orders unquestionable, the bid for this task simply needs to be a
value higher than any other bid can achieve. If, however, it is desirable that units ignore player
orders in some very urgent situations with very high bids, then the bid of the player control task
can be set to a high value that lies just below the highest expected bids of those tasks. To clarify:
if a more realistic behaviour is desired where units refuse to charge into battle when they have
very low “health”, the player control bid should be set just lower than the maximum bid for
fleeing. A complication is that the player controlled task needs to remember the orders that the
player gave, so that when the unit comes back under the player controlled task these orders can
be reissued and the unit will carry out the player's original wishes.

3.3.6 Balancing

         As defined so far, the bids for the regroup, move, attack, flee, heal and player tasks seem
intuitively reasonable in isolation, but the system would not display the desired behaviour when
these bids are combined. The bid for the flee task would almost never be higher than the attack
bid, so units would never be assigned to it. In order to achieve the desired behaviour a
weighting factor is applied to each task bid, i.e.,

         Bid = (bid formula) x (weighting factor)

         By adjusting the weights so that bids fall into a suitably chosen range of values, the
desired behaviour can be achieved. If we take the move and attack bids to be the baseline and
assign them a weighting of 1, then the other bids need more weight to bring them into the range
that will produce the right behaviour. Units should attack more than they flee, so the flee bid
should be weighted to the point where the unit will flee rather than attack only when it is
severely low on health. A weight of about 10 achieves this.

         The heal bid would otherwise be very low, so it needs a large weighting. When
considering how high to make it, it is important to be aware that in most RTS games units with
the ability to heal are typically weak and have little or no combat ability. Because of this, the
attack and heal tasks do not need to be weighted relative to one another. Because healers tend
to be weak, they should flee from battle rather than die trying to heal members of the group,
and then heal members after the danger has passed. To represent this, the heal bid should be
lower than the flee bid most of the time. A weighting of about 12 achieves this.
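
         These weights can be collected in a simple table; the layout below is illustrative, with the
values taken from the discussion above:

         local taskWeights = { regroup = 1, attack = 1, flee = 10, heal = 12 }

         local function weightedBid(taskType, rawBid)
                 return rawBid * taskWeights[taskType]
         end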

         The behaviour that should be observed from the combination of these weighted tasks is
as follows. No unit should stray more than a distance of 25 from the group centre, because
beyond that the regroup bid exceeds 60, which outweighs all other possible bids. Combat units
should attack all the enemies in their range before attacking those outside their range, and only
flee when their health drops below 7 or 8 out of 30. If healers take some damage they will flee,
and healers will not take heal tasks where the target to be healed is in combat unless the target
is very low on health. After losing more than one third of their health, healers will flee and not
take any heal tasks in range of the enemy.

[Four charts: Regroup Bid vs. distance to group location (group radius = 10); Attack Bid vs.
distance from target (attack range = 10); Weighted Heal Bid (target in range) vs. target's health
(max health = 30); Weighted Flee Bid (target poses threat) vs. distance from group location
(group radius = 10).]

Figure 3.9 Regroup, Attack, Heal and Flee Bids with weights applied.




4 Implementation in the Spring Game Engine

        To test an implementation of this Group AI system, it was decided to modify an open
source RTS game. This was for two reasons: developing even a simple RTS game from the
ground up would take too much time away from the development of the AI, and using an
existing RTS shows that the AI has no special requirements and is more likely to be suitable for
RTS games in general. Of the available open source RTS engines, the Spring project (The Spring
Community, 2009) stood out for many reasons: its popularity, online development
documentation, completeness and mod friendly implementation. By far the most important
factor was that Spring is specially tailored to facilitate the development of scripts that players
can add to the game to assist them in a wide variety of ways. These scripts are written in Lua
(Ierusalimschy, Celes, & De Figueiredo, 2008), a common scripting language, and are referred to
as widgets or helper AIs.

        This chapter covers the details of implementing the AI system as a Spring widget and the
issues encountered along the way. First a short description of the Spring engine is provided,
then an outline of Spring mods. While there are many RTS games that run on the Spring engine,
only one was selected for development and testing of the AI: Purify Unify Reclaim Exterminate
(PURE, pronounced as ‘pure’). This RTS game is described, as are the reasons why it was
selected. Then there is a brief description of Lua widgets and how they interface with Spring.
The structure of the implemented AI is then described, with emphasis on the elements that
differ from the general structure described in the previous chapter. Slight adaptations and
additions were made in the interest of optimization, as PURE has no limit on unit production, so
games can potentially get very large. Finally, the implementation of tasks and the set of tasks
that were successfully implemented are covered.


4.1 Spring and PURE

        The Spring project has a large amount of support from many groups and individuals who
all contribute to its success. The Spring engine is based on the Original Total Annihilation (OTA)
(published by Cavedog Entertainment, 1997) but is not itself playable. To play an RTS on the
Spring engine you must download and add one of many Spring mods. Mods in Spring define
everything from unit type definitions, unit models, game rules and victory conditions, to the
user interface and more. Though Spring does require these mods to conform to some aspects of
OTA, mods can be made that present vastly different RTS games. This makes mods in the
context of Spring slightly different from conventional mods: instead of modifying the game,
Spring mods are the game. So when playing Spring, the player selects a map and then also
selects which mod to play on that map.

        PURE is one of these mods. It was selected to test the Group AI for three reasons: after
testing a number of mods, PURE was the most complete, was largely bug free, and did not
require OTA files. As Spring and all its mods are open source, mods are often in varying stages of
development or reworking. PURE, at the time of commencing this thesis, was a fully functional
RTS game that contained all the elements needed to test the AI. Many of the other mods tested
contained bugs that would cause the game to crash or stop working as it should. No bugs that
caused PURE to crash were found, and almost all the units behaved as they should. Some small
bugs were found: units occasionally get stuck exiting from factories, and units that become
stationary when attacking refuse move orders after they start attacking. These bugs did not
affect the AI significantly, so were deemed acceptable. The third advantage of PURE is that it
does not require any OTA content. As many Spring mods are based on OTA, and 3D model
creation and animation is a very time consuming and difficult task, those mods import the
models and animations from OTA. This requires you to have OTA installed, or at least to have
the necessary files on your computer, to install and run those mods. PURE has no such
requirement.

        PURE is a basic RTS that follows all the normal patterns of a military domination RTS
game. Like most Spring mods, it conforms to the general OTA resources: electricity and metal.
Metal is extracted from deposits in the ground by structures called ‘metal extractors’. Electricity
is generated by ‘power plants’ that work anywhere on the map. Players start the game with a
commander and an initial stockpile of resources. The commander is a unit that has formidable
combat capabilities, can construct basic structures, can repair units and structures, and can
assist factories to increase their rate of unit production. With this commander you construct
your first few power plants and metal extractors, using the initial resources.



        Now that you have a flow of metal and electricity coming in, you can build a factory. The
factory can construct many different types of land based units. Initially you build workers and
cheap military units. Workers have a larger selection of structures they can build, can also repair
structures and other units and assist factories, but conform to the standard worker definition
and have no combat capability. These workers are used to construct more resource structures,
fortify the base by constructing defensive structures, and increase unit production by assisting
the factory. As the amount of resources you collect increases, you can start to construct
progressively more powerful and expensive units and structures. After this stage there are many
options you can take: building an airfield allows you to construct bombers, fighters and
transports, and immobile construction towers can be used to assist factories and to repair and
protect base defences. All the while you must be fending off attacks and trying to gain the upper
hand to defeat your enemies.


4.2 Lua scripting and widgets

        A relatively recent development in the Spring engine is the use of Lua widgets to allow
programmers to add additional functionality to the game. Such additions were historically
programmed in C++ and were difficult to update and maintain, with frequent new versions of the
Spring engine causing them to become faulty. Lua widgets come with two main advantages: the
interface with the game can be maintained over successive engine versions, and, because
widgets are interpreted by the engine at run time, there is no need to recompile the game to add
or update them.

        This section establishes how widgets communicate with the game engine and how event
notifications work. This is followed by a description of the basic user interface that was created
to manage the creation and removal of group AIs during the game. Finally, the Lua data
structure known as a table is discussed, along with how it affected the implementation of the
Group AI.


4.2.1 Call-Ins and Call-Outs

        Widgets interface with the game engine through a set of call-ins and call-outs. Call-ins
refer to code in the widget that is called by the engine. For example, ‘UnitEnteredLos’ is a
predefined call-in that is run whenever a unit enters the player's line of sight. So if this function is
defined in a widget:

        function widget:UnitEnteredLos(unitID, unitTeam)
                unitsSighted = unitsSighted + 1  -- Lua has no ++ operator
        end


then the variable ‘unitsSighted’ would be incremented every time a unit from another player
entered the line of sight of one of your units. Call-outs refer to functions implemented in the
engine that can be called from the widget. One of the most important call-outs for the AI is:

        Spring.GiveOrderToUnit(ID, command, commandParameters, Tags)


        This call-out tells the engine to issue the designated ‘command’ to the object in the game
with the id ‘ID’. ‘commandParameters’ holds all the details for the command and ‘Tags’ specifies
whether the alt, ctrl, shift or meta keys were pressed. With the provided set of call-ins and call-
outs, widgets can be notified of game events, query the game for information and give orders
to units and structures.
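
        For example, the following hypothetical call orders a unit to move; CMD.MOVE is Spring's
built-in move command constant, and the empty table means no modifier keys are applied:

        -- order the unit with id unitID to move to the map position (x, y, z)
        Spring.GiveOrderToUnit(unitID, CMD.MOVE, {x, y, z}, {})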

        Widgets are restricted in the events they are notified about, and in the information they
can obtain from the game. These restrictions are the same as those faced by the human
player. For example, a widget is not allowed to get the health or position of enemy units unless
they are currently in the line of sight of an allied unit or structure. This is to ensure that people
can't create widgets that cheat. While this is limiting, it also has one significant advantage:
because widgets interact with the game in the same way as the player does, they can operate
completely on the client side of multi-player games. This means that a player can choose to run
a widget even if other players in the game do not have it.


4.2.2 The User Interface

        As widgets in Spring are used to add extra functionality, they have the ability to add
elements to the user interface. This was used in the implementation of the AI to provide a
simple GUI for the user to manage groups. When a player activates the Group AI, a new
command panel appears that initially contains two buttons. “Add Group” creates a group by
assigning an AI entity to be in charge of all the units that are selected when the button is clicked.
When a group is created, a button for it is added to a stack at the bottom of the new command
panel. In the figure below, there are two buttons at the bottom of the “Group AI Panel”,
indicating the player has created two groups and assigned a separate AI to each of them. These
buttons are used to select a group so the player can give orders to it. When a group is selected,
the player can choose to remove it by clicking on the “Remove Group” button, which simply
destroys the AI entity, so all the units that were in that group revert to their default behaviour.



[Screen layout: Mini Map (top left), View of the Game World (centre), Original Command Panel
(bottom left), and the new Group AI Panel (right) containing the “Add Group” and “Remove
Group” buttons above a stack of group buttons (“Group 0”, “Group 1”).]
Figure 4.1 PURE Graphic User Interface with the addition of the Group AI




4.2.3 Lua Table

        Before going into tables, we must first establish what a variable is in Lua. Any variable in
Lua actually contains two parts: a type and a value. A variable's type is defined by the data
assigned to it and is not constant: a number variable becomes a string variable if a string is
assigned to it. The five basic types a Lua variable can take are: number, string, boolean, nil
and function. Number, string and boolean are nothing special, and nil represents the absence of
data. The interesting one is function: functions are a basic variable type. This means that Lua
can be used to do things like:

       function add(a,b)
                 return a + b
       end
       ...
       X = add
       Y = X(10,20)


This Lua code snippet would result in Y taking the value 30.

        Tables are the only data structure available in Lua, but they are incredibly flexible. Fields
or cells in a table can be indexed by numbers:

        Xtable[3] = "cat"

or by strings:

        Xtable["dog"] = 5


        And the values stored in tables can be strings, numbers, booleans, functions or other
tables. This ability to dynamically assign functions to tables indexed by strings was used in the AI
to create each task as a table that contains all the relevant information for that task and all the
functions related to it, for example:
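
        local task = {}
        task["target"] = someUnitID                          -- data field (hypothetical id)
        task["bid"] = function(unit) return unit.health end  -- function field

The implications of this will become clear in the next section.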




4.3 Implementation

        This section describes the actual system that was implemented in the Spring engine and
tested on the PURE mod. The system relies heavily on the definition and creation of tasks, so we
start with a description of how a task definition is created. This leads on to a description of how
instances of a task are created and how the system uses these tasks to control the members of
the group assigned to the AI. This section focuses on tasks and task definitions without any
specific task type in mind; the specific tasks defined for the group AI in PURE are covered in the
next section.


4.3.1 Task Definitions

        While Lua is a procedural language, tables can be used to follow object oriented
methodologies. In an object oriented sense, a task definition can be thought of as a class that
defines a task object. All task definitions contain the same functions: bid, update, handle, pre-
calculate and create. Pre-calculate is a function that does not feature in the theoretical model; it
was included for runtime optimisation and is discussed below. For those who like object
oriented terminology, the create function is best thought of as a constructor. Following this
perspective, the different task types can be seen as inheriting from the same abstract base class
of ‘task’ and being used by other elements of the AI system without those elements needing to
know the specific type of the task or how it is implemented. Each element of the task definition
is stored in a table and used by the create function to build task tables that contain all the data
and functions needed by other sections of the AI system.

        So a generic description of a task in the list of tasks for a group looks like this:

TaskTable[taskID] = {
        TaskType = (task type name, for debugging only),
        unitsCurrentlyAssigned = {ID = true, ID = true, ...},
        unitsThatCanDo = {ID = current bid, ID = current bid, ...},
        handlingFunction = (the function that handles units
                        assigned to this task),
        updateFunction = (the function that updates the task's
                        parameters and checks if the task is finished
                        or unachievable),
        bidFunction = (the function that calculates bids for this
                        task),
        taskParameters = {
                -- The content of this table differs for different
                -- tasks and situations. It is passed to the
                -- handlingFunction and bidFunction along with the
                -- unit ID. It is filled when the task is created and
                -- updated by the update function.
                },
        }

4.3.1.1 Pre-calculations Function
        When creating the bid functions, it was discovered that a high proportion of each bid
calculation was not related to the individual units in the game. All the elements of the equation
relating to information in the unit type, together with the weighting factor, could be calculated
once, when the AI is initialised, and stored in a reference table. These include calculations
relating to maximum health, attack strength, move speed and more. The only things that differ
between units of the same type during a game are their position and current health. Therefore,
a bidding equation such as this one for the attack task:

       Bid = (member’s power) x (target’s threat) / ( (distance beyond range) / (range) )
             x (weighting)

can be split out into a 2D table of pre-calculated values, called the Attack Pre-Calculations Table
(AttackPCT), indexed by the two unit types:

        AttackPCT[attacker unit type (A)][target unit type (B)] = (A's power) x (B's threat)
              x (A's attack range) x (weighting)

This table can then be used at run time in the bidding function like this:

        Bid = AttackPCT[member's unit type][target's unit type] / (distance beyond range)

A separate pre-calculations table was defined for each task: Attack, Move, Flee and Heal.
Bidding is one of the most time consuming and computationally expensive elements of the AI, so
this optimisation drastically increased its efficiency.


        The pre-calculation tables (PCTs) were also used to check which units are capable of
participating in a task. If the PCT contains the boolean value false, then units of that type are not
able to do the task. A simple example of this is the move task: if a unit type is unable to move
(e.g., a structure), then the PCT contains false for that unit type, so units of that type need not be
considered for bidding on the task. This becomes more advantageous as tasks get more complex
and require a combination of checks to see whether a unit can participate.

        The PCTs are all calculated in a generic loop in the widget initialisation, sketched below.
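
        A hedged sketch of that initialisation loop for the attack PCT. UnitDefs is Spring's table of
unit type definitions; canAttack, power, threat, range and the weighting stand in for the thesis's
own derivations from those definitions:

        local ATTACK_WEIGHT = 1
        local AttackPCT = {}
        for aID, aDef in pairs(UnitDefs) do
                AttackPCT[aID] = {}
                for bID, bDef in pairs(UnitDefs) do
                        if canAttack(aDef, bDef) then
                                AttackPCT[aID][bID] = power(aDef) * threat(bDef)
                                                * range(aDef) * ATTACK_WEIGHT
                        else
                                AttackPCT[aID][bID] = false  -- this type cannot take the task
                        end
                end
        end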

4.3.1.2 Bid Function
        Each task contains a list of the members of the group that are capable of doing it. The bid
function runs through each of these members and calculates a bid based on the member's
current state, the pre-calculated values for its unit type and the details of the task. These bids
are stored with the list of members that can do the task and are compared in the reassignment
stage of the update cycle.

        In principle, the bidding function needs to be called whenever the details of the task
change or whenever the state of a member changes in a way that will affect its bid. Changes to
task details include things like the target moving or losing health; when this happens, all the
members listed in the task should have their bids recalculated. Members of the group change
state so frequently, however, that it was more computationally expensive to keep track of
changes in members' states than to simply call the rebid function on all members at regular
intervals. This produced satisfactory results with much less computation and less confusing
code.

        The bid functions for all task types are called in a loop that constitutes the ‘second stage’
of the update cycle, sketched below.
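
        Using the task table fields from above, and assuming a group variable that holds the
current task list, that loop amounts to:

        for _, task in pairs(group.tasks) do
                for unitID in pairs(task.unitsThatCanDo) do
                        -- recalculate and store each capable member's bid
                        task.unitsThatCanDo[unitID] =
                                task.bidFunction(unitID, task.taskParameters)
                end
        end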

4.3.1.3 Handling Function
        The handling function within each task loops through each of the units currently assigned
to the task and issues them the appropriate orders. In this Spring implementation the handling
functions were very simple; for example, the handling function for the attack unit task ordered
each of the members assigned to the task to attack the target. This was complicated slightly by
the need to check that the order was still valid: because of the very dynamic nature of the game,
it cannot be guaranteed that the target has not died or gone out of sight between the update
and the handling function being called.
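
        A sketch of the attack handling function with this validity check; Spring.ValidUnitID and
CMD.ATTACK are Spring API, while the task fields follow the table above and the rest is
illustrative:

        local function handleAttack(task)
                local targetID = task.taskParameters.targetID
                if not Spring.ValidUnitID(targetID) then
                        return  -- target has died or is otherwise gone; skip this cycle
                end
                for unitID in pairs(task.unitsCurrentlyAssigned) do
                        Spring.GiveOrderToUnit(unitID, CMD.ATTACK, {targetID}, {})
                end
        end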

        The handling functions for all tasks are called in a loop that constitutes the ‘last stage’ of
the update cycle.

4.3.1.4 Update Function
        The update function calls functions in the Spring engine to update the task parameters
and check whether the task is done or invalid. If the task is finished or invalidated, the update
function returns a value less than one and the update cycle removes it from the list of tasks
stored in the group; all members of the group that were assigned to that task are then
reassigned to other tasks. The update function is called for every task at the start of the update
cycle, but can also be called as a result of an event notification. As a result, the update function
must, as a minimum, take the task's parameters as its first argument, but it can have any number
of optional arguments that are used when the function is called from an event notification. An
example of this is the group formation task, which is updated when the player right clicks on the
map while the group is selected: its update function takes additional optional x, y and z
arguments that are used when the update is called from the mouse click event. Because these
arguments are optional, the update function can still be called in a generic context in the loop at
the start of the update cycle, as sketched below.
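
        A sketch of such an update function, with the parameter fields assumed:

        local function updateFormation(taskParameters, x, y, z)
                if x then  -- present only when called from the mouse click event
                        taskParameters.location = { x = x, y = y, z = z }
                end
                -- generic per-cycle checks would go here
                return 1   -- the formation task is never finished or invalid
        end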

4.3.1.5 Create Function
        The create function is responsible for creating the table that represents a task and adding
it to the list of tasks in the group. This involves assigning all the functions, constructing the list
of members of the group that are capable of taking the task, and initialising the first values of
the task parameters. The create function defined for each task copies all the function pointers
from the task definition table into the task. These create functions are called when the AI
receives certain types of game event notification; for example, the ‘unit enters line of sight’
event creates an attack unit task.


4.3.2 Putting It Together

        The interactions between all the elements of the system are shown using colour coded
arrows in the figure below. Some trivial interactions between components and the User
Interface (UI) were left out because they do not affect the AI functionality and would only
complicate the diagram. Also, because all tasks are implemented with the same functional
components, the diagram shows only one task definition; in reality there are many, each with its
own set of interactions.

        ‘Initialise Widget’ is run when the widget is activated and performs all of the pre-
calculations. The UI creates and destroys group AI entities, is responsible for drawing and
updating the graphic user interface, and handles button functions such as group selection. The
game event handlers are actually a collection of completely isolated call-ins that are run in
response to events in the game; these functions call the create and update functions of the
relevant tasks in the associated group. The mouse event handler is a similar set of event
notification functions, but is represented as a separate component because these functions
interact with the UI. These event notification functions are how the AI creates tasks in response
to game events and user orders.

        When a group is formed, a default set of tasks is created for it; in this implementation this
consisted only of the group formation task. Once the group is formed, tasks start being
generated and added to the group, and the update cycle is called to update them. The update
cycle is called by the engine as often as possible, typically 2 or 3 times a frame during testing, but
to reduce needless computation the update function was adjusted to run only once per frame.
To further reduce the load of updating, only one group is updated per frame, so the groups take
turns over successive frames.

       The update and bidding functions are called for each task in the list. Update uses the
information in the task and updates it to keep it in line with the current state of the game. The
update function is also responsible for deciding whether the task is still valid, and it checks
whether the task parameters have changed enough since the last bid to need a rebid. If update
returns less than 1, the task is removed. The bidding function is called after each update, if
necessary, and uses the task information to calculate bids, which are then stored in the task. At
the end of the update cycle, members are assigned to the task for which they hold their highest
bid at that time, and the appropriate orders are issued to them by calling the handling function
for each task.
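
        The cycle just described can be summarised in a short Lua sketch. The rolling counter
that rotates between groups follows the description above; the helper
AssignMembersToHighestBids is a hypothetical name for the reassignment stage:

    local groupIndex = 0
    function UpdateCycle(groups)
        groupIndex = (groupIndex % #groups) + 1   -- one group per frame
        local group = groups[groupIndex]
        for i = #group.tasks, 1, -1 do
            local task = group.tasks[i]
            if task.update(task.params) < 1 then
                table.remove(group.tasks, i)      -- finished or invalid
            elseif task.params.needsRebid then
                task.bid(task.params)             -- recalculate stored bids
            end
        end
        -- Reassign members to their highest bids and issue orders through
        -- each task's handling function.
        AssignMembersToHighestBids(group)
    end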




[Figure 4.2 is a colour-coded diagram. Its functional components are Initialise Widget, the game
event handlers, the mouse click handler, the UI, and Spring (with the mod); its data components
are the task definitions, the pre-calculation tables, the group info, the unit groups and the
current tasks. Arrows marked "calls", "creates", "is used by" and "updates" connect these
components through the create, update, bid and handle functions of a single representative
task.]

Figure 4.2 Interactions between elements of the Group AI system


4.4 Implementation of Tasks

        This section will describe the definitions of the tasks that were implemented in the Group
AI for PURE. Five tasks were implemented: Flee, Heal, Attack, Group Formation and the task
representing the Player’s Orders.


4.4.1 Group Formation

        Group Formation is the task that keeps the group together and near the location that the
player selects for the group. This task is effectively the default task: it is never complete or
invalid, and it is created when the group is first formed. When the player selects a group of units
and assigns an AI to them, the location of the group is calculated as the "centre of mass" of the
group by taking the average position of all the units in the group.

        The group’s position can be updated by selecting the group and right clicking in the map.
This results in the task parameters being updated to contain the new group location. If the player
holds ‘shift’ while clicking on the map, then each successive position is added to a list. The group
moves to the first position in the list and waits for all members of the group to be within a
minimum distance of that position before popping it of the list. This results in all members of the
group moving to the next location. When there are no more locations in the list, the group stays
at the final location.
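
        A small Lua sketch of this waypoint queue is given below. DistanceToPoint stands for a
distance calculation done in Lua script (no engine call for unit-to-point distance was found), and
the field names are assumptions:

    local function updateWaypoints(params, groupUnits)
        local target = params.waypoints[1]
        if not target then return end         -- no waypoints left: stay put
        for _, unit in ipairs(groupUnits) do
            if DistanceToPoint(unit, target) > params.minDistance then
                return                        -- a member is still en route
            end
        end
        table.remove(params.waypoints, 1)     -- all arrived: pop the waypoint
        params.location = params.waypoints[1] or target
    end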

        The group’s boundaries were defined as circle centred on the group’s location. The
radius of this circle depends on the number of units in the group. The bidding function for this
task, when the unit is beyond the boundaries of the group, is as follows:

        Bid = ((distance between unit and group location) – (the groups radius))2 * weighting

        The weighting applied was 0.02. When the unit is within the boundaries of the group the
move bid is set to 1. The reason for having set to 1 and not 0, is that this 1 sets the minimum bid.
Some of the other tasks have bidding functions that tend toward 0 but never reach it, so this
minimum bid prevent units from being assigned to these tasks.
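
        Expressed as Lua, the bid might look like the sketch below (DistanceToPoint is again a
hypothetical helper; the 0.02 weighting is the value reported above):

    local GROUP_WEIGHT = 0.02
    local function formationBid(unit, group)
        local d = DistanceToPoint(unit, group.location)
        if d <= group.radius then
            return 1                        -- minimum bid inside the circle
        end
        return (d - group.radius)^2 * GROUP_WEIGHT
    end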




                                                                                                 47
4.4.2 Attack Unit

       An attack unit task can be triggered in a number of ways: when an enemy unit comes
into line of sight or radar range, or when an enemy unit damages a member of the group. The
attack task is not as complicated as the group formation task. One task is generated for each
enemy unit, and its only parameters are the target's ID, type, position at last rebid and current
position. The type is used in the bidding function to calculate the target's threat level and how
effective group members are at attacking such targets. The ID is used by the handling function,
which gives all units assigned to the task the order to attack the target.

       The two positions are used to reduce the amount of rebidding done in attack unit tasks.
Because a group can acquire many attack unit tasks, and usually most members of the group have
attack capabilities, rebidding becomes very expensive. To reduce the frequency of rebidding, the
task is only deemed to need a rebid when the target moves far enough from the position it
occupied at the last rebid. This distance calculation is done in the update function: if the distance
is large enough, the rebid flag in the task parameters is set to true; otherwise it is set to false.
When the bidding function is called after the update function, a rebid is performed if the flag is
true, and the position at last rebid is overwritten with the current position of the target.
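
        A sketch of this check in Lua follows. Spring.GetUnitPosition is an actual Spring engine
function that returns nil for units that are no longer visible; the rebid threshold and the Distance
helper are assumptions:

    local REBID_DISTANCE = 100   -- assumed tuning constant
    local function attackUpdate(params)
        local x, y, z = Spring.GetUnitPosition(params.targetID)
        if not x then return 0 end   -- target out of sight: task is invalid
        params.currentPos = { x = x, y = y, z = z }
        params.needsRebid =
            Distance(params.currentPos, params.lastRebidPos) > REBID_DISTANCE
        return 1
    end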

       The attack unit task is finished when the target is "dead" or "destroyed", and the task
becomes invalid when the target goes out of line of sight. The task becomes invalid at that point
because the Spring engine will no longer allow the AI to query the unit's position once it is not
visible, so the task can no longer be updated or effectively bid on.

       The pre-calculation table for the attack class (AttackPCT) is a 2D table that contains all the
unit-type-related calculations for every combination of attacking unit type and target unit type.
This table is especially useful for simplifying the run-time check of whether a unit type is capable
of attacking the target. The pre-calculated section of the bidding equation, for unit types that
have combat abilities and can attack the target unit type, is as follows:




        AttackPCT[attacker type][target type] = ((attacker damage per second to target) +
               (average damage per second the target can inflict)) x
               (attacker's maximum weapon range) / (attacker's maximum health) x
               (task weighting)

        If the unit type has combat abilities but is unable to attack a particular unit type (such as a
ground unit that can’t fire at flying units) then

        AttackPCT[attacker type][target type] = false

If the unit type has no combat ability at all then

        AttackPCT[attacker type] = false

The remainder of the attack unit bid is related to the relative positions of attacker and target and
the attacker's current health. If the target is in range:

        Bid = AttackPCT[attacker type][target type] x (attacker's current health) /
               (attacker's maximum weapon range) x 2

If the target is out of range:

        Bid = AttackPCT[attacker type][target type] x (attacker's current health) / (distance)²

The only exception to this is for structures that cannot move: if the target is out of range, the
bid is 0.
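
        Combining the table lookup with the run-time factors gives a bid function along these
lines (Spring.GetUnitSeparation is an actual engine function for unit-to-unit distance; the unit
fields are illustrative assumptions):

    local function attackBid(unit, params)
        local pct = AttackPCT[unit.type] and AttackPCT[unit.type][params.targetType]
        if not pct then return 0 end       -- cannot attack this target type
        local d = Spring.GetUnitSeparation(unit.id, params.targetID)
        if not d then return 0 end
        if d <= unit.maxWeaponRange then
            return pct * unit.health / unit.maxWeaponRange * 2
        elseif unit.isStructure then
            return 0                       -- immobile structures cannot close in
        end
        return pct * unit.health / d^2
    end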


4.4.3 Heal Unit

        The heal unit task is created in response to the unit damaged event. The "target" of a
heal unit task may or may not be a member of the group, which means that units capable of
repairing will repair any unit that takes damage near the group. To keep a heal unit task there
must be at least one member of the group capable of repairing the "target"; otherwise the task
is dropped. The task is deemed finished when the "target" unit is back to full health. Updating
the task involves checking the target's current health and position. If there is a significant
difference in the target's health since the last rebid, then the rebid flag is set to true.




When units are assigned to the heal unit task the handling function simply issues an order to
repair the target unit.

       The pre-calculated section of the heal bid is stored in a one-dimensional table (HealPCT)
that holds one set of calculations per unit type. The initial equation was:

        HealPCT[unit type] = (repair speed) x (weighting)

And the runtime bid when the target is in healing range was:

        Bid = HealPCT[unit type] x 4 / (target's current health)²

When the target is out of range:

        Bid = HealPCT[unit type] / (target's current health)² / ((distance beyond range) / 20)

        There was a slight balancing issue in the test system, caused by a particular unit type that
possessed the ability to repair units as well as having excellent combat capabilities. Because this
unit was so good at attacking, it was never assigned to the heal unit task while in battle. To
combat this, the bid equation was altered. The alteration uses the knowledge that units with
good combat skills also have high health levels. While this value is unrelated to a unit's ability to
heal, it was used to raise the heal bid for units with combat capabilities without affecting the unit
types with little combat capability. So the final bidding function that displayed reasonable results
was altered by changing the pre-calculated section to:

        HealPCT[unit type] = (repair speed) x (weighting) x (max health)²

        Using this bidding function in the first implementation of the tasks still had some issues.
Units would never get fully healed: as soon as a unit had more health than another unit, the bid
would be reduced, and the healer would move to healing the other unit, only to switch back
moments later. To stop this behaviour, a unit's bid for a heal task is never reduced as the
"target" gains health. The result is that healers will finish healing one unit unless another unit
loses more health than the current target had lost at its lowest point. This slight change
resulted in much better behaviour.
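
        One way to realise this "never reduce the bid while the target heals" rule is to store each
unit's best bid so far in the task, as in the sketch below (InHealRange, DistanceBeyondRange and
the field names are hypothetical):

    local function healBid(unit, params, task)
        local pct = HealPCT[unit.type]
        if not pct then return 0 end
        local bid
        if InHealRange(unit, params.targetID) then
            bid = pct * 4 / params.targetHealth^2
        else
            local over = DistanceBeyondRange(unit, params.targetID)
            bid = pct / params.targetHealth^2 / (over / 20)
        end
        -- Never let the bid drop as the target regains health, so healers
        -- finish the job instead of flicking between damaged units.
        local best = task.bestBids[unit.id] or 0
        if bid < best then bid = best end
        task.bestBids[unit.id] = bid
        return bid
    end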




4.4.4 Flee

        The flee task is created at the same time as the attack task to represent the choice
between fight or flight. The flee task is slightly more complex than the other tasks because
creating a flee task for each enemy was unnecessary, inefficient and could not support the level
of situational awareness necessary to flee in the right direction. For these reasons, instead of
generating a flee task for each individual enemy unit, one flee task was created for each enemy
unit type. Because the desire to flee is predominantly based on the enemy unit type and the
member's current health, this was sufficient to generate a suitable bid.

        Similar to the attack unit task, the table of pre-calculated values for the flee task
(FleePCT) is a 2D table covering all possible combinations of attacking and fleeing unit types. If
the unit type cannot move, then fleeing is not an option, so:

        FleePCT[unit type] = false

If the enemy unit type has no attack capability, then there is no need to flee, so:

        FleePCT[unit type][attacker's unit type] = false

Due to the nature of the bidding equation, the only components that could be pre-calculated for
all other situations were the damage and the task weighting:

        FleePCT[unit type][attacker's unit type]
        = (attacker's average damage to fleeing unit type) x (weighting)

The bidding process was made slightly more complicated because the bid was based on the
distance from the closest enemy unit being tracked by the flee task and the amount of health the
fleeing unit had lost. So each time the bidding function was called, it was necessary to calculate
and compare many distances between enemy units and members of the group. This process was
only feasible because the distance between two units can be obtained with a call to the game
engine, which is much faster than doing the calculation in Lua script. Once the distance between
the closest enemy unit and each member in the task was calculated, the bidding function was as
follows:

        Bid = FleePCT[unit type][attacker's unit type] x (amount of fleeing unit's health lost)
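
        A sketch of this in Lua is given below (Spring.GetUnitSeparation is the engine call used
for unit-to-unit distance; the parameter fields are assumptions):

    local function fleeBid(unit, params)
        local pct = FleePCT[unit.type] and FleePCT[unit.type][params.enemyType]
        if not pct then return 0 end          -- cannot move, or no threat
        -- Find the closest enemy of this type tracked by the task.
        local closest = math.huge
        for _, enemyID in ipairs(params.enemyIDs) do
            local d = Spring.GetUnitSeparation(unit.id, enemyID)
            if d and d < closest then closest = d end
        end
        if closest == math.huge then return 0 end  -- no resolvable enemy
        return pct * (unit.maxHealth - unit.health)
    end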



The initial proposal was to have units flee in the opposite direction to the enemy units within
range of the fleeing unit. Due to a lack of experience with the Spring engine this functionality
could not be achieved; instead, units flee to the group's current location. In almost all situations
this resulted in satisfactory behaviour, because attacking units tended to move away from the
group's location to attack enemy units as they approached, so the group's location was almost
always back from the "front line". In addition, because all units on low health fled to the same
location as the healing units (which, having no combat ability, also tend to flee to the centre of
the group), fleeing units could be repaired and then move back into the action.


4.4.5 User Task

        This is the most important task in the system: it is this task that allows the player to
override the AI's control. For this implementation it was decided that the user's orders would be
the highest priority; not even extremely low health would allow a unit to override the player's
order. For this reason the bidding function was simply a very large constant:

        Bid = 10⁸

A user task is generated for each member of the group that receives a direct order from the user.
When the task is created, only the unit that was given the order is included in the list of units
that can do the task. The task keeps track of all the instructions the player has given to the unit;
it is notified as the unit finishes each of these orders, and it is deemed finished when all the
orders have been completed or the unit becomes idle. Because the player's orders are deemed
unquestionable, there is no need for the AI to process the orders that were given: as long as the
AI issues no orders to the unit while it is in this task, the unit will behave exactly as it would if
the player were playing without the AI.

        The only complication of this task was that it was found necessary to assign the unit to
the task as soon as it was created. This meant breaking the general conventions slightly by
assigning the unit to the task at the end of the create function. This was the only way to
guarantee that the AI would not issue any orders to the unit before the update cycle had passed
through the reassignment stage.




        So with the addition of this very simple task, whose bidding, handling and pre-calculation
functions contain almost nothing, the player is free to control units exactly as they would if the
AI were not present.




6 Results

        Two types of test were run to establish the viability and effectiveness of the AI. To test
the reduction in micromanagement, the number of mouse clicks was recorded each minute over
a number of rounds of PURE; the mouse clicks for runs that did not use the AI are compared to
runs where the AI was used.

        To test the viability of the AI, the percentage of game time spent doing AI calculation was
recorded and plotted against a measure of growth in the number of bids. In addition to these
measurements, the visual behaviour is also discussed: a subjective evaluation was done to
ascertain whether the assigned groups appeared to behave intelligently "as a group". Game AI
needs to be more than just efficient and effective; it must look intelligent too.


6.1 Efficiency Testing

        Most of the processing time for the AI is spent calculating and recalculating bids for all
the tasks for each member of the group. For the efficiency testing, when multiple groups were
used at once, groups of approximately the same size were used to simplify calculations. In
practice this would commonly be the case, as players tend to manage the game by dividing their
forces into roughly equal-sized platoons or squads. Players could, of course, vary group sizes
greatly, but the use of similar-sized groups reduced the overhead of the calculations involved in
the testing.

        The overhead of tracking the exact number of bids at any given time was too large to
generate meaningful efficiency results. As a compromise, the total number of tasks and units
across all groups, together with the number of groups, was used to approximate the number of
bids. For each group, the total number of bids (B) grows as a function of the number of members
(M) in the group times the number of tasks (T):

        B ∝ M x T

        With multiple groups of approximately the same size, the total number of bids across all
groups can be expressed in terms of the total number of members, the task counts and the
number of groups. If M_i is the number of units in group i and T_i is the number of tasks, then
the total number of bids (B_total) across three groups (G = 3) can be approximated as:

        B_total ∝ M_1 x T_1 + M_2 x T_2 + M_3 x T_3

        M_1 ≈ M_2 ≈ M_3                        (assuming the same group size)

        ∴ B_total ∝ M_1 (T_1 + T_2 + T_3)

        M_1 + M_2 + M_3 = M_total

        => 3 M_1 = M_total

        => M_1 = M_total / 3

        ∴ B_total ∝ (M_total / 3)(T_1 + T_2 + T_3)

and in general:

        B_total ∝ (M_total / G)(T_1 + T_2 + T_3)

This is not an exact count of the bids, but it is sufficient to display the relationship between the
growth in the number of bids and the efficiency of the AI. The time spent calculating in the AI
was recorded by keeping a running total of the amount of time spent in all AI functions. At the
start of each update cycle, the time spent in the AI and the total elapsed game time were stored
in a table along with the number of units in the AI groups, the number of currently running tasks
and the number of groups. This table was then written to a file and reloaded over many runs of
PURE to collect many samples. These times were then plotted in the statistics package SPSS to
determine the line of best fit using linear regression analysis.




  Figure 6.1 Relationship between Time Spent in AI Calculations and Approximate Number of Bids



        There was a significant linear fit of the percentage of time spent in AI against the
approximate growth in the number of bids (Figure 6.1; regression ANOVA F(1,404) = 565.7,
P < 0.001; the approximate number of bids accounts for 58% of the variation in time spent in AI).
The regression equation is: time spent in AI (%) = 0.847 + 0.001 x (approximate number of bids).

        A clear correlation can be seen between the time spent in AI and the approximate number
of bids (calculated using the number of tasks, units and groups). After the alteration to
pre-calculate much of the bid calculations, the AI never used more than 8% of the processing
time, on average, between updates.




6.2 Reduction of Micromanagement

        Giving orders in RTS games generally involves a lot of mouse clicking, and PURE is no
exception. To test the reduction in micromanagement, many games were played with and
without the AI, recording the number of mouse clicks in every minute of the game. A consistent
strategy was used on the same map against two AI opponents. The first ten minutes of each
game showed no reduction in the number of clicks because this time was spent base building,
and there is no use for the group AI during this stage. The comparison in clicks therefore starts
from 10 minutes into the game, when the first attacks were sent against, or received from, the
opposing teams. When the AI was not being used, the aim was to gain equally effective
behaviour through micromanagement of the units in combat.




Figure 6.2 Relationship between Number of Clicks per minute and Elapsed Game Time without Group AI



        Without the AI there is a relationship between the number of clicks and the elapsed game
time (Figure 6.2; regression ANOVA F(1,41) = 12.3, P < 0.05; the elapsed game time accounts for
23% of the variation in the number of clicks per minute). The regression equation is: number of
clicks = -1.59 + 0.95 x (elapsed game time (mins)).

        Under normal conditions there is a clear relationship between the number of mouse clicks
and the number of minutes into the game when the AI is not used. This is because the number
of units in the game and the complexity of attacks increase as the game develops. With the AI in
use this relationship is not seen: the number of clicks does not increase over time, because the
player needs to give far fewer orders to gain effective behaviour from the groups.




 Figure 6.3 Relationship between Number of Clicks per minute and Elapsed Game Time with Group AI



        With the AI there is no relationship between the number of clicks and the elapsed game
time (Figure 6.3; regression ANOVA F(1,45) = 0.22, P > 0.05; the elapsed game time accounts for
1% of the variation in the number of clicks per minute). The regression equation is: number of
clicks = 3.49 + 0.06 x (elapsed game time (mins)). This suggests that the AI successfully prevented
the rise in the burden of micromanagement as the game develops.




6.3 Aesthetics

        The aesthetic appearance of AI behaviour is important in games because it greatly
increases the enjoyability of the game. If units appear to be acting intelligently, the player can
become much more immersed in the game (which is one measure of satisfaction). The units
under the control of the AI unfortunately did not satisfy this condition: the coordination of
movement between units in the group was very haphazard. A more complex handling function
in the group formation task was going to be trialled to reduce this chaotic and unstructured
movement, but without a detailed understanding of the path planning and collision handling in
Spring this proved too difficult to implement in the time available.

        Another aspect of the AI that made unit behaviour seem chaotic and unnatural is that,
when the bids for many tasks were hovering around the same value, units switched rapidly
between them: a unit was reassigned to whichever task's bid became higher than that of the
task it was currently assigned to. This did not significantly reduce their effectiveness for the
simple tasks used in this implementation, but it would probably become an issue if handling
functions became more complex.




7 Discussion and Future Work

        There is a considerable amount of work that could potentially improve and develop AI
assistants for RTS players. Issues with the current AI implementation and its theoretical
structure, along with proposed solutions that could be explored, are covered here. Also covered
in this chapter are the potential for research to build on this system, how the system could be
made more user friendly, and how learning and AI personalities could be implemented as a
feature of game play.


7.1 Reduction in Micromanagement

        The click tests run with and without the AI show that the AI definitely reduces
micromanagement. However, the extent to which the AI would reduce micromanagement for
other players is still in question. The limited selection of tasks implemented was tailored to solve
the micromanagement issues of highest concern for one particular player, and the task
weightings were adjusted to suit that person's playing style. Whether this structure could be
generalised to suit all players is yet to be tested.

        Allowing users of the AI to alter task weightings at run time could greatly improve the
flexibility of the AI. If a player wants a group to scout about but flee at the first sign of battle,
they could raise the flee weighting and reduce the attack weighting. Allowing users to alter the
task weightings would also mean that many more tasks could be implemented without raising
concerns that some potential players (customers) would disapprove. If a player does not want a
group to attack at all, or to remain in any particular location, they could simply reduce the
weighting to 0, which the AI would read to mean that the task need not even be added to the
list for that group.


7.2 Optimizations

        The number of bids increases with the number of tasks and the number of members in
groups. These two variables typically increase together, which causes the number of bids to
grow at a rate similar to a quadratic function: doubling the number of units and tasks increases
the number of bids by a factor of 4. While the AI does not use much computation time in its
current form, to be more useful it would need to contain far more task types. So one of the most
concerning efficiency issues with the AI is that the computation time grows with the number of
bids.

        It may be possible to reduce computation and make behaviour tuning easier by grouping
tasks under larger tasks to form task trees. This would create a small number of large tasks to
which units are assigned, with units then bidding on the lower-level tasks within the larger task
they won. By making a task tree that units traverse by bidding at each level to see which branch
they are assigned to, the growth rate of the bidding calculations could be reduced to something
closer to logarithmic growth.
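
        A minimal sketch of how such a traversal might look is given below. This is future work,
so the node structure is purely illustrative:

    -- Descend the task tree, bidding only among the children of the
    -- winning branch at each level.
    local function assignThroughTree(unit, node)
        if not node.children then
            return node                       -- leaf: a concrete task
        end
        local best, bestBid = nil, -math.huge
        for _, child in ipairs(node.children) do
            local b = child.bid(unit, child.params)
            if b > bestBid then best, bestBid = child, b end
        end
        return assignThroughTree(unit, best)
    end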

        One of the largest recurring calculations in the bids was the distance between a unit and
a location on the map. The distance between two units could be obtained using a function
defined in the game engine, but no function was found that calculated the distance between a
unit and a point. As a result, this calculation had to be done many times in Lua script, which is
much slower than a call to the engine that does the calculation in compiled C. If common
calculations like this, which must be done regularly in the bidding process, were implemented in
the game engine, the AI could run much faster.

        The update cycle was altered to run only once per frame when in play mode, and a rolling
counter was implemented so that only one group was updated in each update cycle. Even with
10 separate groups, this reduced update rate made little difference to the effectiveness of the AI.
With any higher number of groups, however, the reaction times of the units did start to
noticeably slow down. This was not a problem in these tests, as there was never a need for more
than about 5 to 8 groups in the scenario used, but it could become an issue in more complex
scenarios. A more sophisticated update scheduling scheme would have to be implemented to
make sure all groups were updated at a reasonable frequency.


7.3 Behavioural Improvements

        Because task creation in the AI is entirely reactive, there are some situations in which a
group that is formed in the middle of a complex situation behaves poorly. This is due to the
group not creating tasks because the trigger that would have created them happened before the
group was formed. The task that exhibited this most severely was the heal unit task: groups that
were created with members that were already damaged did not create heal unit tasks until those
units received additional damage. There are two possible paths that could be explored to solve
this problem. The first is to perform an initial situation analysis that creates all the currently
relevant tasks for the group as it is being formed. The other is to have the AI create and track a
selection of tasks independent of any group, and pass the relevant tasks to groups as they are
formed. Both strategies would have to be explored to see how computationally intensive they
are and how much they complicate the AI.

        It is important in computer games that actions taken by the AI look intelligent to the user.
In the current implementation, units appear to be too fickle: when task bids are close, they
switch tasks too rapidly. A commitment buffer to prevent units from changing task too quickly
may help, but it would have to be carefully tuned so that units do not commit too strongly.
Additional computation involving the time a unit has been assigned to its current task may be
needed, so that after a given time the unit is no longer committed to that task.
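
        One possible shape for such a commitment buffer is sketched below; the 20% switching
margin and the 10-second commitment window are assumed values, not results from this thesis:

    local COMMIT_MARGIN, COMMIT_TIME = 1.2, 10
    local function shouldSwitch(unit, currentBid, bestOtherBid, now)
        if now - unit.taskAssignedAt > COMMIT_TIME then
            return bestOtherBid > currentBid      -- commitment has expired
        end
        -- While committed, only switch for a clearly better task.
        return bestOtherBid > currentBid * COMMIT_MARGIN
    end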

        The addition of more tasks and improvements to the current task implementations would
make groups more useful:

•   Tasks such as defend could improve behaviour by making units defend other members of
     the group as they get low on health.

•   Instead of a flee task, a "get healed" task would improve behaviour: units would seek out
     healers and only retreat from battle if there is a healer in the group, which is better than
     retreating when there is no benefit in doing so.

•   A construction task could be added that is triggered when a player gives a construction
     order to a member of the group. This could be used to make other builders help with
     construction. There could be the option of setting the bid value low, so that workers flee
     when a threat approaches and return to constructing when the threat has passed. This
     would prevent workers from being killed trying to construct defensive structures, and
     would ensure that even if an attack destroys a half-complete structure it will be rebuilt.

•   A move-in-formation task, similar to the formations seen in Age of Empires 2 (Microsoft,
     1999), would be ideal but is probably worthy of a thesis in itself.

        One of the most serious shortcomings of the system became apparent with military units
that have slow-moving projectile weaponry. The problem was that all the units of the same type
would frequently target a single enemy unit, because that task received the highest bid for all of
them. This resulted in many of the projectiles being wasted, because the target was destroyed
before they arrived, while other enemy units had taken no damage at all. This could be solved
with the addition of task satisfaction quotas: once there are enough units assigned to a task to
fill the quota, no more units are assigned. This would add an extra level of complexity to task
definitions, and devising a method to calculate how much a single unit contributes to the quota
would be difficult. It is more than likely that these complications could be simplified by settling
for a little overkill.
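
        A quota-capped assignment loop might look like the following sketch; how a unit's
contribution should be measured is left open above, so the contribution field here is a
placeholder assumption:

    local function assignWithQuota(task, rankedUnits)
        local filled = 0
        for _, unit in ipairs(rankedUnits) do     -- units sorted by bid
            if filled >= task.quota then break end
            AssignUnitToTask(unit, task)          -- hypothetical helper
            filled = filled + unit.contribution   -- e.g. damage per second
        end
    end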


7.4 AI personalities and learning

        There are many existing learning algorithms that could potentially be applied to games AI
(Baekkelund, 2006a). As the objective here is to reduce micromanagement by providing a low-
level AI working for the player, applying learning was beyond the scope of this thesis, but it was
kept in mind to allow for the implementation of learning in future developments. It is likely that
the bidding equations would lend themselves well to being implemented as simple neural
networks. The problem is creating fitness functions and a sufficiently comprehensive set of
learning scenarios; if a fitness function could be defined, it is likely that a better bidding function
could be developed directly using the knowledge needed to create the fitness function.

        An interesting possibility would be to have the group AI learn from the player. When the
AI does something the player does not approve of, the player will step in and give orders to those
units to correct their behaviour. If a sufficient level of situational awareness could be
implemented to allow the AI to adjust its behaviour to mimic the player's override orders in
similar situations, then the AI would customise itself to the style of the player and become an
invaluable game "companion".

        If this AI were to be worked into a game, it could be used to create "personalities". This
idea is similar to that of Tan & Cheng (2008), who refer to their agents as developing
personalities. These personalities could then be attached to a selection of "officers" that the
player can choose from and assign to groups of units. The personalities would simply be
different configurations of weightings, and possibly a limited selection of tasks, that are applied
to the groups the personality is assigned to. A hot-headed macho character might have an
inflated attack weighting and a reduced flee weighting, while a base constructor might have a
reduced attack weighting and an inflated heal weighting. These AI personalities could add a
unique element to the playing experience.




8 Conclusions

        The Group AI model produced here is effective in reducing the level of micromanagement,
as shown by the number of clicks per minute. The Group AI prevented the level of
micromanagement from increasing as the RTS game became more complex over time. Less
effort was required to coordinate attacks on opponents involving many units, and reducing the
number of repetitive orders that needed to be given to units to achieve effective behaviour
made it easier to focus on monitoring the battle as a whole.

        This exercise has shown that the group AI, with units bidding for tasks, is computationally
viable and can be loosely coupled to the RTS it is implemented in, provided the tasks are kept
fairly simple. As task definitions and bidding functions become more complex, it is likely that
they will become more closely tied to the specific game. A considerable advantage to note is
that task creation is very modular, so while not all task implementations may be applicable in
future games, it is more than likely that many of the tasks could be adapted to different games
with minimal effort.

        The most challenging factor in implementing this AI is the balancing of the bidding
functions. This is also, unfortunately, the section of the architecture that is most closely bound
to the specific game. General bidding formulas could be devised for common tasks, and rules
could be made to guide the creation of new bidding formulas, but balancing a complex web of
task bids could require many hours of testing and tuning. An important aspect to note about
balancing is that not all tasks need to be balanced in relation to one another. Specialist tasks
that only apply to specific unit types need only be balanced with respect to the other tasks
those unit types can take, so specialist tasks for using spells or special abilities can be added
relatively easily.

        The flexibility of this AI system should overcome any fear that it will detract from the
playing experience. The AI only serves to extend the options for unit control. Players are free to
use the AI in the situations where they believe it is most useful, and the ability to override it
means that players can choose to have as much or as little input into the fine details of
individual unit actions. The implementation completed for this thesis shows that the system
was a success and has considerable potential for improvement.




References

Baekkelund, C. (2006a). A Brief Comparison of Machine Learning Methods. In S. Rabin (Ed.), AI Game
Programming Wisdom 3. Boston, Massachusetts: Charles River Media. pp. 617-631.

Baekkelund, C. (2006b). Academic AI Research and Relations with the Games Industry. In S. Rabin (Ed.), AI
Game Programming Wisdom 3. Boston, Massachusetts: Charles River Media. pp. 77-88.

Blizzard Entertainment. (1994). Legacy. Retrieved October 24, 2009, from Blizzard Entertainment:
http://us.blizzard.com/en-us/games/legacy/

Bowling, M., Furnkranz, J., Graepel, T., & Musick, R. (2006). Machine learning and games. Machine
Learning, pp. 211-215.

Buro, M. (2009). ORTS. Retrieved October 24, 2009, from http://www.cs.ualberta.ca/~mburo/orts/

Buro, M., & Furtak, T. M. (2004). RTS Games and Real–Time AI Research. University of Alberta, Edmonton,
Computing Science.

Cerpa, D. H. (2008). A Goal Stack Based Architecture for RTS AI. In S. Rabin (Ed.), AI Game Programming
Wisdom 4. Boston, Massachusetts: Charles River Media. pp. 457-466.

Cerpa, D. H., & Belleiro, J. (2008). An Advanced Motivation-Driven Planning Architecture. In S. Rabin (Ed.),
AI Game Programming Wisdom 4. Boston, Massachusetts: Charles River Media. pp. 373-382.

Cornelius, R., Stanley, K. O., & Miikkulainen, R. (2006). Constructing Adaptive AI Using Knowledge-Based
NeuroEvolution. In S. Rabin (Ed.), AI Game Programming Wisdom. Boston, Massachusetts: Charles River
Media. pp. 693-708.

Earl, M. G., & D'Andrea, R. (2007). A decomposition approach to multi-vehicle cooperative control.
Robotics and Autonomous Systems, 55 (4), pp. 276-291.

Erol, K., Hendler, J., & Nau, D. S. (1996). Complexity results for HTN planning. Annals of Mathematics and
Artificial Intelligence, 18, pp. 69-93.

Gares, S. (2006). Extending Simple Weighted-Sum Systems. In S. Rabin (Ed.), AI Game Programming
Wisdom 3. Boston, Massachusetts: Charles River Media. pp. 331-339.

Grindle, C., Lewis, M., Glinton, R., Giampapa, J., Owens, S., & Sycara, K. (2004). Automating Terrain
Analysis: Algorithms for the Intelligence Preparation of the Battlefield. Human Factors and Ergonomics
Society 48th Annual Meeting. New Orleans, Louisiana, USA. pp. 533-537.

Hagelbäck, J., & Johansson, S. J. (2008). Using Multi-agent Potential Fields in Real-time Strategy Games.
Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, 2.
Estoril, Portugal: International Foundation for Autonomous Agents and Multiagent Systems. pp. 631-638.

Ierusalimschy, R., Celes, W., & De Figueiredo, L. H. (2008). Lua Authors. Retrieved October 22, 2009, from
http://www.lua.org/authors.html

McPartlan, M. (2008). A Practical Guide to Reinforcement Learning in First-Person Shooters. In S. Rabin
(Ed.), AI Game Programming Wisdom. Boston, Massachusetts: Charles River Media. pp. 671-683.

Munoz, H., & Hoang, H. (2006). Coordinating Teams of Bots with Hierarchical Task Network Planning. In S.
Rabin (Ed.), AI Game Programming Wisdom 3. Boston, Massachusetts: Charles River Media. pp. 417-427.

Nilsson, N. J., & Fikes, R. E. (1970). STRIPS: A New Approach To The Application of Theorem Proving to
Problem Solving. Stanford Research Institute, Menlo Park, California.

Nvidia. (2009). Physx. Retrieved October 25, 2009, from Nvidia:
http://www.nvidia.com/object/physx_new.html

Orkin, J. (2004). Applying Goal-Oriented Action Planning to Games. In S. Rabin (Ed.), AI Game
Programming Wisdom. Boston, Massachusetts: Charles River Media. pp. 217-227.

Pittman, D. (2008). Command Hierarchies Using Goal-Oriented Action Planning. In S. Rabin (Ed.), AI Game
Programming Wisdom 4. Boston, Massachusetts: Charles River Media. pp. 383-391.

Ponsen, M., Spronck, P., Muñoz-Avila, H., & Aha, D. W. (2007). Knowledge Acquisition for Adaptive Game
AI. Science of Computer Programming, 67 (1), pp. 59-75.

Sailer, F., Lancot, M., & Buro, M. (2008). Simulation-Based Planning in RTS Games. In S. Rabin (Ed.), AI
Game Programming Wisdom 4. Boston, Massachusetts: Charles River Media. pp. 405-418.

Smith, R. G. (1977). The CONTRACT NET: A Formalism for the Control of Distributed Problem Solving.
Proceedings of the 5th International Joint Conference on Artificial Intelligence. Cambridge, MA.

Spronck, P. (2006). Dynamic Scripting. In S. Rabin (Ed.), AI Game Programming Wisdom 3. Boston,
Massachusetts: Charles River Media. pp. 661-675.

Spronck, P., Ponsen, M., Sprinkhuizen-Kuyper, I., & Postma, E. (2006). Adaptive game AI with dynamic
scripting. Machine Learning, 63 (3), pp. 217-248.

Spronck, P., & Ponsen, M. (2008). Automatic Generation of Strategies. In S. Rabin (Ed.), AI Game
Programming Wisdom 4. Boston, Massachusetts: Charles River Media. pp. 659-670.

Stene, S. B. (2006). Artificial Intelligence Techniques in Real-Time Strategy Games - Architecture and
Combat Behavior. Norwegian University of Science and Technology, Computer and Information Science.
Institutt for datateknikk og informasjonsvitenskap.

Tan, C. T., & Cheng, H.-l. (2008). A Combined Tactical and Strategic Hierarchical Learning Framework in
Multi-agent Games. 2008 ACM SIGGRAPH symposium on Video games. Los Angeles, California: ACM. pp.
115-122.

Technodynamic. (2008). Herzog Zwei. Retrieved October 24, 2009, from Technodynamic:
http://www.technodynamic.com/Herzog.html

The Spring Community. (2009). The Spring Project. Retrieved October 24, 2009, from
http://springrts.com/



The Wargus Team. (2007). Wargus. Retrieved October 24, 2009, from http://wargus.sourceforge.net/

Thomas, D. (2004). New Paradigms in Artificial Intelligence. In S. Rabin (Ed.), AI Game Programming
Wisdom 2. Boston, Massachusetts: Charles River Media. pp. 29-39.

Van Lent, M., Carpenter, P., McAlinden, R., & Tan, P. G. (2004). A Tactical and Strategic AI Interface for
Real-Time Strategy Games. 2004 AAAI Workshop: Challenges in Game Artificial Intelligence.

Van Lent, M., Fish, W., & Mancuso, M. (n.d.). An Explainable Artificial Intelligence System for Small-unit
Tactical Behaviour. IAAI Emerging Applications. pp. 900-907.

Whizzer. (2002). History of RTS Games. Retrieved October 24, 2009, from Planet Command and Conquer:
http://planetcnc.gamespy.com/View.php?view=Articles.Detail&id=524#page_index2.shtml

Yue, B., & deByl, P. (2006). The State of the Art in Game AI Standardisation. 2006 International Conference
on Game Research and Development. Perth, Australia: Murdoch University. pp. 41-46.



