Designing AI (character behaviour)

Game Design
Vishnu Kotrajaras, PhD
Challenge the players

• Shooting games: overwhelm the player with opponents
  – The AI can be easy; the sheer numbers will provide the challenge anyway
• First-person shooters:
  – The player can run out of ammo and cannot see in the dark (enemies do not have these disadvantages)
  – Also, there are lots of enemies
  – Therefore, each enemy does not have to be smart
  – The player will still be challenged at this difficulty
Challenge the players (2)

• RTS: the AI has to do everything the player does, and it has to seem smart
• We can help it a bit:
  – The AI can see the entire map
  – The AI has a larger number of starting units
  – The AI has a bigger pool of resources, or an easier way to get them
Challenge the players (3)

• Turn-based strategy games:
  – The most difficult genre to make AI for
  – Chess!
  – Mostly, the AI cheated a bit…
Do not do stupid things

• Do not just fall off a cliff… err… that's Lemmings
• If the enemies are supposed to be human, make them clever
  – If you can't make them clever, just make them look like robots!
• Different games have different acceptable levels of intelligence
  – In Metal Gear Solid, enemies are clever but their eyesight is extremely short
  – But if they had the same viewing range as you, the game would be too difficult
Be unpredictable

• Players want the AI to try to defeat them in ways they never think of
• Of course, games can also surprise players with the story, bosses, or the next level
• AI adds replayability
• Use random numbers and probabilities (weights) for choosing among possible actions (see the sketch below)
  – This is fuzzy decision making
  – Some choices of action may seem stupid, but real humans do such things when panicked
  – The player will be the one who thinks the AI is clever
• Unpredictability must not break other AI goals
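A minimal sketch of weighted random action selection as described above; the action names and weights are made up purely for illustration:

    import random

    # Hypothetical actions and weights, purely for illustration.
    actions = {
        "attack": 50,
        "take_cover": 30,
        "flee": 15,
        "taunt": 5,   # occasionally looks "stupid", which reads as panic
    }

    def choose_action(actions):
        # Pick one action at random, with probability proportional to its weight.
        return random.choices(list(actions), weights=list(actions.values()), k=1)[0]

    print(choose_action(actions))

Tuning the weights shifts behaviour without making it predictable, which is exactly the fuzzy decision making the slide describes.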
Assist Storytelling

• For example, NPCs in a particular town run away from strangers
  – This makes the player want to investigate
• The AI of teammates helps tell the story of each character
  – Through actions and speech
• Dynamic storytelling
  – Players can influence an NPC's mood
  – Actions somewhere else in the world may affect his attitude towards the player
Create a living world

• Add life that does not directly interact with players
• Ambient life
  – Birds
  – Cats
  – Dogs
Should the AI not have any advantage?

• The AI may not be challenging anymore, because humans are naturally more clever
• Programmers would spend too much time on the AI
• Remember, the goal is not the use of AI techniques, but the challenge the player gets
• Players love to beat tons of enemies; it feels great. Therefore a few clever enemies may not be good enough
Too real?

• AI that runs away when wounded
  – If the AI is faster, or navigates better, the player will never catch it
  – Now it is a chase game, an endless chase
• AI that always covers its weak points when the player shoots
  – No way to win!
AI and levels

• They must be developed together
• AI that works on every level is hard
• The problem is, you may have two teams, one working on AI and the other on levels
  – They must work together
  – Create simple AI that works in some situations
  – Then the level designers create a level based on this simple AI
  – Keep looking at each other's work and communicate
  – Both must adjust their work
  – Levels may have to sacrifice some coolness in order for the AI to work well
    • The AI may not be able to swim, so the level must not have a river
    • Must prevent NPCs getting stuck at corners
Good enough?

• Teammate AI
  – Don't fail the player
  – Especially, don't fail when the player needs it the most
Scripting

• Use dynamic AI together with pre-made scripts
• Half-Life
  – Uses scripted paths (rule-based behaviour)
  – The AI does things depending on the situation (reacting)
  – Testers are the ones who work out what the AI should do
Dynamic scripting

• Maintain several rule bases, one for each opponent type
• Each time a new opponent is encountered, rules are extracted from the rule base of that creature type (see the sketch below)
• The probability that a rule is selected depends on a weight value associated with each rule
• The weights of all rules are updated after the completion of an encounter
• The order in which the rules are placed depends on the application
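A minimal sketch of weight-proportional rule extraction, assuming a simple Rule record; the names are illustrative, not Spronck's actual code:

    import random
    from dataclasses import dataclass

    @dataclass
    class Rule:
        name: str
        condition: object = None   # callable(world) -> bool; None means "always true"
        action: object = None      # callable(world) -> None
        weight: float = 100.0      # every weight starts at 100 in the trials
        priority: int = 0

    def extract_script(rulebase, size):
        # Draw `size` distinct rules; selection probability is proportional to weight.
        pool = list(rulebase)
        script = []
        for _ in range(min(size, len(pool))):
            total = sum(r.weight for r in pool)
            pick = random.uniform(0, total)
            cumulative = 0.0
            for rule in pool:
                cumulative += rule.weight
                if pick <= cumulative:
                    script.append(rule)
                    pool.remove(rule)
                    break
        return script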
Advantages of dynamic scripting

• Fast
  – Extraction of prepared rules from a rule base
  – Weights are updated only once per encounter
• Still intelligent
  – Every rule comes from the prepared rule bases
• Robust
  – Rules are not removed immediately when punished
Example of dynamic scripting

• Two opposing parties encounter each other (AI and player)
• Each party has 2 fighters and 2 wizards of the same level
  – Weapons are static
  – Characters can select 2 types of potion
  – Wizards know 7 spells
The dynamic scripting language

• A rule is an optional conditional statement plus an action
• Conditional statement
  – One or more conditions
  – Combined with logical AND or OR
• Condition examples
  – The distance separating characters
  – A character's health
  – Spell effects currently on characters
• Actions (5 actions in this experiment)
  – Attack an enemy
  – Drink a potion
  – Cast a spell
  – Move
  – Pass the turn
The dynamic scripting language (2)

• Rules can be specific
  – Cast the spell "magic missile" at the closest enemy wizard
• Or general
  – Cast any offensive spell at a random enemy
• Or somewhere in between
  – Cast the strongest damaging spell available at the weakest enemy
• Rules are executed in sequential order (a rule's condition must also be satisfied)
• If, at the end of all chosen rules, no action has been selected, passing the turn is chosen as the default action
• Rules are chosen for the script according to their corresponding weights
• Chosen rules are ordered according to their priority; if they have equal priority, they are ordered by weight (see the sketch below)
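A minimal sketch of script ordering and execution under the rules above, reusing the hypothetical Rule record from the earlier extraction sketch and assuming higher priority and higher weight come first:

    def order_script(script):
        # Higher priority first; equal priorities are ordered by higher weight.
        return sorted(script, key=lambda r: (r.priority, r.weight), reverse=True)

    def run_turn(script, world):
        # Execute the ordered rules; the first rule whose condition holds acts.
        for rule in order_script(script):
            if rule.condition is None or rule.condition(world):
                if rule.action is not None:
                    rule.action(world)
                return rule.name
        return "pass turn"   # default action when no rule fires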
Total rules in the Baldur's Gate experiment

• Fighters
  – 20 rules in total
  – The program chooses 5 rules to form a script
• Wizards
  – 50 rules in total
  – 10 rules are chosen to form the script
The weight update function

• Party fitness
  – Range 0..1
  – 0 if the party has lost the fight
  – Otherwise, 0.5 plus half the average remaining health fraction of all party members:
    F(p) = 0                                     if all n ∈ p have h(n) ≤ 0 (the party lost)
    F(p) = 0.5 + 0.125 * Σ_{n ∈ p} h(n)/mh(n)    otherwise

where h(n) is the health of character n at the end of the fight and mh(n) is the health of n at the start of the fight.
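A direct translation of the party fitness above into code; h and mh map each member to end-of-fight and start-of-fight health, as defined above:

    def party_fitness(h, mh):
        # h and mh map each party member to end-of-fight and start-of-fight health.
        if all(h[n] <= 0 for n in h):          # the whole party is dead: it lost
            return 0.0
        return 0.5 + 0.125 * sum(h[n] / mh[n] for n in h)

    # Example: a 4-member party with two survivors at half health.
    h  = {"f1": 20, "f2": 0, "w1": 10, "w2": 0}
    mh = {"f1": 40, "f2": 35, "w1": 20, "w2": 20}
    print(party_fitness(h, mh))   # 0.5 + 0.125 * (0.5 + 0 + 0.5 + 0) = 0.625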
• Character fitness (see the sketch below)
  – Range 0..1
  – Based on
    • The average remaining health of the party
    • The average damage done to the opposing party
    • The remaining health of the character (if the character dies, we use the time of death instead)
    • Party fitness
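The slides list the ingredients of character fitness but not their exact weighting, so this sketch combines the four components with hypothetical equal weights, purely for illustration:

    def character_fitness(own_health_frac, party_health_frac, damage_frac, party_fit):
        # If the character died, own_health_frac would be replaced by a term
        # based on its time of death (next slide). Equal weights are made up.
        components = [own_health_frac, party_health_frac, damage_frac, party_fit]
        return sum(components) / len(components)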
Time of death

• A large reward for party victory
• A smaller reward for the individual's survival
• An even smaller reward for friends' survival and the damage they did to the enemy
The actual weight update function

• The weight change is translated from character fitness (see the sketch after the formula)
• Only the rules that were executed are rewarded or penalized
    ΔW = -P_max * (b - F) / b          if F < b
    ΔW =  R_max * (F - b) / (1 - b)    if F ≥ b

where P_max is the maximum penalty, R_max the maximum reward, b the break-even point, and F the character fitness. Weights are kept within [0, W_max], where W_max is the maximum weight value.
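A sketch of the update above, using the trial parameters from the next slide (maximum penalty 30, maximum reward 100, maximum weight 2000). The break-even point must lie strictly between 0 and 1 for the penalty branch, so the example call assumes b = 0.3; the lower clamping bound of 0 is also an assumption:

    def update_weight(weight, fitness, b, p_max=30, r_max=100, w_max=2000):
        # Reward or penalize one executed rule, based on character fitness F.
        if fitness < b:
            delta = -p_max * (b - fitness) / b        # maximum penalty at F = 0
        else:
            delta = r_max * (fitness - b) / (1 - b)   # maximum reward at F = 1
        # Clamp to [0, w_max]; the lower bound of 0 is an assumption.
        return max(0.0, min(w_max, weight + delta))

    print(update_weight(100, 0.625, b=0.3))   # rewarded: about 146.4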
In the trials

• MP = 30 (maximum penalty)
• MR = 100 (maximum reward)
• MW = 2000 (maximum weight value)
• b = 0
• All weights start at 100
• Static scripts to fight against
  – These must be changed over time (to reflect a player's change in strategy)
  – Offensive
    • The fighters attack the nearest enemy
    • The wizards use their most powerful spells to attack the weakest enemy
  – Disabling
    • The fighters first drink a potion that prevents status effects, then attack the nearest enemy
    • The wizards use all spells that can disable enemies for a few rounds
  – Cursing
    • The fighters attack the nearest enemy
    • The wizards try all spells that harm the enemy
  – Defensive
    • The fighters first drink a potion that reduces damage, then attack the nearest enemy
    • The wizards use all defensive spells
Composite tactics

• Random party tactics
  – Select 1 of the 4 basic tactics at each encounter
• Random character tactics
  – Each character selects a tactic independently
• Consecutive party tactics
  – Continue using the same party tactic while it still wins
  – This most resembles human tactics
Test run

• Basic tactics
  – 21 tests; each test runs until we find
    • The average turning point (the first encounter from which dynamic scripting wins 10 consecutive encounters, averaged over the tests)
    • The absolute turning point (the first encounter after which the consecutive losses of dynamic scripting never exceed its consecutive wins)
(Results table omitted.) Notes from the results:
• The average is much higher than the median, due to rare occurrences of very high turning points (early divergence makes it hard to unlearn)
• Some tactics were easy to defeat; even the hardest tactic to defeat was still eventually learned and beaten
http://www.cs.unimaas.nl/p.spronck/
Current NPCs

• They do things only in the area where the player is active
• Static agents
  – Wait for the player to interact with them
  – Or play through a scripted sequence
• Proactive Persistent Agents (proposed in the paper)
  – Have needs, beliefs, and desires of their own
  – Can initiate actions on their own
  – They are modeled regardless of the player's location in the game
PPA requirements

• Believable
• Configurable
• Scalable
  – Only 1 or 2 PPAs are enough; the rest can remain old-style static agents
• Able to switch mode
Current agent architectures

• Reactive (like if-then-else), mostly implemented with FSMs
  – Advantages
    • Very fast
    • Simple to program and understand
    • Completely under control
    • Requires little support from the infrastructure
  – Disadvantages
    • Must try to code every possible situation the agent will encounter
    • Stupid
• Deliberative
  – Builds a model of the world
  – Forms plans to achieve goals
  – Prolog-like
  – But
    • May be very slow to decide things
    • Requires constant maintenance of a knowledge base
      – Cannot be used in a fast-changing game
• Hybrid agents (see the sketch below)
  – Use the reactive layer for time-critical behaviour, such as avoiding collisions
  – Long-term planning can use the deliberative system
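A minimal sketch of the hybrid idea: a reactive check runs every tick, and a deliberative planner is consulted only when there is no emergency. The world and planner interfaces are illustrative assumptions:

    class HybridAgent:
        def __init__(self, planner):
            self.planner = planner   # deliberative layer: slow, long-term planning
            self.plan = []

        def tick(self, world):
            # Reactive layer: time-critical behaviour always wins.
            if world.collision_imminent():
                return "dodge"
            # Deliberative layer: replan only when the current plan is exhausted.
            if not self.plan:
                self.plan = self.planner.make_plan(world)
            return self.plan.pop(0)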
PPA requirements in detail

• Autonomy
• Social ability (human-like)
• Reactivity
• Pro-activeness
• Persistence
• Scalability
• Extendability
• Believability
• Configurability
Proposed architecture

• The system is composed of
  – A behaviour system
  – A social system
  – A goal-based planner
  – A schedule
(Architecture diagram omitted.) Its labels: what the agent experiences in the world; ordinary behaviour; human-like communication and relationships between characters; a task for an agent at a particular time.
• Sweetser's PhD thesis
• Intelligent AI, adaptable to the game environment in a strategy game
• Each cell on the map has low-level properties (physics)
  – Heat capacity, ignition point, and burning rate
  – Wetness and fluid flow
  – Pressure flow
• An explosion on the map will now cause natural fire, and a river realistically blocks the fire
• An object has (see the sketch after this list)
  – Low-level properties, just like a cell (an object is treated as a cell, but has only 1 adjacent cell, namely the cell it is in)
  – High-level properties
    • Structure: is_open, has_volume
    • So an object can contain water, pressure, etc. realistically
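A sketch of how cells and objects might share the low-level physics properties while objects add the high-level structure flags; the defaults are illustrative, and only is_open and has_volume come from the slide:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Cell:
        # Low-level physics properties shared by cells and objects.
        heat_capacity: float = 1.0     # illustrative default values
        ignition_point: float = 300.0
        burning_rate: float = 0.0
        wetness: float = 0.0
        pressure: float = 0.0

    @dataclass
    class GameObject(Cell):
        # High-level structure properties from the slide.
        is_open: bool = False
        has_volume: bool = False
        # An object is treated as a cell with one adjacent cell: the one it is in.
        location: Optional[Cell] = None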
• An agent
  – Has low- and high-level properties like an object, but can move
  – Calculates a comfort value for its current cell and its surrounding cells (see the sketch below)
    • < 0.1: comfortable
    • 0.1–0.3: slowly move to a more comfortable cell
    • 0.3–0.6: distressed, run from the cell
    • > 0.6: panic and run
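A direct mapping of the thresholds above to behaviour; the returned behaviour names are illustrative:

    def react_to_comfort(comfort):
        # Thresholds from the slide; the behaviour names are illustrative.
        if comfort < 0.1:
            return "stay"                  # comfortable
        elif comfort <= 0.3:
            return "walk_to_better_cell"   # slowly move to a more comfortable cell
        elif comfort <= 0.6:
            return "run_from_cell"         # distressed
        else:
            return "panic_run"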
(Diagrams omitted: neighborhood size = 1 and neighborhood size = 2.)
Agent experiments

• Exp. 1: neighborhood size
  – Size 1: the agent can avoid immediate danger, but gets stuck in a larger hazard, or runs into another danger
  – Size 3: picks good destinations; more agents move to the same destination
    • But they run through hazards on the way
• Exp. 2: optimising agent navigation (see the sketch below)
  – First select a goal, using neighborhood size 3
  – Then evaluate the cells in the immediate neighborhood, using the comfort value and the distance to the goal
    • Comfort 50%, distance 50%: agents appear intelligent
    • Comfort 25%, distance 75%: agents still run through hazards
    • Comfort 75%, distance 25%: agents appear stupid, moving randomly
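A sketch of the weighted cell scoring in Exp. 2. The comfort value is treated as something to minimise (higher means more dangerous, per the thresholds above), and the helper names and tuple layout are made up:

    def cell_score(comfort, dist_to_goal, max_dist, w_comfort=0.5, w_dist=0.5):
        # Lower is better: both danger (the comfort value) and normalised
        # distance to the chosen goal count as penalties.
        return w_comfort * comfort + w_dist * (dist_to_goal / max_dist)

    def pick_next_cell(neighbors):
        # neighbors: list of (cell, comfort, dist_to_goal) tuples.
        max_dist = max(d for _, _, d in neighbors) or 1
        return min(neighbors, key=lambda n: cell_score(n[1], n[2], max_dist))[0]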
• Exp. 3: combining comfort and desire
  – Propagate the desire out from the goal position, with a factor of 0.7 (see the grid and sketch below)
                  4.9
           7       7       7
    4.9    7      10       7    4.9
           7       7       7
                  4.9

Desirability 10 at the goal cell decays by a factor of 0.7 per step: 7 in the surrounding cells, 4.9 two steps away.
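A minimal sketch of the propagation: desirability spreads outward from the goal, multiplied by the factor per step. Using Chebyshev distance reproduces the 10 / 7 / 4.9 rings in the grid above; the grid dimensions are made up:

    def propagate_desire(goal, width, height, strength=10.0, factor=0.7):
        # Desirability decays by `factor` per step away from the goal; using
        # Chebyshev distance reproduces the 10 / 7 / 4.9 rings shown above.
        gx, gy = goal
        return {(x, y): strength * factor ** max(abs(x - gx), abs(y - gy))
                for x in range(width) for y in range(height)}

    grid = propagate_desire(goal=(2, 2), width=5, height=5)
    print(grid[(2, 2)], grid[(2, 1)], grid[(2, 0)])   # ≈ 10, 7, 4.9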
• Next, the agent calculates comfort values for neighborhood size 3
• The agent selects the best cell to move towards, based on desirability and comfort
  – 50% desire, 50% comfort
  – 75% desire, 25% comfort
  – 25% desire, 75% comfort
• The agent then chooses a cell to move to in its immediate neighborhood (comfort 50%, closeness 50%)
• Results
  – 50% desire, 50% comfort
    • Best, but only about half the agents found the goal, as they opted for comfort over the goal
  – 75% desire, 25% comfort
    • Appeared less intelligent, moving randomly. Only about half the agents found the goal. Why?
  – 25% desire, 75% comfort
    • Took more cycles to converge. No significant difference in the number of agents that found the goal, compared to the previous results
• Increasing the propagation constant to 0.8 improves the agents
  – Agents can wait for hazards to pass before moving on
  – But the agents were still not always able to find the goal
  – Why?
Answer by Sweetser

• It's not that the agents don't "survive"; rather, they stop in a cell and do not reach the goal
• This means that they cannot see a better cell near them to move towards, and thus stay in their current location
• The reason more agents don't reach the goal is that they are weighing both the dangers of the environment and the attempt to reach the goal
• There are also lots of dangers for the agents to face on the map
• So more agents would reach the goal with a higher desire weighting, or if the map were less dangerous
• "I think it would be interesting to include some real-time adaptation for the agents, and certainly basing their decisions on their attributes would make sense."
• Last experiment: multiple goals
  – Agents were much better at finding goals
    • But maybe only because there are more goals within the same physical area
    • Or because the desirability of many goals accumulates, allowing the influence to propagate further
  – But they did not appear as intelligent or realistic
• What has not been done in this thesis
  – An agent can run away, but it does not use the environment any more than that, such as burning forests or using water to put out fire (there are no moving objects in Sweetser's thesis)
  – Adding objects that can be moved between cells (by agents) may be the next step
  – An agent sees only cells, not the objects in a cell
  – It would be better if agents could neutralize hazards

				