Modeling Teamwork as Part of Human-Behavior Representation
Thomas R. Ioerger
Department of Computer Science
Texas A&M University
Funding provided by: Army Research Lab, Aberdeen, MD.
Teamwork is an important but often over-looked component of human behavior representation (HBR).
Many of the activities in modern military combat operations involve teamwork. Teams exist both within
well-defined units and across echelons. For example, a command staff at battalion and
brigade levels consists of multiple officers working together to help the commander make decisions, and
interactions and coordination between officers in the same staff section (e.g. G2/S2-Intelligence) at different
levels can also be characterized as teamwork. At the lowest level, an infantry platoon, reconnaissance squad,
or even tank crew works as a team to achieve a variety of tactical objectives. At the highest level,
combined-arms operations and joint operations rely on integrating various assets and capabilities for
maximum effectiveness. Teamwork is also needed to avoid unfortunate outcomes such as friendly fire
accidents. Multiple documented instances of fratricide have been attributed to breakdowns in sharing of
information and coordination of authority across different units working independently, even though all the
information necessary to avoid the tragedy was available (Snook, 2002). In this paper, we provide an
overview of current research on teamwork, with a focus on aspects relevant to human behavior
representation in simulations (especially in the domain of military combat). We also discuss the use of
intelligent agents for modeling teamwork in these simulations, and we identify challenges for future research.
What are Teams? What is Teamwork?
Teams are more than just a collection of individuals pursuing their own goals. A commonly accepted
definition of teamwork is a collection of (two or more) individuals working together inter-dependently to
achieve a common goal (Salas et al., 1992). The structure of a team may range from rigid, with clearly
defined roles and a hierarchical chain-of-command, to flexible, where individuals all have similar
capabilities, tasks are allocated flexibly to the best available team member, and decisions are made jointly by
consensus. While some teams are formed and exist only transiently, other teams are more persistent,
training together and operating over an extended duration to solve a series of problems or perform many tasks.
The notion of shared goals is essential to teamwork because it is what ties the team together and induces
them to take a vested interest in each other’s success, beyond acting in mere self-interest. Members of a
team do not just act to achieve their own goals, possibly at the expense of others, but rather they look for
synergies that can benefit others and contribute to the most efficient overall accomplishment of the team
goal. In addition to this positive cooperativity, members of a team also have incentive to actively try to
avoid interfering with each other. Furthermore, commitment to shared goals leads to other important team
behaviors, such as backing each other up in cases of failure. For example, if one team member assigned to
do a task finds that he is unable to complete it, other members of the team are willing to take over since they
ultimately share the responsibility. This produces a high level of robustness (fault tolerance) in teams.
Generalizing the argument above, teamwork relies centrally on the concept of mutual awareness. Mutual
awareness involves not just knowledge of shared goals, but other static information too, like the structure of
the team (e.g. who is playing what role) and what the mission objective and plan for achieving it is, as well
as transient information, such as current task assignments, achievement status of intermediate goals (for
maintaining coordination), dynamic beliefs about the environment relevant to decision points, what the
situation is, resource availability, and so on. To operate effectively, a team must maintain an on-going
dialogue to consistently exchange this information, reconcile inconsistencies, and develop a “common
picture.” This mutual awareness is often described as a “shared mental model” (Rouse et al., 1992; Cannon-
Bowers et al., 1993) in the team psychology literature, and fostering the development and acquisition of a
shared mental model among team members is the target of specific training methods such as cross training
(Volpe et al., 1996; Blickensderfer et al., 1998; Cannon-Bowers et al., 1998).
These drivers of teamwork have important behavioral consequences that make a team and its
performance more than just the sum of its parts. Beyond synergy and coordination, teams can also
generate novel solutions to problems that individuals could not produce alone. Through internal activities
such as load-balancing, teams can flexibly respond to changes in the environment. In fact, adaptiveness
(including re-allocation of resources as necessary, and even re-configuration of the team structure) is often
taken as a sign of the most effective teams (Klein and Pierce, 2001). Hence these behaviors are important to
try to simulate to get the most realistic performance out of synthetic teams in constructive simulations.
However, accomplishing this relies a great deal on communication among the team members. They must
communicate to distribute or assign tasks, update status, seek help, and maintain coordination. Furthermore,
communication is needed to exchange information and make decisions. Effective teams combine
information from multiple sources distributed across multiple sensors to synthesize a
common operational picture and assessment of the situation, allowing an appropriate, coordinated response.
This is what constitutes the “team mind,” a hypothetical cognitive construct that emerges from the team and
makes its behaviors appear as if they were under centralized, unified control (Klein, 1999).
Teams are of course found in many application domains in addition to military combat. Examples include
sports (football, soccer), chess, fire-fighting, urban crisis management and emergency response, hospital
care (nursing, ICUs), business, manufacturing, aviation (air-traffic control, cockpit crews), etc.
Relationship between Teamwork and Command-and-Control
Teamwork is often associated with command-and-control (C2). Historically, C2 has been seen as a
hierarchical process of commanders directing their subordinates on the battlefield (though generalized
command-and-control also has many non-military applications as well). However, more recently there has
been an increasing appreciation of the distributed nature of information collection, often done by a staff in
communication with various Recon elements in the field that supports decision-making. Often decisions
must be coordinated laterally between multiple adjacent units involved, and occasionally there is a need to
push decisions further down to smaller units closer to the battle, who have a better sense of tactical
opportunities and consequences of actions. Hierarchical command is now even viewed by some as
inflexible and sub-optimal. It was previously necessary for maintaining control in chaotic environments, but
is no longer so clearly necessary with the advent of more powerful C3 networks and information technology,
enabling instantaneous consultation and coordination over a distance. See further discussion in the report
“The Command Post is Not a Place” (Gorman, 1980).
Command-and-control is a complex topic in its own right (Drillings and Serfaty, 1997). In a military
context, C2 can be defined as the control of (spatially) distributed assets (weapons and sensors) in the most
effective way to achieve tactical goals, which in the case of ground combat involves containing, attacking,
defending, clearing, or denying enemy access to areas of 2D terrain (including assets on it, such as towns,
airstrips, communication towers, ports, etc.)
Models of C2 typically decompose the tactical decision-making process into two major phases: situation
assessment, and then development and execution of a suitable response. This decomposition reflects the
Naturalistic Decision-Making (NDM) paradigm (Zsambok and Klein, 1997), in which the decision process
is simplified to distinguishing one of a finite number of general situation types (such as being flanked,
enveloped, bypassed, etc.), for which a prototypical response can be applied. Though not necessarily
optimal, this approach avoids having to generate novel responses from scratch by planning from first-
principles; the situations represent cases the commander is familiar with and can draw appropriate responses
from training or experience.
One of the best known NDM models of C2 is the Recognition-Primed Decision-Making (RPD) model
(Klein, 1993, 1997). According to this model, the C2 process consists of a series of stages, beginning with:
1) information gathering and situation assessment
2) detection or identification of the situation as one of a small number of expected “types”
3) proposal of a solution (some appropriate response drawn from experience or practice)
4) evaluation and refinement of the solution by projection of consequences (how the situation is
expected to develop) and events into the near future (via “mental simulation”)
5) execution of the response and continued monitoring of the situation to ensure it proceeds as expected
While the most tangible and visible aspects of C2 are the decisive actions taken in response, the
success of the C2 operation relies heavily on the situation assessment process that precedes the
recognition of the situation and decision on a response. Correctly and quickly identifying the situation is of
utmost importance to the outcome. Therefore, most research has focused on the first aspect of the tactical
decision-making process. Situation assessment involves information gathering and uncertainty reduction
(Schmitt and Klein, 1996). These activities are prominent in the early phases of C2. Endsley (1995)
characterizes the generation of situation awareness as a process consisting of three incremental stages,
starting with perception of factual information about the environment, moving toward comprehension of the
situation as a whole (interpretation of patterns and causes), and finally appreciation of the consequences of
the situation, including projection of future events and impact on one’s goals.
The RPD model of C2 makes a commitment to modeling situation assessment specifically as a feature-
matching process (Klein, 1993). It is claimed that situations are represented by lists of features or cues
associated with them, and that commanders actively look for these features in the environment. The features
may have different weights based on relevance to various situations. Once a sufficient number of features
has been detected for one of the situations, the RPD model predicts that the commander commits to the
identification and triggers the process of developing a response based on it. This satisficing approach,
characteristic of the NDM paradigm, contrasts with use of more precise probabilistic models (e.g. Bayesian)
or in-depth evaluation of alternatives, but is supported by many studies of human tactical decision-making,
especially under constraints of time pressure.
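The feature-matching account can be made concrete with a small sketch. Everything scenario-specific below (situation names, cues, weights, and the satisficing threshold) is an invented illustration, not part of the published RPD model:

```python
# Hedged sketch of RPD-style feature matching: each candidate situation is
# represented by weighted cues, and the commander commits to the first
# situation whose accumulated evidence crosses a satisficing threshold,
# rather than exhaustively comparing alternatives.
# Situation names, cues, weights, and threshold are illustrative assumptions.

SITUATIONS = {
    "flanked":  {"contact_on_flank": 0.5, "frontal_pressure": 0.25, "radio_silence": 0.25},
    "bypassed": {"enemy_in_rear": 0.75, "no_frontal_contact": 0.25},
}

def assess(observed_cues, threshold=0.5):
    """Return (situation, score) for the first situation whose weighted cue
    evidence meets the threshold (satisficing), or None if nothing matches."""
    for name, cues in SITUATIONS.items():
        score = sum(weight for cue, weight in cues.items() if cue in observed_cues)
        if score >= threshold:
            return name, score   # commit immediately; no comparison of alternatives
    return None

print(assess({"contact_on_flank", "radio_silence"}))  # ('flanked', 0.75)
```

Note that the early return is what distinguishes this satisficing scheme from an optimizing one, which would score every situation and pick the best.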
There is a great deal of evidence for this NDM/RPD style of C2 on the battlefield. Pascual and Henderson
(1997) collected and coded communications from two live exercises, and found the most support (based on
type of message) for RPD over seven other models, especially under high workload. Serfaty et al. (1997)
studied the role of expertise in C2, which also supports RPD because experience forms the basis of the
cases/situations and diversity/quality of responses to choose from. Adelman et al. (1998) summarize
research on tactical decision-making in the context of the brigade tactical operations center (TOC), and
describe how it fits the RPD model.
Extending these theories of C2 to teams involves distributing the tactical decision-making process over
multiple individuals, such as a command staff. Often the final decision, responsibility, and authority lies
with a single commander who approves the decision. However, prior to this, the staff actively engages in
distributed situation assessment, collecting information from multiple sources and integrating it together to
form a common picture. Salas, Prince, Baker, and Shrestha (1995) provide a good overview of the
distributed nature of team situation awareness and the communication required. The challenge is to work
together to pool various sources and perform information fusion to lift the “fog of war,” identify enemy
intent, and so on. They are jointly committed to the common goal of determining the situation accurately,
and they share information and collaborate accordingly. A report by Sonnenwald and Pierce (1998)
discusses the organization of the battalion TOC to better support this collaborative process.
A significant advance in the study of teams came from the identification and characterization of team
competencies (Cannon-Bowers, Tannenbaum, Salas, and Volpe, 1995). Team competencies are those
requirements that are needed for effective team performance. Team competencies can be divided into:
knowledge, skills, and attitudes. Knowledge refers to factual information about the domain, mission, and
team structure that team members must know in order to interact effectively. For example, they need to
know who plays what role, and what the capabilities of their teammates are. Skills refer to the teamwork
processes, such as information exchange, load balancing, and conflict resolution. And attitudes refer to the
motivational determinants of team members’ choices, such as orientation toward teamwork, leadership, and
willingness to accept advice or help.
All three of these areas of competency can be targets for training. Furthermore, all three areas are important
for human behavior representation, especially in synthetic teams. For example, the effects of fatigue,
attrition, or uncertainty on morale can impact team cohesion and performance, and these can be understood
as primarily attitudinal effects. In contrast, confusion over how to re-organize after the loss of a commander,
and inefficiency in determining how to continue the mission with modified role assignments, can be related
to a lack of knowledge competency (especially acute in units with high turnover and young recruits, though
mitigated by having more experienced members on the team).
The decomposition of teamwork requirements into specific competencies (knowledge, skills, and attitudes)
begins to open the door for understanding the relationship between human performance, as determined by
cognition at the individual level, and team performance as a whole. Huey and Wickens (1985) present an in-
depth discussion of these issues in the context of tank crews, as a summary of an NRC-sponsored research
panel. Just as performance of certain individual tasks (operating equipment, monitoring communication
channels, etc.) places demands on cognition (memory, reasoning, attention), so does teamwork. In fact,
teamwork can be thought of as an additional activity that competes with one’s own taskwork for cognitive
resources. Interacting with team members requires attention and effort. In fact, it can be predicted that: 1)
high workload of individual tasks should interfere with teamwork behaviors, such as reducing
communications and synchronization, and 2) this inter-relationship could be influenced by training,
especially through automation of taskwork that frees up cognitive resources for attending to teamwork. This
intrinsic linkage between individual performance and team performance, mediated through cognition, can
serve as the foundation for studies and simulation of many interesting phenomena in the behavior of human teams.
To better understand how teams work, researchers often make a distinction between taskwork and teamwork
(Salas et al., 1992). Taskwork refers to activities individuals do in the course of performing their own parts
of the team’s mission, more or less independently from others. Team members must of course train for
these activities as a pre-requisite to working in the team. However, teamwork refers to those activities
that are explicitly oriented toward interactions among team members and are required for ensuring collective
success. Teamwork processes include: communication, synchronization, load balancing, consensus
formation, conflict resolution, monitoring and critiquing, confirming, and even interpersonal interactions
such as reassurance. It is argued that these activities must be practiced as well to produce a truly effective
team. It is an unfortunate reality that most training in industry and the military focuses on training
individuals for taskwork (such as acquiring knowledge of individual procedures in a cockpit), while
relegating teaching of teamwork to on-the-job training (e.g. indoctrination by peers) in the operational environment.
Because of the importance of the taskwork/teamwork distinction, researchers who study team training have
developed a number of empirical measures to assess the internal processes of teams and correlate them with
external measures of performance (Cannon-Bowers and Salas, 1997). Two ways of assessing teams are
through team outcome measures and team process measures. Team outcome measures are direct measures
of performance, such as time to complete the mission, number of goals achieved, and number of resources
used. Increasing these measures is usually the direct objective of training. However, in order to evaluate a
team and explain why their performance is not optimal, and to give them feedback on how to improve
themselves, team process measures are needed. There has been a great deal of interest in defining
quantifiable team process measures, such as frequency of communications, types of communications,
questioning of decisions, sharing of information, requests for help, and so on. As a specific example,
Serfaty et al. (1998) define several “anticipation ratios,” which quantify the frequency with which team
members actively provide useful information to others versus having to be asked for it (i.e. transfers versus
requests). Improving these internal aspects is only an indirect way of improving a team. However, to the
extent that these process measures are correlated with outcome measures, they can be used to identify
weaknesses in a team and to design targeted training methods that should eventually improve the team’s
overall performance. In simulations, team process measures can be used to gauge how realistic the
performance of a synthetic team (e.g. of agents) is and whether they are achieving their outcomes in a way
that reflects how a team of humans would.
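As an illustration of how such a process measure might be computed, the following sketch derives an anticipation ratio from a coded communication log. The log format, sender names, and message codes are assumptions for illustration, not the coding scheme used by Serfaty et al.:

```python
# Sketch of an anticipation ratio over a coded message log: the ratio of
# information pushed unprompted ("transfer") to information that had to be
# pulled ("request"). Log format and message codes are assumptions.

def anticipation_ratio(log):
    """log: list of (sender, msg_type) tuples; higher ratios suggest
    team members anticipate each other's information needs."""
    transfers = sum(1 for _, msg_type in log if msg_type == "transfer")
    requests = sum(1 for _, msg_type in log if msg_type == "request")
    return transfers / requests if requests else float("inf")

log = [("S2", "transfer"), ("S3", "request"), ("S2", "transfer"),
       ("S3", "transfer"), ("S2", "request")]
print(anticipation_ratio(log))  # 3 transfers / 2 requests = 1.5
```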
Perhaps the three most central team processes, which have received the most attention from researchers, are:
communication, adaptiveness, and decision-making. All are essential for team performance.
Communication among team members can serve a number of different purposes, including coordination of
team activities (synchronization), information exchange (especially building situation awareness), and to
support other teamwork processes (load-balancing, requests for help, decision-making,
feedback/monitoring/self-correction, etc.). Some studies of communications in various operational settings
have cataloged the types of messages exchanged along various dimensions, such as task-oriented versus
team-oriented, behavioral versus cognitive, etc. (Gordon et al., 2001). The effect of various factors on both
the frequency and types of communications can be assessed. For example, highly-effective teams tend to
communicate more, and they tend to talk more about teamwork than taskwork (Orasanu, 1990).
Interestingly, however, it has been observed that under particularly high workload (or high tempo
operations), communication in the most effective teams can actually decrease, presumably because team
members begin to rely more on implicit coordination through well-developed shared mental models
(Serfaty, Entin, and Johnston, 1998). Schraagen and Rasker (2001) have followed up on this work by
distinguishing between exchange of team information versus situation information, which are found to differ
when handling novel versus routine situations.
Adaptiveness is also an important competency for effective teamwork (Kozlowski, 1998; Klein and Pierce,
2001). There are several notions of adaptiveness in teams. On the one hand, it can refer to responses such
as load-balancing or task re-allocation, as well as resource re-distribution. As team members become over-
loaded, they might seek to offload some of their taskwork on less loaded cohorts (Porter et al., 2002).
Another notion of adaptation is shifting among strategies or re-planning (see discussion in Klein and Pierce,
2001). In general, members on effective teams must learn to recognize the cues of excessive workload and
opportunities for initiating these re-balancing or re-planning activities, they must have basic knowledge of
the capabilities of their teammates (a knowledge competency), and they must practice minimizing the
overhead of the transfer. Serfaty, Entin, and Johnston (1998) describe the negative effects of stress on this
type of team adaptiveness, and suggest ways to train to mitigate this. On the other hand, some situations call
for a more extreme type of adaptiveness in which team members actually re-define their roles (ref?).
Though more drastic, team re-organization also requires similar competencies for recognition, knowledge
of roles, and efficiency of process. Regardless of type of adaptation, special emphasis is often placed on
developing meta-cognitive skills within the team for self-monitoring performance to identify when and how
to adapt (Kozlowski, 1998; Cohen, Freeman, and Wolf, 1996).
Finally, decision-making is a process most closely associated with the ultimate goal of most teams (Orasanu
and Salas, 1993). Of course decision-making strongly relies on communication (Ilgen et al., 1995). There
have been many studies on how teams make (distributed) decisions. One of the advantages of teams is that
expertise does not have to be centralized. However, the team members must work together to derive a
consistent opinion, i.e. consensus. Perhaps the most basic mechanism is by voting. Bayesian methods are
probably more correct, but not often used by humans. Kleinman et al. (1992) describe normative models of
how distributed teams combine evidence, and how this is enhanced by a hierarchical structure (with a team
leader). However, consensus formation is highly influenced by inter-personal factors such as trust and
assertiveness. Consensus formation also depends on the relative expertise of the individuals, and team
members often modify their opinions as they see the votes of others (Sorkin, 2001).
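A minimal sketch of expertise-weighted voting, one simple consensus mechanism of the kind discussed above (the members, labels, and expertise weights are invented):

```python
# Sketch of expertise-weighted voting for team consensus: each member votes
# for a situation label, and votes are weighted by an assumed expertise score.
from collections import defaultdict

def weighted_vote(votes, expertise):
    """votes: {member: label}; expertise: {member: weight, default 1.0}.
    Returns the label with the highest total weight."""
    tally = defaultdict(float)
    for member, label in votes.items():
        tally[label] += expertise.get(member, 1.0)
    return max(tally, key=tally.get)

votes = {"cdr": "attack", "s2": "defend", "s3": "defend"}
expertise = {"cdr": 0.5, "s2": 1.0, "s3": 1.0}
print(weighted_vote(votes, expertise))  # defend (2.0 vs. 0.5)
```

A model closer to Sorkin's would also let members revise their votes after seeing others' choices; this one-shot version only shows the basic tallying.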
There exist a number of mathematical models of teams that can be used to make predictions of team
performance. Sorkin’s (2001) model of consensus formation is based on signal processing theory.
Kleinman et al. (1992) describe a model of team resource allocation called TDS that is based on solving a
weighted bipartite-graph matching problem. In a separate model called DREAM, Kleinman et al. (1992)
also describe how to reformulate a team task-allocation problem as an optimization problem that can be
solved using dynamic programming methods. Coovert has popularized the use of Petri Nets to model the
concurrent activities within a team and their inter-dependencies (Coovert and McNellis, 1992; Coovert,
Craiger, and Cannon-Bowers, 1995). Of course, prediction of team performance also depends on a wide
variety of other factors as well, such as individual competencies, diversity of expertise on the team,
individual workload (as might be measured by the TLX task-load index developed by NASA; Hart and
Staveland, 1988), and the degree of inter-dependency required to handle a particular situation.
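The weighted bipartite-matching formulation of task allocation can be illustrated with a tiny brute-force solver. The payoff matrix is invented, and a real model would use an efficient matching algorithm (e.g. the Hungarian method) rather than enumerating permutations:

```python
# Illustrative task allocation as weighted bipartite matching: assign each
# team member to exactly one task so as to maximize total payoff.
# Brute-force enumeration is used only for clarity on small teams.
from itertools import permutations

def best_assignment(payoff):
    """payoff[i][j] = value of member i performing task j (square matrix).
    Returns (assignment, total) where assignment[i] is member i's task."""
    n = len(payoff)
    best = max(permutations(range(n)),
               key=lambda perm: sum(payoff[i][perm[i]] for i in range(n)))
    return list(best), sum(payoff[i][best[i]] for i in range(n))

payoff = [[3, 1, 2],
          [2, 4, 6],
          [5, 2, 1]]
print(best_assignment(payoff))  # ([1, 2, 0], 12)
```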
Developments in the Understanding of Teamwork through Studies of
Air Combat, and What We Can Learn From Team Training Applications
Much of the research on teamwork and tactical decision-making has been done in the context of air combat
and anti-air warfare. This includes both offensive operations, such as the control of strike fighters in enemy
airspace by an AWACS (airborne warning and control system), as well as defensive operations such as the
protection of battleships from potential incoming threats (missiles and aircraft). Within this air-warfare
context, many studies have been done on how teams interact to perform their tasks, how they respond to
stress, etc. Several cognitive task analyses of AWACS weapons director teams are available (Fahey et al.,
1997; MacMillan et al., 1998; Schiflett et al., 2000). Also, there have been several studies that have
examined team naturalistic decision-making processes in air defense (AD/AAW) teams in the Combat
Information Center (CIC) on ships such as Aegis cruisers (Kaempf et al., 1996; Zachary et al., 1998;
Leibhaber and Smith, 2000). These studies generally support the view that C2 teams in this domain are
carrying out a distributed, recognitional process that is focused on gathering and fusing information to
produce team situational awareness. Similar behavior can be predicted for command groups and battle staff
teams in ground combat, who face analogous challenges of uncertainty about spatially distributed and dynamically changing enemy forces.
Much of the research in this area throughout the decade of the 1990’s was sponsored by the TADMUS
program (Tactical Decision-Making Under Stress; Collyer and Malecki, 1998) through the Naval Air
Warfare Center, which was a congressionally-mandated effort to study and develop improvements to the
team training process in response to the accidental downing of a commercial Iranian Airbus by the USS
Vincennes in 1988. The accident was attributed to a breakdown in teamwork and group decision-making in
a high-stress environment. Many aspects of teamwork were studied in this context, including effects of
stress, leadership, communication, adaptiveness, monitoring and self-correction, etc., and recommendations
were made for development of new training methods to enhance team effectiveness. A good example is the
TACT training method (Serfaty, Entin, Johnston, 1998), which was designed to get team members to adapt
more effectively to changing workloads under stress through practicing scenarios that reinforce the use of
shared mental models for implicit coordination.
To place all of this on a rigorous basis, a great deal of effort has gone into defining empirical measures of
team performance in order to assess teams, identify deficiencies, provide informative feedback, and design
customized interventions that address specific weaknesses (Cannon-Bowers and Salas, 1997; Johnston et al.,
1997). One example is the ATOM (Anti-Air Teamwork Observation Measure; Smith-Jentsch et al., 1998),
which is a rating system that team evaluators can use to assess teams. It includes dimensions such as:
o seeking information from all sources
o passing information to appropriate persons before being asked
o providing “big picture” situation updates
o using proper phraseology
o providing complete internal and external reports
o avoiding excess chatter
o ensuring communications are audible and ungarbled
o correcting team errors
o providing and requesting backup assistance when needed
o providing guidance or suggestions to team members
o stating clear team and individual priorities
While specific actions and events in an AAW scenario can be linked into this hierarchy, it clearly
generalizes to many other domains as well. Furthermore, besides its application to assessment and training,
it reveals the important internal determinants that affect the outward behavior and performance of a team.
Therefore, simulations and/or models of teams need to take these processes into account to generate realistic behavior.
Various air-warfare simulations have been developed as tools both for modeling team performance and for
implementing and experimenting with novel simulation-based training methodologies (Johnston, Poirier,
and Smith-Jentsch, 1998). Perhaps the most widely known and used simulation is DDD (Dynamic
Distributed Decision-Making; Kleinman, Young, and Higgins, 1996), which can be used to simulate a
variety of teamwork domains (especially those involving use of workstations with a scope or map and
moving threats or targets) and has a number of built-in process measures to facilitate team research. DDD
has been used for a broad range of teamwork research studies (Ellis et al., 2001; Entin, 2001), as well as real
exercises in distributed mission training (Coovert et al., 2001).
Simulating Team Behavior with Multi-Agent Systems
Recent advances in intelligent agent research have opened up possibilities for more sophisticated simulations
of teamwork and cooperative behavior. Agent models of teamwork are based on key concepts such as joint
intentions (Tambe, 1997; Cohen and Levesque, 1991) and shared plans (Kraus and Grosz, 1996), which
formally encode how teams do things together. These concepts are derived from the BDI framework (Rao
and Georgeff, 1995), which postulates the importance of representing and reasoning about mental states
such as beliefs, desires, and intentions when interacting with other agents. Jennings’ (1995) GRATE system
exemplifies how useful BDI concepts (especially joint responsibilities) can be to producing complex
coordinated behaviors (the main application of GRATE is a distributed industrial manufacturing and
distribution system). Another popular environment for developing and evaluating models of agent
teamwork is robotic soccer (Kitano et al., 1997); a number of new methods for communication,
coordination, and planning have been developed for synthetic-soccer competitions (Stone and Veloso, 1999).
Perhaps the most widely known agent-based teamwork system is STEAM (Tambe, 1997). STEAM is a multi-agent
system built on top of Soar, a production-system-based agent architecture, to which it adds rules for
establishing and maintaining commitments to joint intentions. STEAM produces robust behaviors even in
unanticipated situations by automatically generating communications among team members to reconcile
beliefs about achievability of goals and to re-assign tasks. For example, this was illustrated in the behavior
of a simulated company of Army attack helicopters in a situation where the lead aircraft gets shot down;
with STEAM, the company was able to re-group and continue with the mission. STEAM is also used in
TacAirSoar (Jones et al., 1999), which is a module that can be used to control aircraft and produce tactical
behavior in distributed simulations of air combat missions.
Other multi-agent systems that employ some form of teamwork include RETSINA (Paolucci et al., 1999),
SWARMM (Tidhar, 1998), and CAST (Yen et al., 2001). All of these have been applied to military combat
simulations. In RETSINA, agents work to support humans by gathering information or constructing plans
that will achieve goals in a combat environment. The agents’ activities are fairly de-coupled, each working
more or less independently on separate parts of a task; opportunities for helping each other are discovered
through a “match-making” intermediary. RETSINA has been incorporated into the CoABS grid
(http://coabs.globalinfotek.com). SWARMM was specifically designed as a system for simulating air
combat teams. It breaks teams of fighters down into well-defined roles, such as lead aircraft (commander)
and wingman, which determine each team member’s actions in a plan (mission or maneuver). In CAST,
more general role assignment is permitted through a flexible language for team structure and process
description. The agents decide dynamically during a scenario who is the most appropriate member to carry
out a task among several that can play the role, and the others then automatically play backup. CAST also
uses the description of the team as a rudimentary form of a shared mental model to automatically infer
information exchange opportunities and derive information flow based on analysis of needs of teammates.
For purposes of generating team behavior, agent-based models require the usual types of inputs from a
simulation environment: information on the state of the scenario as it evolves, such as coordinates of known
enemy positions, events such as explosions or equipment failures, discovery of obstacles such as minefields
or destroyed bridges, weather conditions, etc. Of course, agents should only have access to fair information
that humans would have, such as spot reports or sensor readings, and should not be allowed to take
advantage of knowing “ground truth.” This forces the agents to be active in collecting and updating
information, as a human team would. From this filtered information, teams of agents can work together to
make decisions. To support this, it is essential for the agents to have a mechanism for communicating with
each other, which mediates their interactions and teamwork processes. In some cases, agents may
communicate directly with each other using arbitrary software protocols for message-passing (e.g. sockets,
TCP/IP). However, in other cases, it may be useful to restrict agents to (simulations of) only real-world communication mechanisms, such as radio networks or battlefield LANs, so that the impact of delayed or degraded messages, or of interruptions in communications networks, on team performance can be explored.
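A simulated communication channel of this kind can be sketched as follows (all parameters are illustrative, not drawn from any particular system): messages arrive only after a latency and may be dropped outright, so the effect of degraded communications on team performance can be measured by varying two numbers.

```python
# Sketch of a degraded radio net: instead of a perfect software channel,
# each message suffers a fixed delivery latency (in simulation ticks)
# and an independent drop probability.

import random

class RadioNet:
    def __init__(self, latency_ticks=2, drop_prob=0.2, seed=0):
        self.latency = latency_ticks
        self.drop_prob = drop_prob
        self.rng = random.Random(seed)
        self.in_flight = []   # list of (deliver_at_tick, recipient, message)
        self.tick = 0

    def send(self, recipient, message):
        # A dropped message simply never enters the in-flight queue,
        # modeling jamming or interference.
        if self.rng.random() >= self.drop_prob:
            self.in_flight.append((self.tick + self.latency, recipient, message))

    def step(self):
        """Advance one simulation tick; return messages delivered this tick."""
        self.tick += 1
        delivered = [(r, m) for t, r, m in self.in_flight if t <= self.tick]
        self.in_flight = [x for x in self.in_flight if x[0] > self.tick]
        return delivered

net = RadioNet(latency_ticks=2, drop_prob=0.0)   # no drops: deterministic
net.send("wingman", "spot report: enemy at NK1234")
assert net.step() == []    # tick 1: the report is still in flight
assert net.step() == [("wingman", "spot report: enemy at NK1234")]  # tick 2
```

Raising `drop_prob` or `latency_ticks` then lets an experimenter observe, for example, how much redundancy in team communication is needed before coordination degrades.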
Although great progress has been made in simulating coordinated behavior, multi-agent systems still have a
long way to go to produce the full range of behaviors exhibited by human teams. For example, none of the
existing systems explicitly attempts to model situation awareness, which is widely recognized to be a
primary driver of behavior in human teams, especially for information gathering activities. Furthermore,
they do not follow an NDM process, such as looking for features and making satisficing decisions. Perhaps
this is because researchers in multi-agent systems do not feel obliged to respect the constraints of human
cognition, given that agents can act much more rapidly and precisely without limits on memory, accuracy, or
attention. However, for realistic human behavior representation of teams, it would be important to take into
account things such as biases in decision-making, or effects of fatigue or stress. In particular, an accurate
model of the situation assessment process is especially needed for generating realistic information flow and
communication within the team. Agent researchers are often not concerned about the faithfulness of the internal team process, as long as the external behavior or performance is adequate (perhaps even better than that of human teams). But for human-behavior researchers, the interactions within the team and the processes by which it operates are just as important to model.
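The satisficing strategy mentioned above can be contrasted with optimizing in a few lines (the courses of action and their scores below are hypothetical): an NDM-style decision maker accepts the first option that clears an acceptability threshold, rather than scoring every option to find the best one.

```python
# Sketch of satisficing: options are considered in recognition order,
# and the search stops at the first one that is "good enough".

def satisfice(options, evaluate, threshold):
    for option in options:
        if evaluate(option) >= threshold:
            return option          # acceptable: stop searching
    return None                    # nothing cleared the bar

courses_of_action = ["frontal assault", "flanking move", "withdraw"]
scores = {"frontal assault": 0.3, "flanking move": 0.8, "withdraw": 0.6}

choice = satisfice(courses_of_action, scores.get, threshold=0.7)
# returns "flanking move" without ever evaluating "withdraw"
```

Under time pressure this is cheaper than exhaustive evaluation, which is precisely the human constraint that most multi-agent systems do not attempt to reproduce.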
Conclusions
Teamwork is a central feature of many activities in the modern military. Accurate models of teamwork,
including distributed decision making and information flow, are needed for developing and evaluating new
equipment and procedures through human-behavior representation (HBR) studies. In this paper, we have
surveyed recent literature on the structure of teams and mechanisms of teamwork. Teams are viewed as
groups of inter-dependent individuals working together to accomplish a common goal. Effective teamwork
requires a number of competencies, which can be trained. However, the key insight is that team members
must possess a mutual awareness (shared mental model), which enables them to interact, anticipate each
other’s actions and needs, and carry out team processes like communication, coordination, and helping/back-
up. These processes underlie more advanced teamwork activities, such as distributed situation awareness
and command and control, of particular relevance to the military. It is important to capture the realistic
aspects of human teams for HBR studies, such as the effects of workload on communication or coordination,
or reaction to time-pressures and stress. While intelligent agents have a great potential for modeling
teamwork in HBR simulations, much work remains to be done to accurately represent cognitive aspects of
human team members, such as satisficing decision making, heuristics for dealing with uncertainty, biases, and workload limitations, and to capture the effects of these cognitive aspects on team interactions in real human teams.
References
Adelman, L., Leedom, D.K., Murphy, J., and Killam, B. (1998). Technical Report: Description of Brigade
C2 Decision Process. Army Research Lab, Aberdeen Proving Grounds, MD.
Blickensderfer, E., Cannon-Bowers, J.A., and Salas, E. (1998). Cross-Training and Team Performance. in
Making Decisions Under Stress. J.A. Cannon-Bowers and E. Salas (eds.). American Psychological
Association: Washington, DC. pp. 299-311.
Cannon-Bowers, J.A., Salas, E., and Converse, S.A. (1993). Shared Mental Models in Expert Team Decision
Making. in: Individual and Group Decision Making: Current Issues. N.J. Castellan, Jr. (ed.). Erlbaum:
Hillsdale, NJ. pp. 221-246.
Cannon-Bowers, J.A., Tannenbaum, S.I., Salas, E., and Volpe, C.E. (1995). Defining Competencies and
Establishing Team Training Requirements. in Team Effectiveness and Decision Making in Organization.
R.A. Guzzo and E. Salas (eds.). Jossey-Bass Publishers: San Francisco. pp. 333-380.
Cannon-Bowers, J.A., Salas, E., Blickensderfer, E.L., and Bowers, C.A. (1998). The Impact of Cross-
Training and Workload on Team Functioning: A Replication and Extension of Initial Findings. Human Factors.
Cannon-Bowers, J.A. and Salas, E. (1997). A Framework for Developing Team Performance Measures in
Training. in: Team Performance Assessment and Measurement: Theory, Methods, and Applications. M.T.
Brannick, E. Salas, and C. Prince (eds.). Erlbaum: Hillsdale, NJ. pp. 45-62.
Cohen, M.S., Freeman, J.T., and Wolf, S. (1996). Metacognition in Time-Stressed Decision Making: Recognizing, Critiquing, and Correcting. Human Factors, 38:206-219.
Cohen, P.R. and Levesque, H.J. (1991). Teamwork. Nous, 25:487-512.
Collyer, S.C. and Malecki, G.S. (1998). Tactical Decision Making Under Stress: History and Overview. in:
Making Decisions Under Stress. J.A. Cannon-Bowers and E. Salas (eds.). American Psychological
Association: Washington, DC. pp. 3-15.
Coovert, M.D., Craiger, J.P., and Cannon-Bowers, J.A. (1995). Innovations in Modeling and Simulating
Team Performance: Implications for Decision Making. in Team Effectiveness and Decision Making in
Organization. R.A. Guzzo and E. Salas (eds.). Jossey-Bass Publishers: San Francisco. pp. 149-203.
Coovert, M.D. and McNellis, K. (1992). Team Decision Making and Performance: A Review and Proposed
Modeling Approach Employing Petri Nets. in Teams: Their Training and Performance. R.W. Swezey and
E. Salas (eds.). Ablex: Norwood, NJ. pp. 247-280.
Coovert, M.D., Riddle, D., Ho, P., Miles, D.E., and Gordon, T.R. (2001). Measurement and Feedback
Strategies for Distributed Team Training Using the Internet. Proceedings of the 45th Annual Meeting of the
Human Factors and Ergonomics Society.
Drillings, M. and Serfaty, D. (1997). Naturalistic Decision Making in Command and Control in:
Naturalistic Decision Making. C.E. Zsambok and G. Klein (eds.). Erlbaum: Mahwah, NJ. pp. 71-80.
Ellis, A., Hollenbeck, J.R., Ilgen, D.R., Porter, C., West, B.J., and Moon, H. (2001). Capacity,
Collaboration, and Commonality: A Framework for Understanding Team Learning. in Proceedings of the
Sixth International Command and Control Research and Technology Symposium.
Entin, E. (2001). The Effects of Leader Role and Task Load on Team Performance and Process. in:
Proceedings of the Sixth International Command and Control Research Technology Symposium.
Endsley, M. (1995). Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors, 37(1):32-64.
Fahey, R.P., Rowe, A.L., Dunlap, K.L., and deBoom, D.O. (1997). Synthetic Task Design (1): Preliminary
Cognitive Task Analysis of AWACS Weapons Director Teams. Technical report. Air Force Research Lab,
Brooks Air Force Base, San Antonio, TX.
Grosz, B. and Kraus, S. (1996). Collaborative Plans for Complex Group Action. Artificial Intelligence, 86:269-357.
Gordon, T.R., Coovert, M.D., Riddle, D.L., Miles, D.E., Hoffman, K.A., King, T.S., Elliot, L.R., Schiflett,
S.G. and Chaiken, S. (2001). Classifying C2 Decision Making Jobs Using Cognitive Task Analyses and
Verbal Protocol Analysis. Proceedings of the Sixth International Command and Control Research and Technology Symposium.
Gorman, P. (1980). The Command Post is Not a Place. http://www.ida.org/DIVISIONS/sctr/cpof/
Hart, S.G. and Staveland, L. (1988). Development of NASA-TLX (Task Load Index): Results of Empirical
and Theoretical Research. in: Human Mental Workload. P.A. Hancock and N. Meshkati (eds.). Elsevier:
Amsterdam. pp. 139-183.
Huey, B.M. and Wickens, C.D. (1993). Workload Transition: Implications for Individual and Team
Performance. National Academy Press: Washington, DC.
Ilgen, D., Major, D.A., Hollenbeck, J.R., and Sego, D.J. (1995). Raising An Individual Decision-Making
Model to the Team Level: A New Research Model and Paradigm. in Team Effectiveness and Decision
Making in Organization. R.A. Guzzo and E. Salas (eds.). Jossey-Bass Publishers: San Francisco. pp. 113-
Jennings, N.R. (1996). Controlling Cooperative Problem Solving in Industrial Multi-Agent Systems Using
Joint Intentions. Artificial Intelligence, 75:195-240.
Johnston, J.H., Poirier, J., and Smith-Jensch, K.A. (1998). Decision Making Under Stress: Creating a
Research Methodology. in: Making Decisions Under Stress. J.A. Cannon-Bowers and E. Salas (eds.).
American Psychological Association: Washington, DC. pp. 39-59.
Jones, R.M., Laird, J.E., Nielsen, P.E., Coulter, K.J., Kenny, P., and Koss, F.V. (1999). Automated
Intelligent Pilots for Combat Flight Simulation. AI Magazine, 20(1):27-41.
Kaempf, G.L., Klein, G., Thordsen, M.L., and Wolf, S. (1996). Decision Making in Complex Naval Command-and-Control Environments. Human Factors, 38:220-231.
Kitano, H., Kuniyoshi, Y., Noda, I., Asada, M., Matsubara, H. and Osawa, E. (1997). RoboCup: A
Challenge Problem for AI. AI Magazine, 18(1):73-85.
Klein, G. (1993). A Recognition-Primed Decision (RPD) Model of Rapid Decision Making. in: Decision
Making in Action: Models and Methods. G.A. Klein, J. Orasanu, R. Calderwood, and C.E. Zsambok (eds).
Ablex: Norwood, NJ. pp. 138-147.
Klein, G. (1997). The Recognition-Primed Decision (RPD) Model: Looking Back, Looking Forward. in:
Naturalistic Decision Making. C.E. Zsambok and G. Klein (eds.). Erlbaum: Mahwah, NJ. pp. 285-292.
Klein, G. (1999). Sources of Power: How People Make Decisions. MIT Press.
Kozlowski, S.W.J. (1998). Training and Developing Adaptive Teams: Theory, Principles, and Research. in:
Making Decisions Under Stress. J.A. Cannon-Bowers and E. Salas (eds.). American Psychological
Association: Washington, DC. pp. 115-153.
Klein, G. and Pierce, L. (2001). Adaptive Teams. in Proceedings of the Sixth International Command and
Control Research and Technology Symposium.
Kleinman, D.L., Luh, P.B., Patipati, K.R., and Serfaty, D. (1992). Mathematical Models of Team
Performance: A Distributed Decision-Making Approach. in Teams: Their Training and Performance. R.W.
Swezey and E. Salas (eds.). Ablex: Norwood, NJ. pp. 177-217.
Kleinman, D.L., Young, P.W., and Higgins, G. (1996). The DDD-III: A Tool for Empirical Research in
Adaptive Organizations. in Proceedings of the 1996 Command and Control Research and Technology Symposium.
Leibhaber, L.J. and Smith C.A.P. (2000). Naval Air Defense Threat Assessment: Cognitive Factors and
Model. Proceedings of the International Command and Control Research and Technology Symposium.
MacMillan, J., Serfaty, D., Young, P., Klinger, D., Thordsen, M., and Cohen, M. (1998). A System to Enhance Team Decision-Making Performance: Phase 1 Final Report. Technical report AP-R-1102. Air Force Research Lab, Brooks Air Force Base, San Antonio, TX.
Orasanu, J.M. (1990). Shared Mental Models and Crew Decision Making. Technical report 46. Cognitive
Science Lab, Princeton University. Princeton, NJ.
Orasanu, J.M. and Salas, E. (1993). Team Decision Making in Complex Environments. in: Decision Making
in Action: Models and Methods. G.A. Klein, J. Orasanu, R. Calderwood, and C.E. Zsambok (eds). Ablex:
Norwood, NJ. pp. 327-345.
Paolucci, M., Kalp, D., Pannu, A., Shehory, O., and Sycara, K. (1999). A Planning Component for
RETSINA Agents. Proceedings of the Sixth International Workshop on Agent Theories, Architectures, and Languages.
Pascual, R. and Henderson, S. (1997). Evidence of Naturalistic Decision Making in Military Command and
Control. in: Naturalistic Decision Making. C.E. Zsambok and G. Klein (eds.). Erlbaum: Mahwah, NJ. pp.
Rao, A.S. and Georgeff, M.P. (1995). BDI Agents: From Theory to Practice. Proceedings of the First
International Conference on Multi-Agent Systems, 312-319.
Rouse, W.B., Cannon-Bowers, J.A., and Salas, E. (1992). The Role of Mental Models in Team Performance
in Complex Systems. IEEE Transactions on Systems, Man, and Cybernetics, 22:1296-1308.
Salas, E., Dickinson, T.L., Tannenbaum, S.I., and Converse, S.A. (1992). Toward an Understanding of
Team Performance and Training. in: Teams, Their Training and Performance. R.W. Swezey and E. Salas
(eds.). Ablex: Norwood, NJ. pp. 3-29.
Salas, E., Prince, C., Baker, D.P., and Shrestha, L. (1995). Situation Awareness in Team Performance:
Implications for Measurement and Training. Human Factors, 37(1):123-136.
Schiflett, S.G., Elliott, L.R., Dalrymple, M., Tessier, P.A., and Cardenas, R. (2000). Assessment of
Command and Control Team Performance in Distributed Mission Training Exercises. Technical report, Air
Force Research Lab, Brooks Air Force Base, San Antonio, TX.
Schmitt, J.F. and Klein, G.A. (1996). Fighting in the Fog: Dealing with Battlefield Uncertainty. Marine
Corps Gazette, 80:62-69.
Schraagen, J.M. and Rasker, P.C. (2001). Communication in Command and Control Teams. Proceedings of
the Sixth International Command and Control Research and Technology Symposium.
Serfaty, D., Entin, E.E., and Johnston, J.H. (1998). Team Coordination Training. in: Making Decisions
Under Stress. J.A. Cannon-Bowers and E. Salas (eds.). American Psychological Association: Washington,
DC. pp. 221-245.
Serfaty, D., MacMillan, J., Entin, E.E., and Entin, E.B. (1997). The Decision-Making Expertise of Battle
Commanders. in: Naturalistic Decision Making. C.E. Zsambok and G. Klein (eds.). Erlbaum: Mahwah, NJ.
Smith-Jensch, K.A., Johnston, J.H., and Payne, S. (1998). Measuring Team-Related Expertise in Complex
Environments. in: Making Decisions Under Stress. J.A. Cannon-Bowers and E. Salas (eds.). American
Psychological Association: Washington, DC. pp. 61-87.
Snook, S.A. (2002). Friendly Fire: The Accidental Shootdown of US Black Hawks. Princeton University Press.
Sorkin, R.D., Hays, C.J., and West R. (2001). Signal-Detection Analysis of Group Decision Making.
Psychological Review, 108:183-203.
Sonnenwald, D.H. and Pierce, L.G. (1998). Optimizing Collaboration in Battalion Staff Elements. Technical
Report ARL-CR-435. Army Research Lab, Aberdeen Proving Ground, MD.
Stone, P. and Veloso, M. (1999). Task Decomposition and Dynamic Role Assignment for Real-Time
Strategic Teamwork. in Proceedings of the Fifth International Workshop on Agent Theories, Architectures,
and Languages, pp. 293-308.
Tambe, M. (1995). Recursive Agent and Agent-Group Tracking in a Real-Time Dynamic Environment.
Proceedings of the First International Conference on Multi-Agent Systems, 368-375.
Tambe, M. (1997). Towards Flexible Teamwork. Journal of Artificial Intelligence Research, 7:83-124.
Tidhar, G., Heinze, C., and Selvestrel, M.C. (1998). Flying Together: Modeling Air Mission Teams. Applied Intelligence, 8:195-218.
Volpe, C.E., Cannon-Bowers, J.A., Salas, E., and Spector, P. (1996). The Impact of Cross-Training on
Team Functioning: An Empirical Investigation. Human Factors, 38:87-100.
Zsambok, C.E. and Klein, G. (1997). Naturalistic Decision Making. Lawrence Erlbaum: Mahwah, NJ.
Zachary, W.W., Ryder, J.M., and Hicinbothom, J.H. (1998). Cognitive Task Analysis and Modeling of
Decision Making in Complex Environments. in: Making Decisions Under Stress. J.A. Cannon-Bowers and
E. Salas (eds.). American Psychological Association: Washington, DC. pp. 315-344.