Requirements for Building Technical Teams
By Charlie Pellerin, author of How NASA Builds Teams (Wiley, 2009)
This paper describes lessons and results from working with over 1000 NASA
project, engineering and management teams from spring 2003 to the present. Dr.
Ed Hoffman, the Director of NASA’s Academy of Program/Project and Engineering
Leadership (“APPEL”), funded and guided this activity. NASA formed
the APPEL after the Challenger explosion to prevent future space accidents. Ed
focused on our processes after back-to-back Mars mission failures, responding
to strong direction from NASA Administrator, Dan Goldin.

Why NASA Builds Teams
Sophisticated Review Boards investigate space failures like Challenger’s
explosion, Hubble’s flawed mirror, and Columbia’s disintegration very
thoroughly. Our country spares no expense in finding the root cause of these
tragic events. In every case, these investigations named “social factors” as the
ultimate causes, not the obvious technical errors.

For example, Diane Vaughan (The Challenger Launch Decision, 1996) named
“normalization of deviance” as a root cause of the Shuttle’s explosion. She noted
that delaying a Shuttle launch required a much stronger technical argument than
proceeding. I was NASA’s Director of Astrophysics and led the Hubble Space
Telescope development team for eight years. After a successful launch, we
found that we were responsible for arguably the biggest screw-up in the history of
science. The $1.7B telescope could not focus! When it looked like things could
not get worse, the Failure Review Board named “leadership failure” as root
cause. After I put together the mission to repair the telescope, NASA promoted
me, and awarded me a second Outstanding Leadership Medal. I spent the last
15 years understanding how flawed “social contexts” cause failures of all kinds.
My book, How NASA Builds Teams (Wiley, 2009) documents both my journey
and findings. Finally, the Columbia Accident Investigation Board “adopted” Diane
Vaughan and her “social cause” reasoning in their findings (2003).

The Duality of NASA Team Performance
NASA project teams, like all project teams, need two complementary abilities.
They must have “hard side” technical knowledge (e.g. from university education)
and use project processes (e.g. from the PMBOK®). This is completely obvious
to members of technical teams performing complex projects.

They must also attend, perhaps equally, to the “soft-side” aspects of efficient
teamwork. This is often not obvious to technical team members. Perhaps this is




because becoming an expert technically is really difficult and consuming for
most—little energy remains for explorations into the “soft-side.” Perhaps it is
because advanced academic pursuits value individual performance far more than
team performance. In any case, technically trained people are frequently
resistant to team development activities. This paper discusses team development
activities that technical teams broadly and enthusiastically embrace.

Requirements for Efficient Technical Team Development
What are the requirements for a teambuilding process that technical teams
would enthusiastically embrace? What do you think about these requirements?
  1. The core construct must be logical and durable, not management's “flavor-
      of-the-month;”
  2. Assessments must be brief, clear, and actionable;
  3. Team members need quantitative data showing effectiveness of the
      teambuilding processes, like everything else they do;
  4. Development processes must be sufficiently appealing that people want to
      use them; and
  5. Team members want to see progress that justifies their time “off the job.”

We now describe team development processes that meet these requirements.

   Requirement 1: The core construct must be logical and durable, not
   management's “flavor-of-the-month”

Our opening conversation with a new team leader (e.g. a project manager (PM) or
functional lead) often goes something like this. The PM says, “I would like to
improve my team’s performance, but you must promise no touchy-feely.” Our
response: “We promise no touchy-feely. What do you want?” PM: “I want an
atmosphere of mutual respect, where people feel included, with high creativity
and clear organization. Can you provide that without touchy-feely?” “We sure
can.” (Can you see the irony in this conversation?) The PM says, “OK, how do we
get started?” We respond, “We always begin with an eight-behavior Team
Development Assessment. We need to benchmark your team’s performance
against ‘peer’ teams so you can decide what you want to do next.” The team
leader provides team members’ e-mail addresses (typically about 24), and off we
go.

How do we take people into social development and avoid touchy-feely? We
frame the work as managing human behavior by using a coordinate system to
manage social context. (Technical people like coordinate systems.)

We explain that there is a “social field” that drives peoples’ collective behaviors
as surely as bar magnets align fine iron filings. (Technical metaphors are helpful
as well.) We ask, “Would you behave differently in each of these social
fields/contexts?”
   • Making or receiving a marriage proposal;
   • Making your first briefing to top management;
   • Having dinner for the first time with the family of your spouse-to-be;
   • At your bachelor or bachelorette party;
   • When hijackers take over your honeymoon flight?
Would an observer who could only see your behaviors easily determine which of
these contexts you were experiencing? Of course, they could. If your behaviors
were not appropriate to the context, would others sanction you? Would you
receive a kick under the table from your spouse-to-be if you behaved
inappropriately?

Context and Character
How powerful is context? Malcolm Gladwell (The Tipping Point, 2000) argues
that our character has more to do with environment / context than who we are
innately. He says, “…the reason that most of us seem to have a consistent
character is that most of us are really good at controlling our environment.” This
claim is astounding. Does it bother you? The notion that character is primarily a
function of social context troubled me greatly when I first read it. I believed, for
example, that my consistently good and ethical behaviors were from upbringing.
The fact is that if I had a gun in my hand at the height of my divorce tension, I
might be in prison now. Similarly, I believed that my children’s good character
flowed primarily from their upbringing. I fully bought into the “inside-out” theories.

I am now convinced that Gladwell is correct in his claim that context trumps
character. My test of a theory is that of any social or physical scientist—does it
explain observed reality? Gladwell’s premise explains many behaviors, for
example, the U. S. Congress, the White House, and hostile divorce behaviors as
in the movie “War of the Roses.”

Context and Airline Crashes
During the early 1990s, Korean Air Lines (KAL) was crashing big jets at 17 times
the industry average (Outliers, 2008). Things were so bad that the president of
Korea refused to fly on KAL planes. The cause was mysterious. KAL trained and
certified pilots the same as the rest of the industry. Alteon, a subsidiary of
Boeing, finally observed what happened in the cockpit. When the Captain was
flying, there was no role
for the first officer because of the rigid Confucian hierarchy in Korean society.
Even with an impending crash, the first officer had to speak politely and
deferentially to the Captain. Modern jets require two people to fly them. Typically,
one person flies the airplane and the other monitors the radio and manages the
engineering systems. Mismanaged social contexts crash 747 airliners.

Using a Coordinate System to Analyze Context
Therefore, if we emplace high-performance team contexts into NASA teams, we
can enhance performance and avoid accidents. How can we identify the
characteristics of high-performance team contexts (and effective leaders)? A
popular expression in physics is, “Choosing the right coordinate system turns an
impossible problem into two really hard ones.”


Since we are dealing with human behavior, we turn to one of the master
psychologists of all time, Carl Jung. In 1905, he posited that we build our
personalities on our innate preferences for making decisions (logic or emotion)
and for taking in information (sensed or intuited).

We combine Jung’s work with the coordinate system that Rene Descartes
invented in the 17th century to build a tool to analyze teams and leaders.
Combining tools from the 17th century and the dawn of the 20th century is not
“flavor of the month.” We organize everything we do, assessments, workshops,
coaching, and ad hoc “Context Shifting,” in this durable system. The Jungian-
Cartesian “4-D Organizing System” is in Figure 1.

FIGURE 1. “4-D” Organizing System

The system analyzes (separates into simpler components) the core aspects of
teams and leaders into four “Dimensions” (hence, “4-D”), seen in Figure 2.

FIGURE 2. The Four Dimensions
   • “Cultivating”: Appreciating other people
   • “Visioning”: Creating new solutions
   • “Including”: Appropriately including others
   • “Organizing”: Clarifying expectations

The Dimensions address fundamental human needs: to feel valued, to feel we
belong, to believe in a hopeful future, and to have clear expectations, with the
resources to meet them. Everyone wants workplaces and lives that meet these
basic human needs. We find that addressing all four of these Dimensions is both
necessary and sufficient for high performance. Our assessments, workshops,
and coaching all align around these four Dimensions.

Proving the 4-D Hypothesis
It is an interesting aspect of science that one can never actually prove a theory.
For example, there is no way to offer proof certain that Newton’s law of gravity is
correct. (Actually, it is not quite correct as Einstein discovered relativistic
corrections that have little to do with ordinary life.) Scientists believe laws are
correct when they repeatedly fail to disprove them. We cannot mathematically
prove that the four Dimensions are both necessary and sufficient. We can,
however, look at some research data and a real NASA project.

Some Research Data
The 1993 edition of The Leadership Challenge summarized empirical data on
leadership effectiveness combining:
   1) A 1,500-person survey by the American Management Association;
   2) A follow-up study of 80 senior executives in the federal government; and




   3) A study of 2,600 top-level managers who completed a checklist of
   superior leadership characteristics.

When asked, “What do you most admire in leaders?” these studies reported the
following:
     First, 80 percent of the respondents said honesty. We demonstrate our
       honesty by how truthfully we relate with others and how openly we include
       them. This is a good match to “Yellow” Including leadership;
     Second, 67 percent said competence (productive, efficient). This is a good
       match to “Orange” Directing leadership;
     Third, 62 percent said forward looking. This is a perfect match to “Blue”
       Visioning leadership; and
     Fourth, 58 percent said inspirational. Caring about other people and
       appreciating them is a most effective way to inspire people, a good match
       to “Green” Cultivating leadership.

Conclusion: The correlation of the 4-D System with research data is encouraging.
During the early development of this material, we also validated that addressing
the four dimensions was “necessary and sufficient” with Gallup’s research (A
Hard Look at Soft Numbers, 1999) and a NASA mission, the Compton Gamma
Ray Observatory (see How NASA Builds Teams, 2009).

Note: We introduced the color codes in workshops some years ago. Participants
preferred to use the color codes rather than the names of the Dimensions.

Today, the results from literally hundreds of NASA project, engineering and
management teams consistently validate the 4-D System—this is what matters
most.

   Requirement 2: Provide brief, clear and actionable Team and Individual
   behavioral Assessments

It is this simple. If you want the team context in Figure 3:

FIGURE 3. A High-performance Team Context
   • Mutual Respect & Enjoyable Work
   • Willing & Energizing Collaboration
   • Sustained, Effective Creativity
   • Seeing “Magical” Solutions
   • Authenticity & Aligned, Efficient Action
   • High Trustworthiness & Efficiency
   • Outcome Focus with no Blamers or Victims
   • Clear and Achievable Expectations

make the eight behaviors in Figure 4 habitual:


FIGURE 4. The Eight Supporting Behaviors
   • Express Authentic Appreciation
   • Address Shared Interests
   • Express Reality-based Optimism
   • Live 100% Committed
   • Appropriately Include Others
   • Keep All Your Agreements
   • Resist Blaming & Complaining
   • Clarify Roles, Accountability & Authority

Making these Behaviors Habitual
How do you make these behaviors habitual? Our clients combine 4-D
Assessments, Workshops, Coaching, and Reassessments as they choose and
the repetitive attention makes the behaviors habitual. As you will see,
Reassessments are the most cost-effective tool. We now examine the 4-D
assessment structure.

4-D Assessment Structure
Participants experience the following during their assessments:
   • After signing in, they read a context-setting introduction page.
   • They then assess each of the eight behaviors, including:
          o An explanation of why the behavior is important;
          o An example of the behavior from our experience;
          o A “standard” showing what “good” looks like (e.g. for Expresses
             Authentic Appreciation, the standard is: Habitually, Authentically,
             Promptly, Proportionally and Specifically (HAPPS));
          o A set of seven “radio buttons” ranging from “Fully meet the
             standard” to “Never meet the standard” that they click; and
          o An opportunity to add explanatory comments for each behavior.
The assessment completes with two additional broad questions. The “plus”
question asks, “What about the [ABC] team supports good performance?” and
the “delta” question asks, “What could the [ABC] team do to enhance
performance?” This latter question is a great source of action items, which we
urge each team to develop and assign during the assessment report briefing.

Quantifying the Assessment Results
We assign numerical values to each of the radio buttons. For example, we assign
a score of 100% to the choice of “Fully meets” because that is as good as a team
can be. We assign a score of 0% to “Never meets,” and intermediate scores to
the five choices in between.
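The arithmetic is simple enough to sketch. The code below is a minimal illustration, assuming the five intermediate buttons are spaced linearly between 0% and 100% (the paper does not give the exact intermediate values); the function names are ours.

```python
# Sketch of the scoring scheme described above. We assume the seven
# radio buttons map linearly from 100% ("Fully meet the standard",
# position 0) down to 0% ("Never meet the standard", position 6);
# the linear spacing is our assumption, not stated in the paper.
def button_score(position, n_buttons=7):
    """Map a radio-button position (0 = best .. 6 = worst) to a percent."""
    return 100.0 * (n_buttons - 1 - position) / (n_buttons - 1)

def team_average(ratings):
    """Average a team's ratings.

    `ratings` is a list of per-member lists of button positions, one per
    behavior -- typically 24 members x 8 behaviors = 192 data points.
    """
    scores = [button_score(p) for member in ratings for p in member]
    return sum(scores) / len(scores)
```

A team whose members all clicked the middle button would score 50%; mixed ratings average out the same way.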

We can then provide a number of statistical data products in 4-D assessment
reports. For example, we compute an average score for teams (and individuals).
A typical average score includes about 24 team members’ ratings of eight


behaviors (192 data points). Now we need a calibration scale to see if a given
score is OK or not. We use a histogram of hundreds of teams’ first assessment
scores to “benchmark” team assessment scores. For visual presentation
purposes, we draw a curve through the tops of these histograms and divide the
resulting figure into five quintiles, each with an equal number of teams (see
Figure 7, below.) We now examine the context of teams who score near the
bottom and top of the benchmarking scale.

Imagine two teams. Team “A” has a low assessment score, benchmarking near
the bottom of the curve. This team’s context is in Figure 5. The atmosphere is
one of mutual distrust, with little attention to shared interests, so conflict easily
ignites. People do not feel included, and rampant broken agreements destroy
trust. Members ignore realities in favor of blind optimism, which is willful
ignorance. People are blaming each other and victims are gathering in “clubs.”

FIGURE 5. Low Performance Context (Low Team Assessment Score)
   • Unappreciated & Conflict
   • Blind Optimism & Low Commitment
   • Feel Disincluded & Low Trust
   • Victims/Blamers & Disorganized

Team “B” has a high assessment score, benchmarking near the top of the curve.
This team has the context in Figure 6. The atmosphere is one of mutual respect,
and collaboration is good because people address the interests/needs they
share with others. Team members meet peoples’ needs for feeling included, and
agreements are rigorously kept, boosting trustworthiness. Team members fully
acknowledge unpleasant realities with a hopeful mindset. Team members are
100% committed to a successful outcome. Drama (blaming or complaining) is
not tolerated. Everyone is completely clear about what others expect from them
and they have the resources they need to succeed.

FIGURE 6. High Performance Context (High Team Assessment Score)
   • Mutual Respect & Collaboration
   • Grounded Optimism & High Commitment
   • Feel Included With High Trust
   • No Drama & Clear RAAs

Which team context would you rather work in? Which team is more likely to be
successful?

   Requirement 3: Provide quantitative data including performance
   benchmarking




Let us assume that an arbitrary team scores 78% using the methodology
described above. Recall that these scores are typically the average of 24
participants scoring eight behaviors. Is it a good score? To answer this question,
we need to compare it with “peer” teams using the benchmarking scale
discussed above. You can see in Figure 7 that a score of 78% places the
team in the middle of the “above average” quintile.
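Under the hood, the benchmarking step is a percentile lookup. Here is a sketch under our assumptions: sort the historical first-assessment scores, cut them into five equally populated groups, and report which quintile a new score falls into (the labels and the exact cut-point convention are ours, not the paper’s).

```python
# Sketch of the quintile benchmarking described above: historical
# first-assessment scores are split into five equally populated bins,
# and a new team's score is placed against those cut-points.
QUINTILE_LABELS = ["Bottom Quintile", "Below Average", "Average",
                   "Above Average", "Top Quintile"]

def quintile_edges(historical_scores):
    """Cut-points splitting the historical teams into five equal groups."""
    s = sorted(historical_scores)
    n = len(s)
    return [s[(i * n) // 5] for i in range(1, 5)]

def benchmark(score, historical_scores):
    """Return the quintile label for `score` against the history."""
    for label, edge in zip(QUINTILE_LABELS, quintile_edges(historical_scores)):
        if score < edge:
            return label
    return QUINTILE_LABELS[-1]
```

With a roughly uniform history of scores, a 78% score would land in the “Above Average” quintile, matching the placement described above.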

FIGURE 7. The Team Benchmarking Curve (300 Teams)
   Team Benchmarking Scale quintiles, left to right: Bottom Quintile, Below
   Average, Average, Above Average, Top Quintile (the 78% score is marked).


Is this score good enough? That is the team leader’s decision. Most NASA teams
proceed with a three-day workshop, followed by coaching and reassessments.
Do these stressed people find sufficient value to do additional work? We now
look at their voluntary participation.

   Requirement 4: Development processes must be sufficiently appealing that
   people want to use them.

We now examine NASA voluntary participation since spring 2003.

The voluntary adoption of these processes by NASA project, engineering and
management teams astounded even us. Here are some statistics for that period:
    1,126 Team Development Assessments
    11,965 Coaching Sessions
    6,990 Workshop Person-days
    4,419 Individual Development Assessments
This is voluntary participation of 10% of the total NASA workforce. Given that we
focus on project, engineering and management teams, this represents a
significantly higher percentage of NASA’s technical workforce.

   Requirement 5: Team members want to see progress that justifies their time
   “off the job.”

To see the progress most clearly, we first flattened the normal (i.e. bell-shaped)
team “first assessment” distribution curve into five equally spaced quintiles. You
can see this in Figure 8, with the bottom quintile (bottom 20%) colored black,
going gradually lighter to the top quintile (top 20%) colored white.




We then plotted the average scores of teams that began in each quintile with
reassessments. For example, Figure 8 shows the progress of the 40 (out of 200)
NASA teams that began in the bottom quintile. The gray diamond is the first
assessment, which you see in the bottom quintile. (Not all bottom-quintile teams
conducted reassessments, slightly offsetting the diamond in the quintile.) We
were startled to see the consistent and dramatic improvement from recurrent
15-minute assessment events.

FIGURE 8. Progress of Bottom-quintile Teams (40 Teams that began in the
Bottom Quintile)

Then we thought, well, these teams started in the bottom so it was relatively easy
for them to move up. We then looked at the teams that started in each of the five
quintiles. Here are those results in Figure 9.

The numbers are our estimates of the working efficiency of anything that
requires teamwork. You can read the logic of the estimate in How NASA Builds
Teams. This chart stimulated me to write How NASA Builds Teams. The
consistency of the improvement astounded me.

FIGURE 9. Progress of 198 Teams (198 NASA Teams With Multiple
Assessments)

Systemic Organizational Improvement
We found other interesting effects in our assessment data. One day, we noticed
an interesting trend. The first assessment of teams we had never previously
worked with seemed to be moving steadily higher, with an improvement of 10%
over the past seven years. We verified the statistical accuracy with the
“Student-t” test and wondered whether we were systemically enhancing NASA
teams’ behavioral norms. As trained professionals, we knew that “correlation is
not causality.”
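The paper does not say which form of the “Student-t” test was applied. One plausible reading is a two-sample comparison of early-period versus late-period first-assessment scores; the sketch below computes the pooled-variance form of that statistic under this assumption.

```python
import math

# Pooled two-sample Student's t statistic: one plausible form of the
# "Student-t" check mentioned above (the paper does not specify which
# variant was used). A large |t| suggests the late-period mean genuinely
# differs from the early-period mean.
def two_sample_t(early, late):
    na, nb = len(early), len(late)
    ma, mb = sum(early) / na, sum(late) / nb
    va = sum((x - ma) ** 2 for x in early) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in late) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (mb - ma) / math.sqrt(pooled * (1 / na + 1 / nb))
```

The resulting t value would then be compared against a t distribution with na + nb - 2 degrees of freedom; as the paper notes, even a significant result establishes correlation, not causality.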

A colleague recently reported that during a 4-D workshop at the Marshall Space
Flight Center, a participant leapt to his feet saying, “I have been away for 18
months and returned to a far better working environment. Now I see why.” We
also knew that we had done more assessments and workshops at Marshall.




We isolated their data and saw their first assessment scores had improved 20%
over the same period (Figure 10). We then noticed that we had engaged 10% of
NASA overall and 20% of the Marshall workforce. Based on these two data
points, it appears that the systemic (cultural) improvement is proportional to
participation!

FIGURE 10. Progress in First Team Scores (Average First TDA Scores per
Year, 2003–2008, with linear fits for Marshall (MSFC) and all NASA)

Conclusion
We began with five requirements for a teambuilding system:
  • Logical and durable;
  • Brief, clear, actionable Assessments;
  • Quantitative data;
  • Development processes people want to use; and
  • Results that justify their time “off the job.”

We met the requirements with teambuilding processes that are gradually
enhancing not just the teams engaged, but the entire Agency’s performance.
Although workshops and coaching are also powerful developmental tools, this
paper focused on Team Development Reassessments because they are so
surprisingly efficient. They work because they:
    – Teach while they are measuring;
    – Use repetition to reinforce learning; and
    – Use standards to measure, and show what “good” looks like.

Visit “4-D Systems” at NASAteambuilding.com (www.4-DSystems.com). If you
want to be a 4-D Network member, click on “More Information,” then “Member
Agreement.”




References
Coffman, Curt and Harter, Jim. “A Hard Look at Soft Numbers.” Dallas, TX:
       Nielson Group, 1999.
Gladwell, Malcolm. Outliers: The Story of Success. New York, NY: Little, Brown
       and Company, 2008.
Gladwell, Malcolm. The Tipping Point. New York, NY: Back Bay Books, 2002.
Kouzes, James and Posner, Barry. The Leadership Challenge (4th ed.). San
       Francisco, CA: Jossey-Bass, 2008.
Pellerin, Charles. How NASA Builds Teams: Mission Critical Soft Skills for
       Scientists, Engineers, and Project Teams. Hoboken, NJ: Wiley and Sons,
       2009.
Vaughan, Diane. The Challenger Launch Decision: Risky Technology, Culture,
       and Deviance at NASA. Chicago, IL: University of Chicago Press, 1996.



