                                        PBL Metrics Guide
                                        TABLE OF CONTENTS

Chapter/
Section           Title                                                                                   Page

Chapter 1 – BACKGROUND ............................................................................... 1
Chapter 2 – PURPOSE ........................................................................................ 2
Chapter 3 – IMPLEMENTATION CONCERNS .................................................... 3
   a.    Early Planning for Data Collection.............................................................. 3
   b.    Data Sources and Limitations .................................................................... 3
   c.    Burden on Field Units ................................................................................. 3
   d.    Automated Sources/Automated Data Recording ........................................ 4
   e.    Negative Analysis ....................................................................................... 4
   f.    Policy and Doctrine versus Warfighter PBA ............................................... 4
   g.    Data Review Boards .................................................................................. 4
   h.    Early determination of System Definition and Usage Factors .................... 4
   i.    Accounting for Model and Configuration Differences ................................. 4
   j.    Sample Data and Extrapolation versus Total Population ........................... 4
   k.    Funding Resources .................................................................................... 4
Chapter 4 – WHITE PAPER FORMAT ................................................................ 5
Chapter 5 – WHITE PAPERS .............................................................................. 8
   Section A – OPERATIONAL AVAILABILITY ..................................................... 9
   Section B – MISSION RELIABILITY................................................................ 20
   Section C – COST PER UNIT USAGE............................................................ 32
   Section D – LOGISTICS FOOTPRINT ............................................................ 41
   Section E – LOGISTICS RESPONSE TIME ................................................... 53
Chapter 6 – METRICS SPREADSHEETS ......................................................... 69
Chapter 7 – APPENDICES ................................................................................ 70
   Appendix A – Acronyms .................................................................................. 71
   Appendix B – Hierarchy Diagrams .................................................................. 75
   Appendix C – References ............................................................................... 78
                          Chapter 1 – BACKGROUND

Based upon the DUSD (AT&L) memo dated 16 August 2004, subject:
Performance Based Logistics: Purchasing Using Performance Based Criteria,
“performance” is defined in terms of military objectives using five criteria. The
five criteria are: Logistics Response Time (LRT), Logistics Footprint, Operational
Availability (Ao), Cost per Unit Usage (CPUU), and Mission Reliability (MR). A
22 November 2005 DUSD (AT&L) memo defined a sixth metric: Total Life Cycle
Systems Cost per Unit Usage.

The Joint Requirements Oversight Council (JROC) Memorandum, Key
Performance Parameter Study Recommendations and Implementation, 17
August 2006, directs that a mandatory Sustainment KPP (Materiel Availability)
and two mandatory supporting Key System Attributes (KSAs), Materiel Reliability
and Ownership Cost, be developed for all JROC Interest programs involving
materiel solutions. Materiel Availability will be addressed in the Operational
Availability section of this guide. Materiel Reliability and Ownership Cost will be
addressed in the Mission Reliability and Cost Per Unit Usage Sections,
respectively, of this guide.




                              Chapter 2 – PURPOSE

The purpose of this guidebook is to document a “menu” of supporting metrics
and to provide a summary of discussion points and considerations that form a
basis for selecting metrics to use in PBAs. This guide is not meant to limit PMs
or to restrict the use of other metrics, but to provide PMs a tool with suggested
metrics.

Each of the overarching criteria/metrics is broken down to include sub-elements,
definition, formula, data source, and proponent. Data sources are addressed
only briefly. The product of this effort is a series of spreadsheets containing this
data. These spreadsheets are located in Chapter 6 of this guide.




                 Chapter 3 – IMPLEMENTATION CONCERNS

The Performance Based Agreement (PBA) is a written agreement between the
Product Support Integrator (PSI) and Product Support Provider(s) (PSP) that
defines the PSPs’ outcomes and the negotiated metrics that will measure their
success. A PBA will also exist between the Warfighter and the PM/PSI. The
metrics identified in this document should be used to assist the PM in choosing
the correct metric(s) to evaluate a PBL program.

PBL must not become merely a matter of least cost against minimum readiness
criteria. It is important to be able to distinguish the nature of each metric’s
requirements for deployed units versus those engaged in normal peacetime
training. Metrics may not always quantify the benefits of a more responsive and
robust reliability design and support approach for the Warfighter. The intent is
not to suggest that metrics are unimportant. Rather, the intent is to show
that reducing the burden on the Warfighter, or providing mission flexibility by
reducing the maintenance burden or increasing reliability, is not
something that can be valued in an exact trade-off. Judgment will always be
necessary in the decision process. The metrics are predicated on Warfighter
needs and the ability to capture the supporting data.

Each of the following implementation concerns will be addressed in paragraph f.
of the white papers. This section addresses concerns that are not metric-
specific.

       a.      Early Planning for Data Collection: Early planning for data
collection will be necessary and should examine each element and metric as to
the source of data and what factors will bear on its accuracy. While collection of
all elements is beneficial, those response elements that are discriminators should
be the focus of the PBL metrics planning approach. This is best accomplished
by engaging the User community in that discussion as early as possible in the
acquisition life cycle.

        b.    Data Sources and Limitations: Each data source must be
examined to ensure the accuracy and reporting frequency are consistent with the
measurement objectives included in the PBA. An essential component is to
recognize the limitations that are inherent or potential in each data source for
the element or metric. One limitation or caution the PM must be aware of is
data collection for other-service-owned systems where the Army is Executive
Service/Total Life Cycle System Manager (TLCSM). Although some models are
listed as data sources, they may serve as data outputs where applicable.

      c.   Burden on Field Units: The collection of data will ideally be a
minimum burden on field units.




        d.     Automated Sources/Automated Data Recording: Wherever
possible, an automated source of data or established data collection system,
directly related to the metric, should be used.

      e.       Negative Analysis: A recommended approach to planning is to
examine what possibly could cause a flawed or inaccurate data result. Incorrect
data will lead to incorrect assessment of the metric.

        f.     Policy and Doctrine versus Warfighter PBA: An important
consideration is whether the proposed Warfighter PBA imposes on either the
Warfighter or logistics provider some responsibilities that are in conflict with
logistics policy and doctrine (i.e., financial management guidance or other
operational policies).

       g.      Data Review Boards: A Standard Operating Procedure (SOP) for
the Data Review Board should be established. This should clearly state how
data will be reviewed and how membership will be selected and maintained. The
data review board and membership should be established by the PM.

       h.    Early determination of System Definition and Usage Factors: All
measurable metrics should be identified early in the life cycle. When reviewing
the Usage factors, the relevant range of use is also a planning factor. A
baseline should be established for each metric.

       i.     Accounting for Model and Configuration Differences: Multiple
system configurations may provide different response time characteristics.
Those differences need to be accounted for in the data collection scheme as well
as in evaluation and analysis.

        j.    Sample Data and Extrapolation versus Total Population: When
collecting sample data it is important to recognize the statistical standard error of
measurement. Data analyzers need to be aware of the collection and evaluation
methods.

        k.    Funding Resources: Resources used to enhance the logistics
enterprise and warfighting capabilities may be justified by improved Warfighter
readiness. The decision as to who will fund the data collection effort should be
determined at the earliest stage and should be written into the initial agreement,
to include every phase of the life cycle.




                   Chapter 4 – WHITE PAPER FORMAT

A.   Concept

B.   Definition and Formula

     a.    Definition
     b.    Formula

C.   Considerations

     a.    Advantages
     b.    Risks
     c.    Assumptions

D.   Supporting Elements

E.   Implementation

     a.    Early Planning for Data Collection:

           1.     Where does this metric fit into the life cycle?

           2.     When should data be collected for this metric?

           3.     What impact would not collecting this data have on the
                  weapon system, Product Support Integrator (PSI), Product
                  Support Provider (PSP), PM, customer, etc?

            4.     What type of data should you collect in each phase of the
                   life cycle?

           5.     What impact will early data collection have on peace and
                  wartime scenarios?

     b.    Data Source and Limitation:

           1.     Identify who is responsible for collecting and reporting the
                  data to the PSI, Provider, PM, customer, military personnel,
                  government agencies, etc.

           2.     What mechanism will be used to collect, report, retrieve, and
                  maintain data for this metric? How reliable is the data?

     c.    Burden on Field Units:




             1.     Identify if the Soldier is going to be required to collect the
                    data.

             2.     Can the Soldier collect this data in both war and peacetime?

             3.     Will the collection of this data be under STAMIS or a
                    stovepipe system?

             4.     Will the data collected be transferable via system to system,
                    PM to PM, etc.?

      d.     Automated Sources/Automated Data Recording:

             1.     Can data be automatically collected, maintained, and
                    retrieved? How?

             2.     State differences and errors that may be detected during
                    automatic retrieval or recording.

             3.     When can the data be automatically obtained?

        e.     Negative Analysis: What are the impacts, on all parties concerned,
if the proper data is not collected at the right time?

      f.     Policy and Doctrine versus Warfighter PBA:

             1.     What policies may be affected concerning this metric?

             2.     Are there existing regulations/policy/doctrine that contain
                    more metric specific information/ guidance?

      g.     Data Review Boards (DRB): Who will evaluate the metric?

      h.     Early determination of System Definition and Usage Factors:

              1.     Are there caveats/concerns for specific systems (e.g.,
                     communication systems vs. aircraft)?

             2.     Clearly identify the usage factors for this metric.

             3.     Are there system of systems issues that need to be
                    addressed?

      i.     Accounting for Model and Configuration Differences:




             1.    What impact do different configurations have on data being
                   collected, reported and/or retrieved?

             2.    Are all configurations being reported in the same manner?

      j.     Sample Data versus Total Population: When is it feasible to use
sample sizes versus total population?

      k.     Funding Resources

             1.    What impact will collecting data have on funding?

             2.    Who is responsible for funding the data collection effort?

F.    Implementation Concerns

G.    Summary




                     Chapter 5 – WHITE PAPERS

Section   Metric                                       Page

A         Operational Availability                      9

B         Mission Reliability                          20

C         Cost Per Unit Usage                          32

D         Logistics Footprint                          41

E         Logistics Response Time                      53




                   Section A – OPERATIONAL AVAILABILITY

A.     Concept

        A primary goal of PBL is to cost effectively improve reliability,
supportability, and maintainability. Operational Availability (Ao) is a weapon
system or system of systems (SoS) readiness indicator that explicitly
considers the interactions among reliability, maintainability, and supportability
in keeping weapon systems available to perform their missions.

The CJCSM added Materiel Availability as a mandatory KPP. KPPs are those
attributes or characteristics of a system that are considered critical or essential to
the development of an effective military capability and those attributes that make
a significant contribution to the characteristics of the future joint force as defined
in the Capstone Concept for Joint Operations (CCJO).

CJCSM defines Materiel Availability as a measure of the percentage of the total
inventory of a system that is operationally capable (ready for tasking) of
performing an assigned mission at a given time, based on materiel condition.
Materiel Availability also indicates the percentage of time that a system is
operationally capable of performing an assigned mission, and can be expressed
as uptime/(uptime + downtime). This Materiel Availability definition and formula
coincide with the operational availability information that follows.
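The two views of Materiel Availability above (a fleet snapshot and a time-based
measure) can be sketched in Python. The figures below are invented for
illustration and are not drawn from the guide:

```python
# Invented example figures; not from the guide.

# Fleet snapshot view: percent of total inventory ready for tasking.
ready_for_tasking = 85
total_inventory = 100
materiel_availability_fleet = ready_for_tasking / total_inventory

# Time-based view for a single system: uptime / (uptime + downtime).
uptime_hours = 850.0
downtime_hours = 150.0
materiel_availability_time = uptime_hours / (uptime_hours + downtime_hours)

print(materiel_availability_fleet)  # 0.85
print(materiel_availability_time)   # 0.85
```

Both views reduce to the same ratio when the fleet is observed at a single
instant, which is why the definition and formula coincide.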

B.     Definition and Formula

       a.     Definition

              The percent of time that a weapon system or SoS is mission
              capable

       b.     Formula

              Over any period of time, the directly measured Ao (post-fielding) is:

              Ao = Up Time / Total Time = Up Time / (Up Time + Down Time)

              The expected long-term, steady-state Ao (throughout the life cycle)
              is determined from the classic formula:

              Ao = MTBF/ (MTBF + MTTR + MLDT)

              Where,
                 MTBF = Mean Time Between Failures
                 MTTR = Mean Time To Repair
                 MLDT = Mean Logistics Delay Time
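The two Ao formulas above can be sketched directly in Python; the numerical
values are invented for illustration:

```python
def ao_measured(up_time, down_time):
    """Directly measured Ao over any period: Up Time / (Up Time + Down Time)."""
    return up_time / (up_time + down_time)

def ao_steady_state(mtbf, mttr, mldt):
    """Expected long-term, steady-state Ao: MTBF / (MTBF + MTTR + MLDT)."""
    return mtbf / (mtbf + mttr + mldt)

# Invented example: 900 hours up and 100 hours down in the period.
print(ao_measured(900, 100))        # 0.9

# Invented example: MTBF = 450 h, MTTR = 10 h, MLDT = 40 h.
print(ao_steady_state(450, 10, 40)) # 0.9
```

The steady-state form shows how reliability improvements (raising MTBF) and
logistics delay reductions (lowering MLDT) trade off in reaching an Ao target.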


C.   Considerations

     a.   Advantages

          •    Ability to tie reliability, maintainability, and supportability
               decisions directly to operational performance of weapon
               systems or SoS
          •    Trade off reliability, maintainability, and spares support costs
               to best achieve Ao targets
               o    Reliability improvements versus life cycle spares
                    savings
               o    Readiness Based Sparing (RBS)
               o    Level of Repair Analyses (LORA) that determine the
                    most cost effective maintenance policies
          •    The Army has standard Ao-driven RBS and LORA models
               that are available for use by Government and contractors
               o    The Army’s standard RBS model is the Selected
                    Essential-Item Stockage for Availability Method
                    (SESAME) model, which can determine:

                    -    The most cost effective set of spares and repair
                         parts to support an Ao target;
                    -    The expected Ao that will be achieved from an
                         input set of spares and repair parts;
                    -    The most cost effective plus-up of spares to
                         increase Ao.

               o    The Army’s standard LORA model is the
                    Computerized Optimization Model for Predicting and
                    Analyzing Support Structures (COMPASS), which
                    determines the most efficient maintenance policies to
                    achieve an Ao target.

      b.   Risks

           •    When observed Ao is significantly below or above
                expectations, it may be difficult to drill down to determine
                root causes
           •    Evaluation tools need to be used to determine whether the
                observed Ao has been achieved efficiently
           •    If Army Standard Sparing to Availability (STA) models are
                not used after Milestone B and prior to fielding, the
                Warfighter could have lower readiness rates for the money
                spent on supportability, or the Government will significantly
                overspend to attain a desired readiness rate. Moreover, it is
                critical that the Army’s RBS models be exercised as early
                as possible in the system’s acquisition life cycle. By using
                these tools early in the life cycle, logistics support
                alternatives can be traded off without making large financial
                and time investments. Just as importantly, these tools
                should be used as the system is refined and the data
                matures.
      c.   Assumptions

           •    Contractors can be made responsible for the Ao factors
                under their control
           •    Data are available to utilize RBS and LORA models
           •    A Product Support Integrator (PSI) may utilize the Ao (SoS)
                to ascertain and define a PBL support package that enables
                a Unit of Action to successfully conduct the full range of
                tactical missions
           •    The Total Life Cycle System Manager (TLCSM) may utilize
                the Ao (SoS) in the PBA between the PM and Warfighter as
                a system-of-systems PBL strategy to help flow down
                performance criteria to a potential PSI
           •    PSI, Product Support Providers, and Third Party Logistics
                providers running non-Army-standard STA models will meet
                Army modeling and simulation requirements
           •    The LORA is an integral tool in support of the BCA
           •    The conduct of a LORA and the use of the SESAME model
                are required by AR 700-127

D.   Supporting Elements

     •    Mean Time Between Failures (MTBF): Mean time or mileage
          between system aborts. Aborts are incidents that cause a system
          to be unable to start, to be withdrawn, or to be unable to complete
          a mission.

     •    Mean Time to Repair (MTTR): The mean time to diagnose,
          remove, and replace faulty spares at a specific level of
          maintenance (focused on task duration).

     •    Mean Logistics Delay Time (MLDT): The mean time to obtain a
          serviceable spare part.

     •    Stock Availability at designated level of supply (SA):
          Percentage of time that an order for a Line Replaceable Unit (LRU)
          can be filled immediately at the designated level of supply support.

     •    Mean Calendar Time Between Failures (MCTBF): The average
          calendar time between failures causing down time. This measure
          may be applied when operating usage metrics are not reported.

     •    Mean System Restoral Time (MSRT): The mean calendar time to
          restore a system after it fails.

     •    Mean Restoral Delay Time (MRDT): The average amount of
          down time per failure, not due to the system’s designed
          maintainability, to restore a system when appropriate spares are
          available forward to repair the system.

     •    Mean admin Delay Time (MadmDT): The average period of down
          time awaiting logistics resources other than spare parts or travel
          time for maintenance. It includes time awaiting availability of
          qualified maintenance personnel, support equipment, facilities, etc.

     •    Mean Outside Assistance Delay Time (MOADT): The average
          time awaiting maintenance from other locations when it is not
          available with the system. Examples include a contractor
          maintenance team or field repair team traveling to the system
          operating site to perform maintenance, or evacuation of systems to
          a maintenance facility and return to the operational site.

     •    Mean Supply Response Time (MSRT) at Forward Level: The
          average time awaiting forward level supply, when not forward with
          the system, to receive an available spare LRU to accomplish system
          level repair. Examples include the shipping of forward level
          spares to the system or contact team travel to the system with
          spares.

     •    Mean Time to Obtain Back Orders (MTTOBO): Average time to
          fill a back order at the wholesale supply level.

     •    Customer Wait Time (CWT): The supply chain performance
          metric which measures total customer response time (the time
          required to satisfy a supply request from the end user level). CWT
          measures pipeline performance from the unit’s perspective.

     •    Order and Ship Time (OST) to a designated level of supply:
          Average time from order placement to receiving the shipment at the
          designated supply level.

     •    Requisition Wait Time (RWT): An Army supply chain metric
          which measures the elapsed time required to satisfy an SSA
          requisition that must be sourced from either the wholesale or
          referral process. RWT measures source-of-fill performance from
          the SSA perspective.

     •    Retrograde Ship Time (RST) or Total Retrograde Time (TRT):
          The average elapsed time from an item failure to the receipt of the
          item by the maintenance echelon specified to repair the item.

     •    Turn Around Time (TAT): The average time required to receive
          an item from a unit, perform repairs on the item, and make the item
          available to the unit or place the serviceable item back into the
          inventory.

     •    Administrative Logistics Delay Time (ALDT): The average down
          time per failure not due to the system’s designed maintainability.

     •    Controlled Substitution Rate: A measure of the number of
          controlled substitutions per time period for a fleet of vehicles. This
          number may be used as a means of comparison over a series of
          previous reporting periods to identify any trends in supply within a
          fleet.

     •    Hardware Corrective Maintenance Ao (Ao from Sparing): The
          percent of time that a weapon system is available, considering only
          hardware failures that require spares and/or corrective
          maintenance to restore the system.

     •    Failure Factor: The average number of critical item demands per
          100 end items per year.
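The Failure Factor definition above is simple normalization arithmetic; a short
sketch with invented numbers:

```python
def failure_factor(demands, end_items, years):
    """Average number of critical item demands per 100 end items per year."""
    return demands / end_items / years * 100.0

# Invented example: 360 critical item demands across a fleet of 240
# end items, observed over 1.5 years.
print(failure_factor(360, 240, 1.5))  # 100.0
```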

Maintenance Task Analysis Elements

            o     Operational Readiness Rate (ORR): The experienced
                  probability that reported weapon systems are considered up
                  for the day.

            o     Full Mission Capability Rate (FMC): The experienced
                  percent of time that a system, with all its supporting
                  subsystems, is fully functional. The equipment has to be on-
                  hand and able to perform its combat mission.

            o     Not Mission Capable Supply (NMCS): The time (days or
                  hours) the system is inoperable due to delays in
                  maintenance that are attributable to delays in obtaining
                  parts.

            o     Not Mission Capable Maintenance (NMCM): The time
                  (days or hours) the system is inoperable due to delays in
                  maintenance that are attributable to delays in obtaining
                  maintenance resources (personnel, equipment, or facilities).

            o     Partial Mission Capable (PMC): The percent of time the
                  system is not fully functioning, but the critical supporting
                  subsystems are functional.
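One way to see how these maintenance task analysis elements fit together is to
decompose a reporting period for a single system. The figures below are
hypothetical, not prescribed by the guide:

```python
# Hypothetical 30-day reporting period for one reported system.
days = {
    "FMC": 24,   # fully mission capable
    "PMC": 2,    # partially mission capable
    "NMCS": 3,   # not mission capable, supply
    "NMCM": 1,   # not mission capable, maintenance
}
total_days = sum(days.values())

# FMC rate: percent of time the system was fully functional.
fmc_rate = days["FMC"] / total_days

# Mission capable ("up") days exclude NMCS and NMCM time.
mission_capable_rate = (days["FMC"] + days["PMC"]) / total_days

print(fmc_rate)              # 0.8
print(mission_capable_rate)  # approximates the ORR notion of "up for the day"
```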

      The following pulse availability metrics are expected to be used on
      the Future Combat System (FCS) and Army Transformation to
      Modularity:

            o     Ao SoS Index: Quantifies the support requirements for a
                  system of systems or family of systems to conduct the full
                  range of tactical missions over the duration of the combat
                  pulse. The Ao (SoS) Index requires reliability, availability,
                  and maintainability (RAM) data and level of repair analysis
                  data for implementation. It will include supporting metrics
                  such as Network-enabled Ao, Administrative and Logistics
                  Down Time (ALDT), and Maintenance Ratio. It is a timely
                  tool as the Army institutes the strategic organizational
                  adjustments that transform it into a modular force structure
                  capable of integrating weapon systems in accordance with
                  Combatant Commander Battlespace strategies. The Ao
                  (SoS) Index provides situational awareness of a single
                  platform in a Unit, or of the entire Unit or Modular
                  organizational structure.

            o     Average Pulse Availability: The average percentage of a
                  force that is mission capable over the course of a combat
                  pulse. It measures the average level of combat power
                  available during a combat pulse.

            o     Minimum Pulse Availability: The minimum level of pulse
                  availability (% of force that is mission capable) that a force is
                  expected to maintain over the course of a combat pulse. It
                  measures the minimum expected level of combat power that
                  will be available over the course of a combat pulse, and
                  provides insight into the minimum level of equipment and
                  tactical footprint availability necessary to keep a combat
                  force effective.
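Average and Minimum Pulse Availability can be computed directly from a time
series of mission capable fractions over a combat pulse; the daily values below
are invented:

```python
# Invented daily fraction of the force that is mission capable
# over a six-day combat pulse.
mc_fraction = [0.95, 0.92, 0.88, 0.85, 0.90, 0.93]

# Average Pulse Availability: mean level of combat power over the pulse.
average_pulse_availability = sum(mc_fraction) / len(mc_fraction)

# Minimum Pulse Availability: lowest level of combat power over the pulse.
minimum_pulse_availability = min(mc_fraction)

print(round(average_pulse_availability, 3))  # 0.905
print(minimum_pulse_availability)            # 0.85
```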

E.   Implementation

     a.    Early Planning for Data Collection:


     1.    Where does this metric fit into the life cycle?

           Ao is applicable throughout the entire life cycle.

     2.    When should data be collected for this metric?

            Between Milestones A and B, use sample data or
            engineering estimates. Prior to Milestone C, you may want
            to use demand or history data collected during test, or data
            from systems fielded under urgency. Between Milestone B
            and fielding, use the SESAME or COMPASS models. The
            SESAME model may also be used after fielding to plus up
            sparing if desired readiness levels are not being achieved.

     3.    What impact would not collecting this data have on the
           weapon system, PSI, Provider, PM, customer, etc.?


           Not collecting Ao data throughout the life cycle may
           negatively impact the design and quality of parts used as
           well as the materiel used during manufacturing.

b.   Data Source and Limitation:

     1.    Who is responsible for collecting and reporting the data
           to the PSI, Provider, PM, customer, military personnel,
           government agencies, etc.?

            All parties involved may be responsible for collecting and
            reporting the Ao data. Who collects, evaluates, and reports
            performance readiness data depends on the stage in the life
            cycle.

     2.    What mechanism will be used to collect, report, retrieve,
           and maintain data for this metric?

            The performance readiness data should be available from
            several STAMIS databases that report MTBF, MTTR, MDT,
            MTBM, etc. Ao for systems undergoing initial issue
            fielding, or for post-fielded systems, can also be captured
            using approved DoD mathematical models (SESAME,
            COMPASS, etc.). Pre-fielding and engineering data used to
            determine Ao will be replaced with more reliable data as
            post-test and demand history data become available. The
            PSI has the autonomy to select the model that provides the
            optimal opportunity to determine PBL functions that may be
            transferred to a Statement of Work or Statement of
            Objectives under a competitive selection process.

c.   Burden on Field Units:

     1.     Identify if the Soldier is going to be required to collect
            the data.

             After a system has been fielded, the Warfighter will be
             responsible for reporting some data required to determine
             the Ao of fielded systems. The data required may be
             annotated on DA maintenance and repair forms. The
             required data will help determine the mean time between
             failures as well as the mean down time. (Reference: AR
             700-138, DA PAM 738).

     2.     Will the collection of this data be under STAMIS or a
            stovepipe system?

             This data will also be reported to LOGSA and retrievable
             through a STAMIS database. The data is as reliable as the
             information reported on DA Form 2407, DA Form 2404, DA
             Form 2406, DA Form 1352, or through the Unit Level
             Logistics System (ULLS).

     3.     Will the data collected be transferable via system to
            system, PM to PM, etc.?

            Ao collected on one system and received at LOGSA is
            transferable for use on a similar system or for evaluations by
            PSI, unit commanders, PMs, etc.

d.   Automated Sources/ Automated Data Recording: Can data be
     automatically collected, maintained, and retrieved?

     During Milestones A and B the data may not be automatically
     retrievable since the system is in the initial stages of the life cycle.
     However, once systems have completed testing and the system is
     being used by the Warfighter, the Ao data is collected within
     Platform Soldier Mission Readiness System (PS MRS) and
     Logistics Decision Support System (LDSS) via GCSS-Army and
     ULLS (PS MRS and LDSS are FCS products under development).




e.   Negative Analysis: State the impact, on all parties concerned,
     if the proper data is not collected at the right time, by the right
     person (or system).

     If Ao data is not collected and evaluated at the earliest stages of
     the life cycle, a basis will not be available to determine early design
     improvements in reliability, maintainability, or supportability of a
     weapon system. This may cause a system or SoS to be fielded
     without taking advantage of the life cycle phases that offer the best
     opportunity to reduce acquisition and sustainment cost. Moreover,
     the Army is mandated to quantify the life cycle cost associated
     with a system. The use of the COMPASS and SESAME models
     aids in quantifying these costs.

f.   Policy and Doctrine: What policies may be affected concerning
     this metric?

     All PBL actions taken should be within regulation. The policy
     governing Ao is DoD 5000 and the PBL guide. Ao may be
     impacted by Army policy concerning contractors on the battlefield.

g.   Data Review Boards: Who will evaluate the metric?

     The point in the life cycle will determine the
     composition of the data review board. Prior to MS C, a system
     evaluation will be conducted based upon the results of testing that
     evaluates the reliability, availability, and maintainability of a weapon
     system. However, after testing and fielding, the PSI or PM will
     determine how the Ao metric will be defined and evaluated. The
     Data Review membership should include representatives from
     Quality Assurance, Quality Control, Engineering, G-3/S-3, G-4/S-4,
     G-8, and Resource Management. ATEC (including DTC, OTC, and
     AEC) should also be given the opportunity for membership on the
     DRB.

h.   Early Determination of System Definition and Usage Factors:
     Are there system of systems issues that need to be
     addressed?

     System of systems considerations will be made if systems are
     dependent on other systems, PMs, contractors, or Soldiers.

i.   Accounting for Model and Configuration Differences:




     1.     If there are different configurations in the field or that
            are planned to be fielded, what impact does this have on
            data being collected, reported and/or retrieved?

            For configurations that vary the number of subassemblies
            used per system, the weighted average LRU failure factor
            input may be used for those LRUs within a subassembly with
            varied quantities. For technology insertion configuration
            changes to the hardware later in the life cycle, a SESAME
            run with the new configuration is needed to evaluate Ao or
            determine plus-up sparing mix requirements for the new
            system configuration.
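The weighted-average idea can be sketched as follows; the quantities and failure factors are hypothetical, and SESAME's actual inputs are considerably more detailed:

```python
# Hypothetical example: the same LRU appears in different quantities
# per subassembly depending on configuration. Weight each
# configuration's contribution by its share of the fielded population
# to produce a single averaged failure factor input.

def weighted_failure_factor(configs):
    """configs: list of (fleet_count, lru_qty, failures_per_lru_per_year)."""
    total = sum(count for count, _, _ in configs)
    return sum(count * qty * ff for count, qty, ff in configs) / total

# e.g., 60 systems carrying 2 LRUs each and 40 systems carrying 3,
# each LRU failing 0.5 times per year
ff = weighted_failure_factor([(60, 2, 0.5), (40, 3, 0.5)])
```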

            For a system using some common items that go into other
            systems used by a Unit, the system Ao may be allocated to
            both the system unique items and common items used in the
            system. The Achieving a System Operational Availability
            Requirement (ASOAR) model or methodology may be
            exercised to estimate a cost effective Ao allocation or two
            separate SESAME runs may be performed.

     2.     Will there be a different PSP responsible for a new
            configuration/ modification?

            The PSI has configuration management authority over the model
            selected that provides the best method for optimizing Ao.

j.   Sample Data and Extrapolation versus Total Population: When
     is it feasible to use sample sizes versus total population?

     The use and size of sample data is system and deployment
     dependent. Using a sample instead of the total population is
     recommended for ease of collection and reduced cost. The sample
     size needs to be established in the PBA and should be
     statistically significant to allow for meaningful extrapolation across
     the total population.
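One conventional way to size a statistically significant sample is the standard formula for estimating a proportion; this is a generic statistics sketch, not an Army-prescribed method:

```python
import math

def required_sample_size(confidence_z, margin_of_error, p=0.5, population=None):
    """Sample size for estimating a proportion. p=0.5 is the
    conservative (worst-case) assumption; when the fleet size is
    known, the finite-population correction reduces the requirement."""
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)   # finite-population correction
    return math.ceil(n)

# 95% confidence (z = 1.96), +/-5% margin of error, fleet of 2,000 systems
n = required_sample_size(1.96, 0.05, population=2000)
```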

k.   Funding Resources:

     1.     How will the collection, reporting, retrieval, and
            maintenance of data be funded?

            RDTE 6.3a funds during the Technology Development Phase;
            RDTE 6.3b funds during the System Development and
            Demonstration Phase; Procurement funds during the Production
            and Deployment Phase; TRM dollars during the Operations and
            Support Phase.



             2.     Who is responsible for funding the data collection
                    effort?

                    The PM is responsible for funding the data collection effort
                    for contract support.

F.    Implementation Concerns

      •     The PBA must specify the expected cost to the government of
            achieving the target Ao. The primary goal of PBL, when using the
            Ao metric, is to effectively and efficiently achieve the Ao target.
      •     The PBA must be specific about all assumptions and constraints
            used in determining the cost to the government to achieve the Ao
            target specified in the agreement:

                  •   Force structure
                  •   Logistics footprint
                  •   Reliability estimates
                  •   Maintainability/Repair estimates
                  •   Supply Chain parameters
                  •   Other

      •     The PM should utilize the Army standard Ao driven models, under
            the PBA specified assumptions and constraints, to determine
            whether cost estimates for achieving the Ao targets are reasonable.
            This should be done as early as possible in the acquisition process.
      •     Judgment will be necessary in the PBL decision process to
            consider readiness, cost and performance trade-offs before
            requiring an Ao at a given cost range to the government. Ao should
            be measured over a set period of time. Effective and efficient use
            of models at least cost to the government is key.
      •     Prior to Milestone B, PBL needs to be addressed to ensure that
            supportability and PBL are related to the systems engineering and
            requirements processes to more effectively improve reliability,
            maintainability, availability and readiness and reduce life cycle cost
            and strategic, operational and tactical logistics footprint based on
            an optimal sparing mix.

G.    Summary

       The Ao metric requires the use of Army standard sparing to availability
models. The Ao Index is a capable PBL tool that helps establish the functions a
PSI must undertake to effectively meet the Warfighter’s combat performance
requirements.



                       Section B – MISSION RELIABILITY

A.     Concept

       As Performance-Based Logistics (PBL) seeks to gain tangible logistics
performance as opposed to merely predicting performance, it is necessary to be
able to measure Mission Reliability (MR) under real conditions of use. Therefore,
the MR concept is to focus on metrics issues for a weapons system once it has
been fielded and not in a formal test environment.

The CJCSM identifies Materiel Reliability as a mandatory Key System Attribute
(KSA). KSAs are those system attributes considered most critical or essential for
an effective military capability, but not selected as a KPP. Materiel Reliability is
defined as a measure of the probability that the system will perform without
failure over a specific interval. Materiel Reliability is generally expressed in terms
of a mean time between failure(s) (MTBF), and once operational can be
measured by dividing actual operating hours by the number of failures
experienced during a specific interval. The information that follows addressing
Mission Reliability can also be applied to Materiel Reliability.
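The measurement just described, plus the link between MTBF and the probability of failure-free operation over an interval, can be sketched as follows. The exponential (constant failure rate) model in the second function is a common engineering assumption, not part of the CJCSM definition:

```python
import math

# Materiel Reliability as defined above: MTBF is actual operating
# hours divided by failures over the interval. Under an assumed
# exponential failure model, the probability of completing a mission
# of length t without failure is exp(-t / MTBF).

def mtbf(operating_hours, failures):
    return operating_hours / failures

def prob_no_failure(mission_hours, mtbf_hours):
    return math.exp(-mission_hours / mtbf_hours)

m = mtbf(operating_hours=1200, failures=4)            # 300 hours
r = prob_no_failure(mission_hours=30, mtbf_hours=m)
```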

The application of the MR metric will require a rigorous approach to defining the
detailed components of the metric; planning for data collection, reporting and
analysis; and implementation into a feasible Performance Based Agreement
(PBA). This will require a disciplined method to determine the supporting factors
of MR. The “System” must be defined to allow consistency in the measurement
process. The system could be a group of items (e.g., fleet, single Weapons
System, or critical End Item). Also required is a detailed lay down of what
Operational Performance (OP) represents. Furthermore, the mission must be
defined to establish the boundaries and duration for which we are measuring OP.

Early in the program, the systems characteristics that will enable both a high
operationally reliable system and provide health monitoring information for
reliability should be identified. These characteristics should be included in
specifications, contracts, and requirements documents.

B.     Definition and Formula

       a.     Definition

              The measure(s) or ability of a system to achieve Operational
              Performance (OP) for a defined mission or specified mission profile.

       b.     Formula

               MR =     Number of successful missions
                      ----------------------------------------------
                        Number of attempted missions


                   Alternate Use of Formula: this formula may be used where
                   discrete mission success does not provide the best
                   meaning for this metric, and MR success is better
                   measured as the percentage of Mission Duration (MD)
                   during which OP was achieved. Therefore:

                    MR = Total Operational Performance (TOP) for MD
                           ----------------------------------------------------------
                                Total Mission Duration (TMD)
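Both forms of the formula can be expressed directly; the values below are illustrative only:

```python
# Discrete form: ratio of successful to attempted missions.
def mission_reliability(successful, attempted):
    return successful / attempted

# Alternate form: total operational performance achieved during the
# mission durations, as a fraction of total mission duration (TMD).
def mission_reliability_duration(top_hours, tmd_hours):
    return top_hours / tmd_hours

mr = mission_reliability(successful=18, attempted=20)
mr_alt = mission_reliability_duration(top_hours=95, tmd_hours=100)
```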

C.   Considerations

     a.   Advantages

            •      Use of MR incentivizes the continuous improvement of
                   reliability, as opposed to improving availability by providing
                   additional spares.
            •      Use of this metric will provide insight into the underlying
                   causes of inadequate MR and identify the components that
                   require improvement.
            •      If this metric is applied to different usage profiles, then it will
                   permit visibility into areas such as Wartime/Peacetime
                   impact on MR. This could allow further analysis of potential
                   wartime stress on equipment, increased OPTEMPO impact
                   on MR, and the useful life of a weapons system.
            •      Data from this metric will be valuable for improving
                   requirements and design for Reliability, Availability &
                   Maintainability (RAM) engineering practices for future
                   systems. It will also serve planners by providing better data to
                   support Surge and Deployment preparations.

     b.   Risks

            •      Software Failures: The identification of software failures as
                   opposed to hardware failures, and how software failures will
                   be evaluated and corrected in a relevant fashion (e.g.,
                   advisory technical instructions to allow users to avoid failure
                   modes), are complicating factors in applying the
                   measurement.
            •      Measurement in Wartime: How the incident data is
                   collected will determine how fragile the metrics process is
                   when converting from a peacetime mode to a wartime mode.
                   Also, to the extent that any aspect of this collection and
                   measurement process represents extra activity for
                   warfighting organizations over and above the direct and
                   essential warfighting mission, data collection in wartime has
                   to be subjected to a common sense realism test. Wartime
                   may constrain the ability to perform to serviceability
                   standards (e.g., preventive and corrective maintenance
                   standards) assumed in definitions.
            •      Costs and Resources: Care should be taken to consider
                   the costs and resources required to collect accurate
                   subordinate metrics for MDs and failure incident data at the
                   lowest level.
            •      Recognizing Systems: It may be difficult to distinguish Active
                   Army versus Reserve and National Guard (NG) systems and
                   systems in Float or Pre-positioned Status. An actual
                   operating unit (i.e., hours, miles, rounds, etc.) needs to be
                   known, not just failures.
            •      Tech Insertion/Configuration Control: It is important to
                   understand whether different technical versions or
                   production lots are sources of failure during systems
                   evaluation and/or analysis of collected data.

     c.    Assumptions

            The system can be defined in a manner such that failure to
            achieve operational performance is relevant.

            •      All the data elements of failure and mission can be collected.
            •      Adequate discipline can be instituted in the data collection
                   process, the review process (of collected data), and the
                   analysis process.

D.   Supporting Elements
      Subordinate metrics serve to determine failures and to establish
      the measurement base against which MR failures are held.

      •     System (Sys): a single end item, a group of like end items, or
            some other warfighting operational combination (e.g., a number of
            combat vehicles, aviation sortie, or fleet).

      •     Operational Performance (OP): functional achievement of a
            Warfighter capability where the inability to achieve that capability
            reflects failure.

      •     Mission (Msn): the established mission profile or measurement
            base for which the “system” must provide OP to be characterized
            as meeting Warfighter requirements.

      •     Successful Missions (Scsf Msns): Those missions for which the
            System Definition has met Operational Performance requirements
            within the Mission Profile as limited by the Defined Mission
            definition.

      •     Attempted Missions (Attpt Msns): The total missions for which
            the System Definition has been undertaken against the Mission
            Profile as limited by the Defined Mission definition. This includes
            missions that did not meet Operational Performance requirements.

      •     Mean Time Between Operational Mission Failure (MTBOMF):
            The average operating hours between occurrences of
            operational mission failure. This measure can only be applied
            when operating hours are reported.

      •     Mission Duration (MD): The average operating hours to
            accomplish a mission.

      •     Mean Time Between System Abort (MTBSA): Mean time or
            mileage between system aborts. Aborts are incidents that cause a
            system to be unable to start, be withdrawn, or be unable to
            complete a mission.

      •     System Abort (SA): An event that results in the loss or
            degradation of an essential function(s) that renders the system
            unable to enter service, causes immediate removal from service,
            deadlines the platform, or makes it non-mission capable.

      •     Mean Calendar Time Between Failures (MCTBF): The average
            calendar time between failures causing down time.

      •     Mean Time Between Failures (MTBF): The average operating
            hours between failures of all critical items in the weapon system
            that are serially configured.
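Several of these subordinate metrics are simple ratios of accumulated operating time to accumulated event counts. A minimal sketch, with a hypothetical event-log layout:

```python
# Hypothetical event log: each record is (operating_hours_in_period,
# failures_in_period, aborts_in_period). MTBF and MTBSA are then
# ratios of total operating hours to total event counts; None is
# returned when no events of a type occurred.

def fleet_metrics(records):
    hours = sum(r[0] for r in records)
    failures = sum(r[1] for r in records)
    aborts = sum(r[2] for r in records)
    return {
        "MTBF": hours / failures if failures else None,
        "MTBSA": hours / aborts if aborts else None,
    }

# two reporting periods: 400 h with 2 failures/1 abort, 600 h with 3/1
metrics = fleet_metrics([(400, 2, 1), (600, 3, 1)])
```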

E.   Implementation

     a.   Early Planning for Data Collection:

          1.    When should data be collected for this metric?

                A data collection plan should be developed as early as
                practical in the PBL process. This plan should be
                established during the Pre Milestone A Phase of the
                Acquisition Life Cycle and the plan should be updated as the
                metric transitions from a candidate metric to inclusion in a
                PBA.



     2.    What type data should you collect in each phase of the
           life cycle?

            Consider the following in the plan:
                 •     Failures and Incident Data: In planning MR data
                       collection, consider the current collection capability
                       and what is emerging as future or planned
                       capabilities for business systems. In particular,
                       examine the technology component, as sometimes
                       failure incident information can be determined “off
                       equipment” from failed Line Replaceable Units
                       (LRUs).

                 •     Mission: Consider how MD (e.g., Operating Units) will
                       be collected. Also, consider how conditions of use
                       will be validated (e.g., temperature range).

                 •     Reporting of Data Collection: Data must be reported
                       in a way that permits analysis and review board
                       investigation and discussion. Timeliness of reporting
                       is a consideration in developing the data collection
                       plan. Consider the similarities and differences
                       between standard readiness reporting and PBL
                       metrics. An investigation and/or evaluation should be
                       done to determine if the data can be collected directly
                       from the Weapons System.

     3.    What impact will early data collection have on peace and
           wartime scenarios?

           Consider what the impact is to data collection
           and analysis for systems deployed in War or hostile
           environments vs. those used in a training or a more
           permissive environment.

b.   Data Source and Limitation:

     1.    Identify who is responsible for collecting and reporting
           the data to the PSI, Provider, PM, customer, military
           personnel, government agencies, etc. What mechanism
           will be used to collect, report, retrieve, and maintain
           data for this metric?

                 •     Leveraging Technology in the Weapons System. The
                       first consideration should be an attempt to determine
                       what data can be directly accumulated from a
                       weapons system. This can be as simple as an
                       odometer reading or as complex as a system’s
                       diagnostics being transmitted across a
                       communication path back to a collection point that
                       collects both mission duration and failures. The
                       discussion about definitions of MR should guide the
                       process of determining performance-related design
                       requirements for embedded MR data collection
                       capabilities. Cautions still exist in factoring in human
                       elements and other issues beyond the scope of the
                       technology.
                 •     Leveraging the Current Logistics Business Systems.
                       Current business systems can collect data through
                       maintenance reporting processes. The gap between
                       the data currently available and the data required may
                       be bridged through agreements to provide
                       supplemental data, either through standard collection
                       methods or by additional bolt-on processes and
                       systems. A laydown of the Standard Army
                       Maintenance Systems (STAMIS) related factors for a
                       particular system (e.g., Unit Level Logistics System
                       (ULLS) or Logistics Integrated Data Base (LIDB))
                       should be done in the early stages of PBL metric
                       definition and PBA planning.

     2.    How reliable is the data?

            The MR metric is heavily dependent on accurate mission,
            MD, and failure incident information to calculate a correct
            metric value. When using a manually entered STAMIS data
           collection system, a review should be undertaken to
           determine how the data is going to be collected, entered and
           retrieved. Through the PBA, it may be necessary to
           emphasize any procedural requirements to ensure data
           fidelity. Where possible, consider employing additional data
           sources that can assist in identifying manually introduced
           errors. A key item to remember when considering how to
           collect and retrieve data is to minimize the burden on the
           Warfighter for these efforts.

c.   Burden on Field Units:

     1.    Identify if the Soldier is going to be required to collect
           the data.

            One potential data source for MR is OR reports, which
            are generated from Unit Status Reports, DA Form 2715 (AR
            220-1, Unit Status Reporting). This data is gathered
            monthly and is based on input from all staff sections of Units.
            The report provides broad readiness-related information that
            includes information on personnel and logistics support.
            When choosing to use OR reporting data in MR metric
            calculations, caution must be employed. This data might not
            be correctly applied to the formula and may be limited by the
            reporting frequency and content.

            Leveraging the existing unit’s operational and logistics
            procedures to the maximum extent possible for MR requires
            an understanding of the data entry task. This means
            determining if a Soldier-dependent data source (e.g.,
            Equipment Log Book) will have sufficiently accurate data to
            provide accurate information associated with the above
            mentioned supporting metrics.

     2.   Can the Soldier collect this data in both war and
          peacetime?

          Wartime data collection may be a challenge because it will
          likely not be a Theater Command priority to expend
          resources to collect this data. If the data collection
          procedure or system that provides the MD, failure data and
          supporting information relies on peacetime or a garrison
          environment, it will not translate accurately into wartime.
          Operational Readiness reports are applicable to Wartime,
          but some of the same issues about failure and mission data
          may also apply.

     3.   Will the collection of this data be under STAMIS or a
          stovepipe system? How reliable is the data?

          The data needed from a STAMIS or a stove-pipe system
          must relate specifics about MD or missions to failures and
          OP. Also, these must be directly traceable to a specific
          unique identifiable item of equipment. Operational
          Readiness reports are available through LOGSA and may
          have information directly associated with MR issues. The
          data is as reliable as the unit commander’s reporting the
          data. The use of STAMIS reports is encouraged.

d.   Automated Sources/Automated Data Recording:

     1.   Can data be automatically collected, maintained, and
          retrieved?




           When collecting operating time or failure incidents, the
           weapons system may provide diagnostics or automated
           incident reporting that can be captured and reported to a
           data collection site.

     2.    What type of data collection device should be used?

           When using a manually entered STAMIS data collection
           system, a review should be undertaken of how that data is
           entered. Through the PBA, it may be necessary to
           emphasize any procedural requirements to ensure data
           fidelity. Where possible, consider employing additional data
           sources that can assist in identifying manually introduced
           errors. A key item to remember is not to induce any
           substantial burden to Warfighters.

     3.    State any differences and risks that may be detected
           during automatic data retrieval or recording.

            Embedded capability and health monitoring systems must be
            tested for accuracy and usability in the MR metric or the result
            will be flawed. The STAMIS-related systems carry user data
            entry error risk. It must be noted that under certain
            circumstances, the STAMIS data itself may not be
            convertible into an MR metrics formula.

     4.    When can the data be automatically obtained?

           The MR Metric needs to collect data at point of ownership.
           One issue is how to apply the MR Metric to what’s currently
           available for automation and reporting (A&R). The MR
           metric does have some potential to take advantage of
           embedded capability (e.g., counting mechanisms like
           elapsed time, diagnostics, etc).

e.   Negative Analysis: State the impact, on all parties concerned,
     if the proper data is not collected at the right time, by the right
     person (or system).

     Flawed manual entry, misclassification of failures, and flaws in
     calculation logic when determining a parametric value like Mean
     Time Between Failure (MTBF) are negative events. A
     recommended approach to planning is to examine what possibly
     could cause a flawed or inaccurate data result. Steps then can be
     included to eliminate or mitigate the cause of the inaccuracy or a
     different data collection approach may be deemed appropriate.


      The analysis of data should be conducted by a team of Subject
      Matter Experts (SMEs) knowledgeable in the areas of functional
      data collection or analytical processes that will culminate in the MR
      metric for a particular Weapon System.

f.   Policy and Doctrine versus Warfighter PBA: What policies may
     be affected concerning this metric?

     Policies governing Unit Status Reporting are covered in ARs 220-1
     and 700-138, Army Logistics Readiness and Sustainability, which
     provide all the information required to gather and manage the data
     for readiness reporting. Other logistics policies and procedures
     (e.g., AR 750-1, Army Materiel Maintenance Policy, and DA PAM
     738-750/751, Functional User’s Manual for the Army Maintenance
     Management System (TAMMS)/-Aviation (TAMMS-A)) should be
     considered in developing the Warfighter PBA. PBL PBAs should
     not conflict with a unit’s compliance with Army regulations.

g.   Data Review Boards (DRBs): Who will evaluate the metric?

     It is likely that some data review will be necessary to scrub raw data
     findings and compute real values. The frequency of this should be
     dictated by PBL strategy. The raw data should be reviewed for
     clarity, completeness and accuracy as close to the point of time of
     incident as possible. The DRB would check to make sure that data
     coming in is consistent with the standards established and tailored
     for the particular application of the MR metric against the Weapons
     System in question.

      An additional DRB function could be reviewing multiple measures.
      These could be multiple measures of performance, or multiple
      capabilities, that must be calculated against what is representative
      of “meeting mission success objectives.” Performance data collection and review
     methodology is under the purview of the PM/PSI. SIPT members
     are candidates for DRB membership. ATEC (including DTC, OTC,
     and AEC) should also be given the opportunity for membership on
     the DRB. If the quality of the data used to calculate the metrics is
     questionable or if collection methods have proven to be unreliable
     in the past, then a DRB should be convened regularly. The
     participants and allowable data transformations shall be clearly
     defined in the PBA and approved by all stakeholders.

h.   Early Determination of System Definition and Usage Factors:
     Are there caveats/ concerns for specific systems?




     A system could be a single end item, a network, a group of like end
     items or some other warfighting operational combination. This is
     agreed upon early on in the life cycle. Depending on the definition
     of the systems, the reliability metrics could be widely different.
     What constitutes the mission of the system also needs to be
     defined and agreed upon early on to allow the establishment of
     reliability metrics. The factors that affect the mission (e.g., amount
     of time between missions allowing repairs or Preventive
     Maintenance Checks and Services (PMCS), maintenance ratio,
     etc.) also need to be defined. The types of failures that are
     considered in the MR performance need to be defined.

i.   Accounting for Model and Configuration Differences:

     1.     State if there are different configurations in the field or
            that are planned to be fielded.

            As systems progress through the life cycle, multiple
            configurations are inevitable. Each configuration can and
            will have different reliability characteristics. It is important to
            identify unique models through the assignment of the End
            Item Code (EIC).

     2.     What impact do different configurations have on data
            being collected, reported and/or retrieved?

            For the purpose of metrics tracking for PBL, the STAMIS
            systems are capable of identifying and reporting
            supportability data for most fielded weapon systems. To
            ensure this capability for all weapon systems in the future,
            PMs should assign unique EICs for every different model
            configuration fielded.

j.   Sample Data and Extrapolation versus Total Population: When
     is it feasible to use sample sizes versus total population?

     Gathering sample data is worthwhile when measurement
     techniques do not lend themselves to the population of data (i.e.,
     total performance). This would require a statistical extrapolation in
     formula output to determine what the value represents. This is
     equivalent to Sample Data Collection programs. The risk is
     determining whether the “sample” is truly representative of what
     performance will be or is across the deployed population. If we are
     measuring a partial deployment of a weapons system (e.g., less
     than the total density after all fieldings are completed), then a




          population measure may still be appropriate if it reflects the
          performance expectation.
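Whether a sample is "truly representative" can be bounded with a standard confidence interval on the observed mission-success rate. This is a generic statistics sketch, not a prescribed Army procedure:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for an
    observed success proportion; z = 1.96 gives ~95% confidence."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# 45 successes in 50 sampled missions: point estimate 0.90, with the
# interval indicating the plausible range across the total population
low, high = proportion_ci(45, 50)
```

A narrow interval supports extrapolating the sample value across the fleet; a wide one signals that the sample is too small for the claimed precision.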

     k.   Funding Resources:

          1.     What impact will collecting data have on funding?

                  Under a PBL strategy, the funding stream for AMC OMA will
                  change: instead of AMC receiving the OMA funding for the
                  systems, this funding will go to the PMs. Every PBA
                  candidate must be reviewed on a case-by-case basis to
                  evaluate funding responsibilities and requirements. The PBA
                  is directly related to whatever the PM wants to purchase. In
                  the generic sense, if the effort is
                 sustainment-related, then the proper funds (i.e., OMA)
                 should pay. If it is an investment-type effort (e.g., testing,
                 engineering services, modification, or initial spares), the
                 proper investment funds (i.e., procurement or RDTE) should
                 be used.

          2.     How will the collection, reporting, retrieval, and
                 maintenance of data be funded?

                 Even though AMC is interested from the LCMC perspective,
                 it would not be tracking execution. In cases where the PM is
                 providing the funding, the PM/PSI would need to collect data
                 and track execution. If data collection is for sustainment
                 efforts, the sustainment funds would pay. If it is for
                 investment efforts, then investment funds would pay. The
                 owner of the metric and whoever requests and/or requires
                 the data should be the one funding the costs. The
                 arrangement of how the funding is administered and
                 executed should be defined in the PBA.

F.   Implementation Concerns

         To gain sufficient data for the MR metric across its different
          potential applications, it will be important to determine whether the
          gaps between the current STAMIS and its policy/procedures and
          the functional use of Unique Identification/Automatic Identification
          and its policy/procedures can be bridged.
         The institutional nature of self-reporting by field units as well as
          other organizational entities is a challenge. This may include
          aspects (e.g., failures) being attributable to accidents or improper
          use, lack of preventative maintenance, or failure to collect accurate
          operating hours and conditions.


            The ability to measure and gain accurate information and review
             data circumstances (e.g., use outside mission constraints like
             temperature range, abuse, and unqualified operators) from field
             sources without undue labor may be a constraining factor in
             applying this metric.
            Some factors bearing on product support providers’ performance
             assume that maintenance is performed according to prescribed
             serviceability standards. This is a measurement challenge and
             potentially a constraint on its application.

G.    Summary

       Mission Reliability is a key but complex metric. It requires advanced
planning and methodical implementation to become a successfully applied metric
in a PBL Strategy.




                      Section C – COST PER UNIT USAGE

A.     Concept

        The objective is to collect all operating and support cost data and
elements at the lowest level (e.g., unit level or end item level) required to
maintain and sustain a weapon system, while recognizing data collection and
supporting element limitations. Collecting data at the lowest level of detail will
require significant resources in labor and in automated systems designed for
collection. By collecting all data elements at the lowest level, one can determine
the cost drivers and focus on those costs that are the primary discriminators for
that particular system. This will allow a focus on PBL Cost/Usage Discriminators
for the PBA.

        The CJCSM mandates Ownership Cost as a KSA. Ownership Cost
provides balance to the Sustainment solution by ensuring that the Operations
and Support (O&S) costs associated with materiel readiness are considered in
making decisions. Ownership Cost varies from the Cost Per Unit Usage metric
in that only the following cost elements are required for the KSA: 2.0 Unit
Operations (2.1.1 Energy only: fuel, petroleum, oil, lubricants, electricity); 3.0
Maintenance (all); 4.0 Sustaining Support (all except 4.1, System Specific
Training); and 5.0 Continuing System Improvements (all). The Cost Per Unit
Usage supporting elements are listed below in Section D.

        Cost Per Unit Usage should discretely capture costs most relevant and
applicable to the operations and maintenance of the particular system. The
Operating and Support (O&S) cost factors are detailed in section D. These costs
will be measured against a usage factor such as miles, hours, rounds, etc. to
develop a Cost Per Unit Usage. It is also important to be able to distinguish the
costs of systems deployed versus those engaged in normal peacetime training.

B.     Definition and Formula

       a.     Definition

              The total Operating and Support costs, to include overhead and
              management costs, for a weapon system usage attributable to a
              given unit of usage under established conditions. Usage can be
              measured in terms of unit density or individual weapons system;
              usage factors include miles, rounds, launches, flight hours, time,
              systems, etc.

       b.     Formula

                        Total Operating & Support Costs
                       Miles/Rounds/Launches/Flight hours
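In practice the formula reduces to summing the collected O&S cost elements for a reporting period and dividing by the usage recorded over the same period. The element names and figures below are hypothetical, not drawn from any Army cost database:

```python
# Hypothetical O&S cost elements (dollars) for one reporting period.
os_costs = {
    "operating": 1_200_000,          # fuel, POL, other consumables
    "maintenance": 850_000,          # field/sustainment labor and materials
    "spares_repair_parts": 640_000,
    "facilities": 110_000,
    "indirect": 95_000,              # disposal, training support, etc.
}

usage = 48_000  # usage factor for the same period (e.g., miles driven)

total_os_cost = sum(os_costs.values())
cost_per_unit_usage = total_os_cost / usage

print(f"Total O&S cost: ${total_os_cost:,}")
print(f"Cost per mile:  ${cost_per_unit_usage:.2f}")
```

Keeping the elements discrete, as Section D recommends, lets the same data reveal which element is the dominant cost driver.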



C.    Considerations

      a.     Advantages

                     Opportunity to discover cost drivers. Collecting data at the
                      lowest level of detail will allow visibility of the main cost
                      drivers.
                     Visibility into wartime/peacetime impacts. This capability
                      will allow further analysis of potential wartime stress on
                      equipment, RESET costs, and the impact of increased
                      OPTEMPO on the economic useful life of a weapon
                      system.
                     Improved cost estimating for future systems.
                     Better data to support surge and deployment.

      b.     Risks

                     Cost and resource burden to collect accurate costs at the
                      lowest level (e.g., unit level or end item level).
                     Wartime data collection is problematic and not a Theater
                      Command priority.
                     Difficulty distinguishing Active Army versus Reserve and
                      National Guard systems, and systems in Float or
                      Prepositioned status.
                     Technology insertion/configuration control and its relation
                      to sustainment costs.
                     Cost element collection for systems owned by other
                      Services where the Army is Executive Service/TLCSM.

      c.     Assumptions

                    Requirement and specification for data collection will be
                     written into initial Contract for new systems.
                    Cost collection techniques, collected data, and actual costs
                     per activity are accurate and valid.
                    Can distinguish between costs associated with wartime and
                     peacetime; e.g., collect cost data of deployed units
                     separately from units engaged in peacetime training.

D.    Supporting Elements

        All operating and support costs must be captured at the lowest level
(e.g., unit level or end item level) required to maintain and sustain a weapon
system, including the costs most relevant and applicable to the operations and
maintenance of the particular system. This measure may include operations and
maintenance personnel, consumables and repair parts, POL, depot-level
reparables, training munitions, field and sustainment maintenance labor and
materials, contractor support labor and materials (to include warranty support),
support equipment, modification kits, engineering support, software support,
simulator operations, facilities, packaging, handling, shipping, and transportation,
as well as other indirect costs such as disposal and training support as
applicable. These costs will be measured against a usage factor such as miles,
hours, or rounds.

             Number of Systems: Density or number of Systems at fleet level,
              variant, end item or unit level as applicable to each PBA. For each
              PBA, the scope of the cost collection should be determined up
              front.

             Usage Factors: Usage factors include such data as miles, rounds,
              hours of usage, and profile of use such as off road/on road,
              regional factors, or environmental factors (e.g., hot/humid).

             Operating Costs: The costs associated with personnel and
              materials necessary to meet combat readiness, unit training, and
              administrative requirements, exclusive of maintenance and
              deployment costs. This cost includes such factors as fuel or
              other operating consumables. It must be determined whether
              operating crew manpower and personnel are a discriminator for
              PBL.

             Maintenance Costs: Costs exclusive of spare and repair costs
              attributable to maintenance actions. Include such costs as
              applicable for Scheduled Maintenance, Field Maintenance and
              Sustaining Maintenance (to include labor, materiel, and overhead
              costs that support the replication, distribution, installation, training
              and maintenance of software). Exclude those costs accounted for
              in other cost categories such as repair of an LRU to return to stock.
              Costs can be accumulated to provide an actual cost accumulation
              or to determine factors that can be applied to frequency of
              maintenance. On a case-by-case basis, it can be determined if
              Support and Test Equipment costs should be included in overall
               maintenance cost. Distinguish between costs associated with
               development change (non-recurring) as opposed to recurring costs.
               In cases where there is either Contractor Logistics Support or a
               mix of organic and contractor maintenance being performed, the
               contractor repair costs will need to be captured.

             Frequency of Maintenance: This is the measure of frequency of
              maintenance actions such as mean time between maintenance
              actions or removals so that frequency may be applied to a
              maintenance cost factor. This may be useful if overall cost
    collection is impractical or subject to inaccuracy and a factor such
    as a standard charge for maintainer time provides a better metric.
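A minimal sketch of this factor-based alternative, using hypothetical figures rather than Army standard rates, derives an expected maintenance cost from operating hours, a mean time between maintenance actions, and a standard maintainer charge:

```python
# All figures are illustrative assumptions.
operating_hours = 12_000      # fleet operating hours in the period
mtbma_hours = 150             # mean time between maintenance actions
hours_per_action = 4.5        # average maintainer hours per action
standard_labor_rate = 85.0    # standard charge per maintainer hour ($)

expected_actions = operating_hours / mtbma_hours
estimated_cost = expected_actions * hours_per_action * standard_labor_rate

print(f"Expected maintenance actions: {expected_actions:.0f}")
print(f"Estimated maintenance cost:  ${estimated_cost:,.2f}")
```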

   Spare and Repair Parts Cost: Alternative pricing methods include
    using standard or exchange pricing to account for all costs
    associated with providing those spare and repair parts or including
    individual elements of supply chain costs such as acquisition costs,
    item management costs, and other types of surcharge. Must be
    sure to designate whether it is cost to repair, cost to put spares on
    shelves (to include inventory holding costs, bin costs, etc.), cost of
    a new item, etc.

   Facilities Cost: Determine those facilities costs that are
    discriminators in sustainment. This should include only those
    personnel and costs directly affected by a change in the number of
    systems to include base operating support and property
    maintenance. For example, there may be an overhead or an
    investment cost of providing mobile and/or fixed facilities (such as
    for overhaul or special storage) or expense costs for maintaining
    such capability.

             Recurring Operating Costs: Other costs of operating the system,
              exclusive of maintenance, supply, and initial deployment costs.
              This includes such factors as fuel or other operating
              consumables and considers costs for personnel operating the
              system or supporting the program.

             Initial Deployment Costs: Other costs of operating the system,
              exclusive of maintenance, supply, and recurring operating costs.

   Indirect and Other: This includes other costs such as disposal or
    demilitarization costs incurred during the Operations and Support
    Phase of the Life Cycle. Also included are indirect costs such as
    training and technical assistance costs needed to maintain skills.
    Costs of operating the training base may be factored into
    manpower and personnel costs as overhead or accumulated
    directly where they may be considered a performance based cost
    discriminator.

          Disposal Costs: Costs associated with Demilitarizing and
           Disposing Equipment. A negative cost is a salvage value.

          Non-logistics Costs: Costs that support the development
           and production of equipment.




               Costs influenced by Operational Availability (Ao),
                Reliability, Maintainability, and Supportability: Costs that
                directly influence Level of Repair Analysis (LORA)
                maintenance and supply sparing mix decisions to minimize
                support costs to an Ao goal. This excludes those costs that
                do not change when maintenance and supply support
                concepts vary.

E.   Implementation

     a.   Early Planning for Data Collection:

          1.    Where does this metric fit into the life cycle?

                This metric fits into every phase of the life cycle, from
                concept development to disposal.

          2.    When should data be collected for this metric?

                Operating and support costs should be scrutinized from the
                start as they have implications across the entire life-cycle of
                the system under consideration.

          3.    What impact would not collecting this data have on the
                weapon system, Product Support Integrator (PSI),
                Provider, PM, customer, etc.?

                The impact of not collecting this data includes: no visibility of
                cost drivers; no back-up data for cost reduction efforts; and
                an inaccurate and incomplete Life Cycle Cost Estimate.

          4.    What type data should you collect in each phase of the
                life cycle?

                Projected costs should be collected at every milestone prior
                to initial fielding; actual costs should be collected within a
                year of initial fielding.

          5.    What impact will early data collection have on peace and
                wartime scenarios?

                Data collection will facilitate estimating costs of wartime
                versus peacetime.

          6.    What data is required to evaluate this metric?




           Data required to evaluate this metric includes force structure,
           mission readiness, quality targets, reliability, and operational
           availability.

b.   Data Sources and Limitations:

     1.    Identify who is responsible for collecting and reporting
           the data.

           From the start, the PM should develop a map of who will be
           providing support during every phase of the life cycle; e.g.,
           contractor or organic support only, combination of organic
           and contractor support, etc.

     2.    What mechanism will be used to collect, report, retrieve,
           and maintain data for this metric?

           The mechanism for data collection may include STAMIS,
           manual reporting, automation (to include AIT, memory
           buttons), sample data collection or Field Exercise Data
           Collection. The type of mechanism is dependent upon
           population size, relative cost of support and resources
           available. Standardization of methodology and mechanism
           for collection is essential to ensure accuracy and
           completeness of data.

     3.    How reliable is the data?

           Use of automation (vice manual reporting) should improve
           the reliability of the data.

c.   Burden on Field Units:

     1.    Identify if the Soldier is going to be required to collect
           the data.

           Additional burden on the Soldier to collect data should be
           minimized.

     2.    Will the collection of this data be under STAMIS or a
           stovepipe system?

           Most organic support data can be retrieved through STAMIS;
           however, contractor support data must be written into the
           initial contract.




      3.     Will the data collected be transferable via system to
             system, PM to PM, etc.?

             As the use of automation increases, the level of data
             reliability should also increase. Data collection methodology
             should be consistent from system to system as well as from
             PM to PM.

d.   Automated Sources/Automated Data Recording:

     1.     Can data be automatically collected, maintained, and
            retrieved? What type of data collection device will be
            used?

            The extent to which automation will be used is dependent
            upon resources available to cover start-up costs as well as
            operation and maintenance of automation. Data collection
            devices may include unique ID technology, bar-coding,
            marking or memory buttons.

      2.     What differences and errors may be detected during
             automated retrieval or recording?

             Automated reporting should be more reliable than manual
             recording; however, risks may include the impact of
             modifications and maintenance, and security issues related
             to transmitting data in wartime conditions.

     3.     When can the data be automatically obtained?

            Data should be evaluated across several levels to include
            strategic, tactical, and operational.

e.   Negative Analysis: State the impact, on all parties concerned,
     if the proper data is not collected at the right time, by the right
     person (or system).

     If the data collection is not timely, it will have an adverse impact on
     budget planning, execution analysis, and cost reduction studies.

f.   Policy and Doctrine versus Warfighter PBA: Are there existing
     regulations/policies/doctrine that contain more metric-specific
     information/guidance?

     All PBL actions taken should be within regulation. It is imperative
     that policy and doctrine be developed and in place before actual
     implementation. The PM must document changes in policy at the
     earliest stages and look for opportunities to improve current
     doctrine.

g.   Data Review Boards: Who will evaluate the metric?

     A review board will evaluate the metric. This board should include
     representatives from HQDA G-4, G-8, CE, AMC G-4, G-8, LOGSA,
     PM G-8, PM QSA/QC, PM Engineering, Logistics, RM, and other
     representatives from the costing community. ATEC (including DTC,
     OTC, and AEC) should also be given the opportunity for
     membership on the DRB. The board should be established to
     ensure that goals are being met, and should first meet once one
     year of actual data is available following initial fielding, as part of
     the System Readiness Review (SRR) process.

h.   Early determination of System Definition and Usage Factors:

     1.     Are there caveats/concerns for specific systems?

            The challenge of data collection for specific systems, such
            as communications or aircraft systems, may be increased
            due to swiftly changing technology.

     2.     Are there system of systems issues that need to be
            addressed?

            For a system of systems (e.g., Future Combat Systems
            (FCS)), it is imperative that data be collected at the lowest
            level in order to identify data unique to each component of
            the system. Another challenge is that one unique usage
            factor may not apply to all components within a system or
            system of systems.

i.   Accounting for Model and Configuration Differences: What
     impact do different configurations have on data being
     collected, reported and/or retrieved?

     Configurations/modifications should be mapped out at the earliest
     stage, and their impact on data collection should be identified.

j.   Sample Data and Extrapolation versus Total Population: When
     is it feasible to use sample sizes versus total population?

     The decision of whether to use sample data collection vice total
     population data is dependent upon the size of the population, the
     relative cost of support, and the level of effort involved in collection.


      k.     Funding Resources:

             1.     What impact will collecting data have on funding?

                    A robust and accurate data collection effort will result in
                    more accurate programming estimation and a higher
                    confidence level in budget requests.

             2.     How will the collection, reporting, retrieval, and
                    maintenance of data be funded? Who is responsible?

                    The decision on who will fund the data collection effort
                    should be determined at the earliest stage and should be
                    written into the initial agreement, to include every phase of
                    the life cycle.

F.    Implementation Concerns

            Availability of financial and manpower resources necessary for data
             collection and validation
            Limitations, functional and technical, of the current Standard Army
             Management Information System (STAMIS)
            The level of data that can be procured through the Contractor for
             Fixed Price Contracts
            Wartime measurement challenges
            Availability of cost estimates for all Operating and Support (O&S)
             elements
            Restrictions on obtaining Proprietary Data may limit access
             to contractor internal costs
            The institutional nature of self-reporting by field units as well as
             other organizational entities is a challenge

G.    Summary

       All operating and support costs incurred to maintain and sustain a
weapon system must be identified and recorded. The cost elements should be
discrete enough, and described in writing in sufficient detail, to form a basis for
establishing cost drivers and output products.




                      Section D – LOGISTICS FOOTPRINT

A.     Concept

        The logistics footprint metric is a composite metric impacted by the other
four overarching metrics: operational availability (Ao), mission reliability (MR),
logistics response time (LRT), and Cost Per Unit Usage. It measures the area,
volume, weight, and personnel of the total logistics support required to move,
maintain, and sustain any warfighting force. The objective is to right-size the
logistics support necessary to sustain an effective operational force. Logistics
footprint should be evaluated throughout the life cycle to ensure that changes or
revisions can be identified and implemented to allow for timely improvements to
meet performance requirements.

       At a minimum, measures of performance include: inventory/equipment,
petroleum, oil, lubricants (POL), parts and software support, personnel, facilities,
and transportation. In order to use Logistics Footprint as a metric in a PBA, a
baseline must be established.

        The preferred approach is to collect all data and elements at the lowest
level (e.g., unit level or end item level) with the focus on Logistics Footprint
components (e.g.: Spares, Tools and Test Measurement and Diagnostic
Equipment (TMDE), Mobile Facilities, Consumables, etc.) applicable to a weapon
system. This may include that which is applicable to Unit operation, Unit
maintenance, and supply support organizations. By collecting all data elements
at the lowest level, one can determine which elements impact logistics footprint
and are therefore primary discriminators for that particular system.

B.     Definition and Formula

       a.     Definition

               The size of the government/contractor logistics support required to
               deploy, sustain, and move a weapon system for a given mission
               profile. Measurable elements should include, but not be limited to:
               inventory/equipment, personnel, facilities, transportation assets,
               supply, and real estate. Measures should quantify the footprint
               (i.e., weight, area, volume, personnel, etc.) as appropriate.

       b.     Formula

               Logistics Footprint encompasses such a wide variety of elements
               that no single formula can envelop the entire embodiment of
               logistics support. However, each element can be quantified,
               measured, and assessed individually. These individual
               assessments can then be integrated into an overarching logistics
               footprint analysis.

           Logistics Footprint is a function of various elements, to include
           area (a), volume (v), weight (w), and support personnel (sp).

          Where:

          a = area required for supplies and spares plus maintenance area

          v = the packaged volume of sustainment (cubic feet) needed to
               support the system to meet its operational and reliability
               requirements

          w = the weight (pounds) of supplies and spares shipped

          sp = the number of support personnel required to support the
              system
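Since no single formula applies, one way to sketch the integrated assessment is to quantify each element (a, v, w, sp) separately and track each against a baseline. The structure and figures below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Footprint:
    area_sqft: float        # a: supply/spares plus maintenance area
    volume_cuft: float      # v: packaged volume of sustainment
    weight_lbs: float       # w: weight of supplies and spares shipped
    support_personnel: int  # sp: support personnel required

def percent_change(baseline: float, current: float) -> float:
    """Signed percent change from baseline (negative = reduction)."""
    return 100.0 * (current - baseline) / baseline

# Hypothetical baseline versus current-period measurements.
baseline = Footprint(4_000, 9_500, 62_000, 35)
current = Footprint(3_600, 9_000, 58_000, 31)

for field in ("area_sqft", "volume_cuft", "weight_lbs", "support_personnel"):
    delta = percent_change(getattr(baseline, field), getattr(current, field))
    print(f"{field}: {delta:+.1f}%")
```

Each element keeps its own units, so the per-element deltas can be rolled up into an overarching footprint assessment without forcing a single composite number.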

C.   Considerations

     a.   Advantages

                  The Logistics Footprint metric should be used in the design
                   and implementation of the development, fielding, or post-
                   fielding of a system.
                  The opportunity exists to discover logistics footprint drivers.
                   Data collection at the lowest level of detail will allow visibility
                   of the main drivers.
                  Visibility into wartime/peacetime impacts may allow for
                   analysis of potential footprint requirements for support of
                   equipment as well as the consequence of increased
                   Operational Tempo (OPTEMPO).
                  Improved requirements may influence design engineering
                   practices that impact the logistics footprint of future systems,
                   i.e., Reliability, Availability, and Maintainability (RAM),
                   Manpower and Personnel Integration (MANPRINT), and
                   Packaging, Handling, Storage and Transportation (PHS&T).
                  Models and data are available that can be used to provide
                   realistic trade-off analyses among alternative support
                   structures (total Contractor Logistics Support (CLS), organic
                   support, or best blend of support, etc.).

     b.   Risks



                    Changes in system reliability and maintenance
                     characteristics impact many logistic footprint variables.
                     Reductions in reliability have the potential for a detrimental
                     impact on maintenance requirements, maintenance
                     personnel, system availability, parts required, and
                     operational performance.
                    The availability of timely, accurate, and comprehensive data
                     is critical to the establishment of a baseline and development
                     of performance metrics. Availability, accuracy, quality, and
                     magnitude of data required to produce a baseline and
                     collection of current data may be constrained.
                    Since logistics footprint reduction encompasses many
                     organizations and processes, it may require extensive time
                     to perform adequate assessments.
                    The inability to measure completely or attribute the footprint
                     directly to a specific weapon system due to mission or
                     scenario related variations may exist.
                    The optimal support structure for a weapon system may be
                     considerably different during peacetime versus wartime, but
                     there is no allowance for planning alternative support.

       c.     Assumptions

                    Adequate data is readily available to determine performance
                     based on the metric.
                    Spiral development can either increase or decrease logistics
                     footprint.
                    Design for logistics footprint reduction will not adversely
                     affect operational performance. Any increase or decrease in
                     logistics footprint should result in significant impacts to other
                     metrics, cost, schedule and performance.
                    Operational baseline exists to initiate modeling and
                     simulation. (The ability to validate previous modeling
                     assumptions exists.)
                    Determination of the most efficient and effective logistics
                     support structure for a given weapon system must consider
                     any limitations imposed by mission scenario and/or materiel
                     need.

D.      Supporting Elements. The logistics footprint includes, but is not limited
to, the following: Design, Reliability and Maintainability, Personnel, Training, and
External Factors.

       1.     Design

               As early as possible, and before a formal program is established,
               actions must be identified that are necessary to achieve a
               significant increase in reliability, which will result in decreases in
               logistics footprint. These actions should be identified as a part of
               the technology maturation process prior to and during the
               Concept Refinement and Technology Development phases.
               While considered pre-acquisition, these efforts are critical to
               achieving improved system sustainment.

               Once an item completes development and begins fielding,
               changes to improve performance also have the potential to
               impact, either increase or decrease, the logistics footprint. The
               areas discussed below should be given special attention when
               contemplating design changes. This is not a complete list. A
               complete evaluation of the impact on logistics footprint, from the
               strategic to the tactical level, should be accomplished before any
               design changes are implemented.

       Test Measurement and Diagnostic Equipment (TMDE):
        Tools and TMDE should be selected to ensure maximum
        use of commonality. Ensure that there are no similar tools or
        TMDE in the Army inventory prior to developing new items.

      Spiral Development: Spiral Developments can be used to
       reduce the logistics footprint. The areas that can be
       influenced include improving reliability, enhancing Built-In
       Test/Built-In Test Equipment (BIT/BITE), and reducing
       component dimensions and power requirements.

      Design for Logistics Footprint reduction: The footprint
       can be reduced by including the Logistics community early
       on in the systems design process. The Logisticians can
       influence maintenance, Mean Time to Repair (MTTR),
       reliability, and help in the establishment of Life Cycle Cost
       (LCC) Estimates.

      Logistics Modeling and Simulation (M&S): Logistics M&S
       aids in providing decision makers with the optimal solution of
       the logistics footprint by allowing PMs/PSIs to evaluate
       support alternatives without large financial or time
       investments. M&S will also allow for logistics support
        decisions to be made early in the program's life cycle.

      Open Architecture: The use of an open architecture will
       allow for reduced footprint by supporting modernization
       through technology insertion and spares.


          Physical Dimensions: The specific weight and cube of
           components, support equipment, and supplies required to
           maintain and sustain a specified unit for a specified period of
           time.

          Power Requirements: The specific amount of power
           generation/energy storage equipment or items necessary to
            meet a unit's power demands, to include fuel requirements,
           for a specified period of time.

          Commonality of Components: A purposeful effort to
           maximize the use of common spares, assemblies, tools and
           ancillary equipment within a given platform, as well as other
           items of equipment/vehicles and other DOD Services.

           Single Fuel: Comply with the DoD single-fuel directive that all
            equipment operate on kerosene-based fuels.

2.   Reliability and Maintainability

     Changes in system reliability and maintenance characteristics
     impact many logistics footprint variables. Reductions in reliability
     have the potential for a detrimental impact on maintenance
     requirements, maintenance personnel, system availability, parts
     required, and operational performance. However, increases in the
     reliability and maintainability characteristics will have positive
     impacts on the primary system or System of Systems (SoS)
     logistics footprint. These impacts will be improved Ao, increased
     operational reliability, and reduced operating and support costs.

          Failure Factor (FF): The average number of critical item
           demands or removals per 100 end items per year.

          No Evidence of Failure Rate (NEOF): A measure of false
           pull removals causing item demands when a failure did not
           occur to the item. This is a function of fault diagnosis and
           maintenance impacted by BIT/BITE, TMDE and TM repair
           procedures.

           Maintenance Ratio: Measurement of man-hours per
            system operating hour for actions using forward support
            level manpower to perform corrective hardware
            maintenance, software updates, servicing prior to and after
            missions, scheduled/preventive maintenance, and system
            set up/tear down, etc.
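As a hedged illustration, the failure factor and maintenance ratio definitions above reduce to simple ratios. The fleet figures below are invented for the sketch, not drawn from any program:

```python
# Illustrative sketch only; all fleet figures are hypothetical.

def failure_factor(critical_demands: int, end_items: int, years: float) -> float:
    """Average critical item demands or removals per 100 end items per year."""
    return critical_demands / (end_items / 100) / years

def maintenance_ratio(maintenance_man_hours: float, operating_hours: float) -> float:
    """Man-hours of forward-level maintenance per system operating hour."""
    return maintenance_man_hours / operating_hours

# Hypothetical fleet: 40 critical demands across 250 end items over 2 years
ff = failure_factor(critical_demands=40, end_items=250, years=2.0)

# Hypothetical: 1,200 maintenance man-hours against 4,800 operating hours
mr = maintenance_ratio(maintenance_man_hours=1200.0, operating_hours=4800.0)
```

With these invented inputs, the failure factor is 8 demands per 100 end items per year and the maintenance ratio is 0.25 man-hours per operating hour.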

3.   Personnel

      System design determines personnel requirements: the number of
      personnel required to operate, maintain, and repair a
      system/component in a specified time at the lowest level without
      increasing personnel strengths (if possible) or creating
      requirements for new skills or Military Occupational Specialties.

           Number of Operators: Quantity of personnel required to
            operate a system. Operators are typically the biggest O&S
            cost drivers of platforms, and fewer operators reduce the
            logistics footprint.

           Number of Maintainers: Quantity of trained maintainers
            required to maintain a system once fielded, i.e., the field-
            level maintainers needed per unit to support the system;
            fewer maintainers reduce the logistics footprint.

          Transportability: The number of days to transport the
           system's equipment and its associated personnel in the Unit
           using specified aircraft or vehicles for initial deployment to
           the Area of Operations or for sustaining operations after
            deployment.

4.   Training

     Training impacts the Logistics Footprint with requirements for
     space, facilities, training personnel and equipment, billeting,
     transportation to and from the training site, power for training
     equipment, and students being pulled away from their duties
      (requiring replacements for the students). Embedded training
      minimizes all of the aforementioned factors and may reduce the
      time the Warfighter is unavailable to perform the mission.

      Technical Manuals (TM): Electronic, interactive TMs require a
      communications/display system to read them. This hardware
      consumes valuable space and power, thereby impacting the
      logistics footprint. TMs also require personnel and parts for
      repair and maintenance.

5.   External Factors


               Materiel Handling Equipment (MHE): Reduce the amount
                of MHE required to deploy, employ and sustain a system by
                using configured loads. The systems must be designed to
                use standard MHE.

               Facilities: When possible, use existing facilities for
                maintenance, storage of spares, and special use.

               Transportation: When possible, support the end item with
                existing forms of transportation and use standard tracking
                systems.

               Diminishing Manufacturing Sources: Integrate incentives
                into the industrial base to support the weapon system into
                the out years, and to value engineer new replacement parts.

               Density: Support low-density vehicle systems with Contractor
                Logistics Support (CLS) arrangements that include PBL
                incentives tied to Ao performance.

               Facilities Set Up Time: Time to set up facilities for use
                when facilities are not in place.

E.   Implementation

     a.   Early Planning for Data Collection:

          1.    Where does this metric fit into the life cycle?

                This metric fits into the entire life cycle of a system. It begins
                at the Concept Design Phase by assessing a range of RAM
                requirements using logistics M&S and their effect on the
                logistic footprint elements. It is critical that a Level of Repair
                Analysis (LORA) be performed as early as possible in the
                system development phase. The results of the LORA will
                support the Business Case Analysis (BCA). As the system
                matures, the LORA should be updated to reflect system
                design changes and more accurate data. This allows the
                user and sustainer community ample opportunities to adjust
                requirements in order to achieve the optimal footprint while
                 maintaining the highest level of system performance
                 possible.

          2.    When should data be collected for this metric?




           Data should be collected at all phases of the life cycle of the
           system. During Concept Design, contractor estimates
           should be collected and applied to logistics M&S to initiate
           assessments of system requirements and for planning and
           budgeting of support for the system. Early data collection
           will also serve the purpose of determining a baseline
            logistics footprint for the system or SoS. During the
           production phase, contractor estimates should be refined to
           accurately reflect testing results and reassessed to make
           any adjustments to the logistics support as necessary. Once
           the system is fielded, the PM should determine sample
           population size for data collection to adequately compare to
           the estimate.

      3.    What type of data should be collected in each phase of the
           life cycle?

           The data required to evaluate this metric includes but is not
           limited to engineering estimates for area, volume, weight and
           personnel required to sustain the system. It is imperative
           that this data be collected or estimated early in the life cycle
           in order to establish a baseline footprint.

b.   Data Sources and Limitations:

     1.    Who is responsible for collecting and reporting the
           data?

            The primary source of data is the unit. Deployment data
           may be collected by the unit, contractor, Program Manager,
           user schools, tester, other government agencies or the
           evaluator.

     2.    What mechanism will be used to collect, report, retrieve,
           and maintain data for this metric?

           Data acquisition should be non-invasive and should
           discretely capture those logistics footprint elements most
           relevant and applicable to the operation and maintenance of
           the particular system. Data collection should be addressed
           in the Request for Proposal and the final contract to ensure
           data is captured throughout the entire life cycle of the
           system. Automated or established data collection systems
           should be used where possible.




c.   Burden on Field Units: Identify if the Soldier is going to be
     required to collect the data.

      Some data elements may have to be gathered by units, in wartime
      or peacetime. If the support is organic, or only the
      maintenance is contracted, the supply data can be obtained from
      the Unit Level Logistics System (ULLS), a Standard Army
      Management Information System (STAMIS). The Sustainment
      deployment data will be
     collected by the unit, contractor, or data collectors as they deploy.

d.   Automated Sources/Automated Data Recording:

     1.     Can data be automatically collected, maintained, and
            retrieved?

            Some data sources, as discussed above, are automated.
            Automated or established data collection systems should be
            used where possible.

     2.     State differences and errors that may be detected during
            automatic retrieval or recording.

             Collecting data at the lowest level may require significant
             resources in labor, cost, and automated systems.

e.   Negative Analysis: State the impact, on all parties concerned,
     if the proper data is not collected at the right time, by the right
     person (or system).

     Incorrect data will lead to incorrect assessment of the logistics
     footprint and can affect funding, personnel, and system acquisition.
     It is imperative that a baseline logistics footprint be established as
     early as possible in the development cycle of the system. If a
     baseline is not established, it will be impossible to determine the
     magnitude of the impact on the logistics footprint caused by
      changes to the system's support concept.

f.   Policy and doctrine: Are there existing regulations/policy/
     doctrine that contain more metric-specific information/
     guidance?

     See DoD 5000.1, chapter 5.2.1.1; AR 700-127

g.   Data Review Boards: Who will evaluate the metric?

      The PM will determine the metric's suitability for the PBA and
      evaluate it. The data review board and membership are established
      by the PM. Board members will include, but not be limited to: the
      PM, user, and CASCOM. ATEC (including DTC, OTC, and AEC)
      should also be given the opportunity for membership on the DRB.

h.   System Definition and Usage Factors:

     1.    Clearly identify the usage factors for this metric.

           Usage factors are system dependent.

     2.    Are there system of systems issues that need to be
           addressed?

           System of systems considerations will be made if systems
           are dependent on other systems, contractors, or Soldiers.

i.   Accounting for Model and Configuration Differences:

     1.    State if there are different configurations in the field or
           that are planned to be fielded.

            System technical improvements will affect the logistics footprint;
            e.g., advanced technologies may require additional training,
            revised TMs, or special tools, but potentially less
            maintenance.

     2.    What impact do different configurations have on data
           being collected, reported and/or retrieved?

           Different configurations will impact data collection efforts by
           the addition of another series of variables to report by the
            collector; the solution may be as simple as including a
           Vehicle Identification Number (VIN) or National Stock
           Number (NSN). In wartime deployments, system
           configurations can change based on the scenario. This will
           increase the burden of collecting accurate data. Potentially,
           there will be different cost drivers which may lead to a
           different focus on data elements.

     3.    Are all configurations being reported in the same
           manner?

            All configurations should be reported in the same manner.
            However, this will require significant configuration management
            control over who is collecting the data and how it is being
            collected. Consistency in reporting allows viable comparisons
            to identify cost drivers and track performance.

     j.   Sample Data and Extrapolation vs. Total Population: When is
          it feasible to use sample sizes versus total population?

          The use and size of sample data is system and deployment
          dependent. Recommend using a sample size instead of the total
          population for ease of collection and reduced cost. Sample size
          needs to be established in the PBA. The sample size should be
          statistically significant to allow for meaningful extrapolation across
          the total population.
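One common way to choose a statistically significant sample size for estimating a proportion across a fleet is the standard formula n = z²p(1−p)/e² with a finite-population correction. This is offered as a hedged sketch only; the confidence level, margin of error, and fleet size below are illustrative assumptions, not values prescribed by any PBA:

```python
import math

def sample_size(population: int, z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Sample size for estimating a proportion, with finite-population correction.

    z: z-score for the desired confidence level (1.96 ~ 95%)
    p: assumed proportion (0.5 is the most conservative choice)
    e: desired margin of error
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)     # finite-population correction
    return math.ceil(n)

# Hypothetical fleet of 600 systems
n = sample_size(population=600)
```

For the assumed fleet of 600 systems at 95% confidence and a 5% margin of error, the required sample is 235 systems, illustrating how sampling can cut collection burden well below the total population.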

     k.   Funding Resources:

          1.     What impact will data collection have on funding?

                 If a system is contractor supported, recommend the
                 contractor collect all or most of the data. The PM or User
                 will be required to determine the accuracy of the data that is
                 being collected. This will add slightly to the cost of the
                 support contract. The amount of funding will influence the
                 level of data collected. There may be a cost and resource
                 burden to collect accurate subordinate metrics for operation
                 and downtime to the lowest level.

          2.     Who is responsible for funding the data collection
                 effort?

                 The PM is responsible for funding the data collection effort
                 for contract support.

F.   Implementation Concerns

         Must consider any limitations imposed by the mission scenario.
         A validated, verified and accredited logistics footprint model may be
          required.
         Transportation options may be severely limited/unpredictable in a
          wartime scenario, and this factor may contribute to an increase in
          footprint.
         Storage limitations in severe environments that exceed the tested
          reliability requirements may greatly increase the logistics footprint
          by adding unplanned and unfunded control methods, unscheduled
          maintenance, surveillance efforts, and/or facilities.
         Some units will have to provide data to verify contractor input.



G.     Summary

       The logistics footprint metric is a composite metric that is heavily impacted
by the other four overarching metrics. A baseline logistics footprint needs to be
established for the system as early as possible in the development cycle.
Logistics footprint should be considered during all life cycle phases for all
systems. It should also be periodically reviewed throughout the entire life cycle
of the system for adjustments and assessments to verify achievement of the
optimal logistics footprint without degrading performance.




                   Section E – LOGISTICS RESPONSE TIME

A.     Concept

       The Logistics Response Time (LRT) metric applies to all stakeholders
(Government, contractor, military, academia, Product Support Providers (PSP),
Product Support Integrators (PSI), etc.). It is an indication of the timeliness of
support provided by logistics processes. LRT is a DoD-recognized, Army-wide
metric that recognizes that warfighter support requirements have the highest
priority.

B.     Definition and Formula

       a.     Definition

              Logistics Response Time is the period of calendar time from when
              a failure/malfunction is detected and validated by the maintainer to
              the time that the failure/malfunction has been resolved. This
              includes: the time from when a need is identified until the provider
              satisfies that need, all associated supply chain and maintenance
              time, and delivery times of parts.

       b.     Formula

              LRT = Date (or time) of satisfaction of the logistics demand -
                    Date (or time) of issue of logistics demand
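The LRT formula above is a simple difference of dates (or times). A minimal sketch, with the demand timestamps invented for illustration:

```python
from datetime import datetime

def logistics_response_time(demand_issued: datetime, demand_satisfied: datetime) -> float:
    """LRT in calendar days: satisfaction date/time minus issue date/time."""
    return (demand_satisfied - demand_issued).total_seconds() / 86400

# Hypothetical demand: issued 1 March, satisfied 13 March
lrt_days = logistics_response_time(datetime(2005, 3, 1), datetime(2005, 3, 13))
```

For the invented dates, the LRT is 12 calendar days.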

C.     Considerations

       a.     Advantages

                    Visibility into wartime/peacetime impacts allows further
                     analysis of potential LRT; wartime deployment of equipment
                     and increased OPTEMPO impact the logistics response
                     capability.
                    Better LRT data will support management of deployments
                     and equipment surges.
                    LRT data will assist in the development of sustainment
                     strategies for future systems.
                    LRT can be used to:

                          measure contractor responsiveness;
                          identify bottlenecks in the pipeline;
                          identify the validity of data capturing resources;
                          establish firm definition of authoritative and validated
                            data sources.


            Data collection at the lowest level will allow visibility of the
             main LRT drivers.
            Understanding design related LRT drivers may influence
             how Reliability, Availability, and Maintainability (RAM) design
             engineering improvement practices are applied to future
             systems.

b.   Risks

            Resource constraints may prevent the introduction of
             technological changes or advancements.
             A holistic and institutionalized enterprise approach and
              standards could impede the ability to document and justify
              resource allocations required to support programs in the
              short, mid, and long term.
            Cost and impact assessments should be conducted prior to
             modularity alignment and before other changes are
             implemented.
            To ensure logistics readiness capability, information
             technology, communications infrastructure, configuration
             management, and control (classified and unclassified) must
             be in place.
            Reporting sources are responsible for fixed frequencies,
             validated sources, and automated links. Reporting
             processes need to be automated and linked to the greatest
             extent possible to both current and future logistics enterprise
             systems.
            The Single Army Logistics Enterprise (SALE) may not meet
             Joint and interoperability Key Performance Parameters and
             emerging Department of Defense (DoD) logistics
             requirements.
            Logistics Metrics Reporting should be consistent with the
             Defense Readiness Reporting System (DRRS).
            There has been a consistent lack of receipt data reporting at
             the Supply Support Activity (SSA) which will seriously reduce
             the amount of complete valid part data collected for this
             metric.

c.   Assumptions

            Army maintenance organizations will take steps to minimize
             Turn Around Time (TAT) and provide assistance to support
             organizations so that their overall LRT is minimized.
                 Emerging technology and technology refreshment/insertion
                  will be configuration managed and controlled, with related
                  metrics and measures formally inserted into the LRT process
                  and evaluation.
                The use of improved technology ensures current and future
                 force capability will be measured for decision makers.
                Future force standards are being formulated and
                 conceptualized.
                 Data sources are to be identified, validated, and automated
                  (linked where feasible), including data sources in the Life
                  Cycle Management Commands (LCMC) and PM programmatic
                  data for the enterprise.
                DA supporting metrics will be consistent with DoD guidance
                 for measuring pipeline and warfighting readiness priorities.
                Contractor delivery is made in the prescribed times based on
                 priority.

D.   Supporting Elements

     i.   Top Level Elements for LRT

          Customer Wait Time (CWT): The supply chain performance
          metric which measures total customer response time (the time
          required to satisfy a supply request from the end user level). CWT
          measures pipeline performance from the unit’s perspective. CWT
          commences when a requirement is created by an entry in the Unit
          Level Logistics System (ULLS) / Standard Army Maintenance
          System (SAMS)/ Standard Property Book System-Redesign
          (SPBS-R) and stops when these unit-level systems acknowledge
          receipt to Standard Army Retail Supply System (SARSS). It
          includes all requisitions filled by Supply Support Activity (SSA)
          which includes those items stocked at the SSA as well as those
          acquired through the wholesale system. The CWT is composed of
           three segments of the pipeline as follows:

          1.     Requisition Order Number Date (ROND) to SARSS1
          2.     SSA processing time
          3.     SSA to Customer

          CWT = Date (or time) of satisfaction of the unit's supply request -
          Date (or time) of issue of unit's supply request
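As a hedged sketch, the CWT computation and its three pipeline segments described above can be expressed with invented milestone dates; the end-to-end CWT should equal the sum of its segments:

```python
from datetime import date

# Hypothetical requisition milestones (all dates invented for illustration)
rond_date = date(2005, 6, 1)       # requirement created in ULLS/SAMS/SPBS-R
sarss1_date = date(2005, 6, 3)     # requisition reaches SARSS1
ssa_issue_date = date(2005, 6, 8)  # SSA completes processing
receipt_date = date(2005, 6, 10)   # unit-level system acknowledges receipt

segments = {
    "ROND to SARSS1": (sarss1_date - rond_date).days,
    "SSA processing": (ssa_issue_date - sarss1_date).days,
    "SSA to customer": (receipt_date - ssa_issue_date).days,
}

# CWT is the end-to-end difference, which reconciles with the segment sum
cwt_days = (receipt_date - rond_date).days
```

Tracking the segments separately, as sketched here, lets the pipeline owner see which leg drives the total wait.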

          Fill Rate: A measure of the percentage of time that demands are
          satisfied from items in stock. The metric can be calculated by
          dividing the number of incidents when parts sought from the stock
           point were on hand by the total number of incidents when parts
          were requested from the stock point.



Fill rate = number of requisitions filled within a specified time limit
             --------------------------------------------------------------------
              total number of requisitions submitted
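The fill-rate ratio above, sketched with invented counts:

```python
def fill_rate(filled_on_time: int, total_requisitions: int) -> float:
    """Percentage of requisitions filled within the specified time limit."""
    return 100.0 * filled_on_time / total_requisitions

# Hypothetical month: 170 of 200 requisitions filled from stock on hand
rate = fill_rate(filled_on_time=170, total_requisitions=200)
```

For the invented counts, the fill rate is 85 percent.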

Repair Cycle Time (RCT): The elapsed time (days or hours) from
the induction of the unserviceable item located at the repair
facility/maintenance unit until the item is repaired and placed in
stock or reissued. Retrograde time for a given item may need to be
added to establish a complete RCT.

RCT = date (or time) an item is restored/ready for issue – date (or
     time) a failed item is received for maintenance

                or

RCT = RST + TAT
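The two equivalent RCT formulations above can be sketched as follows; the repair dates and the RST/TAT averages are invented for illustration:

```python
from datetime import datetime

def rct_from_dates(received: datetime, restored: datetime) -> float:
    """RCT in days, from induction of the unserviceable item to ready-for-issue."""
    return (restored - received).total_seconds() / 86400

# Hypothetical repair: item inducted 10 April, ready for issue 24 April
rct_days = rct_from_dates(datetime(2005, 4, 10), datetime(2005, 4, 24))

# Equivalent decomposition when retrograde time is included (values invented)
rst_days = 4.0   # hypothetical average retrograde ship time
tat_days = 10.0  # hypothetical average turnaround time
rct_decomposed = rst_days + tat_days
```

Both formulations yield 14 days for the invented inputs; in practice the decomposition is used when retrograde time must be added to the repair facility's own cycle.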

Requisition Wait Time (RWT): An Army supply chain metric
which measures the elapsed time required to satisfy an SSA
which measures the elapsed time required to satisfy an SSA
requisition that must be sourced from either the wholesale or referral
process. RWT measures source-of-fill performance from the SSA
perspective. The RWT is composed of several pipeline
segments as shown in supporting metrics:

RWT = requisition fulfillment (close) date (or time) - requisition
submission (open) date (or time)
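As a sketch, RWT can be computed end to end from the open and close dates and then audited against the sum of its pipeline segments. The segment names loosely follow the supporting metrics listed below, and every duration is invented for illustration:

```python
from datetime import date

# Hypothetical requisition milestones (dates invented)
requisition_open = date(2005, 9, 1)    # requisition submitted by the SSA
requisition_close = date(2005, 9, 16)  # requisition fulfilled

rwt_days = (requisition_close - requisition_open).days

# Audit: RWT should reconcile with the sum of its pipeline segments
segments_days = {
    "requisition establish": 1,
    "ICP/DAAS processing": 2,
    "depot processing": 3,
    "materiel ship": 9,
}
assert rwt_days == sum(segments_days.values())
```

Reconciling the end-to-end figure against the segment sum, as sketched, exposes gaps in segment reporting.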

        RWT Supporting Metrics for Pipeline Segments:

                Requisition Establish Time: Elapsed time between
                 the generation of a requisition at the SSA level and
                 establishment of the record in the Defense Automatic
                 Addressing System (DAAS).

                ICP Processing Time and DAAS Processing Time:
                 Elapsed time between when the record is established
                 in the automated system and issue of a Materiel
                 Release Order (MRO) to the source of fill (SOF). (This
                 segment includes time waiting for the Inventory
                 Control Point (ICP) to pull data from DAAS.)

               Depot Processing Time: Elapsed time between the
                date an MRO is issued to SOF and the date of SOF
                shipping confirmation. (This segment includes time
                waiting for the data pull from the Mega center and
                 pick/pack time.)


   Materiel Ship Time: Elapsed time between the date
    of SOF shipping confirmation and the posting of the
    receipt at the SSA. This segment is provided
     because of lost visibility of processes beyond the
     source of fill.

   Container Consolidation Point (CCP) Transit Time:
    The elapsed time from source of fill to CCP. Transit
    time between the date of shipping confirmation and
    date indicating CCP receipt.

   CCP Processing Time: CCP hold time or CCP
    processing time (elapsed time between the date
    indicating receipt at and shipment from the CCP).

   POE Transit Time: Elapsed time or transit time from
    CCP to Port of Embarkation (POE) (elapsed time
    between the date indicating shipment from CCP and
    the ticket date (TK_date) indicating receipt at POE).

   POE Processing Time: POE hold time or POE
    processing time is the elapsed time between the
    TK_dates indicating receipt at and lift from the POE.

   Transit Time: Transit time from initial lift from
    CONUS port to receipt at an OCONUS port or
    elapsed time between the TK_date indicating
    departure from POE and receipt at Port of
    Debarkation (POD).

   POD Processing Time: The hold time at POD or
    POD processing time. The elapsed time between the
    TK_dates indicating arrival at and departure from the
    POD.

   Intra-Theater Transit Time: Transit time from POD
    to the installation/Camp/Kaserne or elapsed time
    between departure from POD and arrival at SSA.

   SSA Processing Time: Take-up time or SSA receipt
    processing time. The elapsed time between arrival at
    the SSA and posting of SSA receipt.

                    Class IX Repair parts (less Medical-peculiar repair
                     parts): All repair parts and components, to include
                     kits, assemblies, and subassemblies (reparable and
                     non-reparable), required for maintenance support of all
                     equipment.

                   Requisition Date: Date entered at the retail level
                    supply system when placing a requisition for an item
                    from DoD wholesale supply system.

                   Receipt Date: Date entered at the retail level supply
                    system when the materiel for a specific requisition is
                    received from DoD wholesale supply system.

                   Total Requisitions: Sum of Class IX Repair Parts
                    retail level requests from DoD wholesale Supply
                    system.

ii.   Sub-Elements

      Weapon system Not Mission Capable (NMC): The average
      percent of time that a fleet of weapon systems is not fully mission-
      capable. This metric has two components: NMCS (lack of parts)
      and NMCM (lack of maintenance resources).

             Supply Driven Sub-Elements

                   Order and Ship Time (OST) to a designated level
                    of supply: Average time from order placement to
                    receiving the shipment at designated supply level.

                   Stock Availability (SA) at designated level of
                    supply: Percentage of time an order is filled
                    immediately at designated level of supply support.

                   Not Mission Capable Supply (NMCS): The
                    percentage of time (days or hours) the system is not
                     capable of performing any of its assigned
                    mission(s) because of maintenance work stoppage
                    due to a supply shortage. NMCS exists when the
                    parts are needed for immediate installation on or
                    repair of primary weapons and equipment under the
                    following conditions: (1) Equipment is deadlined for
                    parts (2) Aircraft is out of commission for parts (3)
                    Engine is out of commission for parts, etc.




           Order lead-time: The time between requisition
            acknowledgement and supplier confirmation of the
            order.

           Time to identify a new supplier: The time (in days)
            from the recognition of the need to locate a new
            supplier for a product to the date an agreement
            (contract) is signed with the supplier.

           Performance to customer-request date-maintain:
            The percent of repair orders fulfilled on or before the
            inventory control point requested date.

            Backorder Rate: The number of repair parts or
             spares for a given system/end item that are not in
             stock within a stated timeframe of requisition, divided
             by the total demands for parts. This is essentially the
             complement of the fill rate.

           Backorder Duration Time: The average elapsed
            time from placement of a requisition for a spare not
            in stock until receipt of the spare part to fill the order.
            Backorders are broken into two categories: overall
            (routine, NMCS, and repair parts) and greater than 90
            days. Stated another way, it is the time to receive
            procurement previously ordered; Administrative and
            Production Lead Times are contributing factors to this
            wait time.

           Mean Time to Obtain Back Orders (MTTOBO):
            Average time to fill a back order at the wholesale
            supply level.

iii.   Maintenance Driven Sub-Elements

           Not Mission Capable Maintenance (NMCM): The
            time (days or hours) the system is inoperable due to
            maintenance delays attributable to delays in obtaining
            maintenance resources (personnel, equipment, or
            facilities).

           Product and grade changeover time: The average
            time (in hours) to change a repair line to repair a
            different item.




                        59
   Ratio of actual to theoretical cycle time: The
    percentage by which the actual repair cycle time
    deviates from the standard (or theoretical) cycle time.

   Retrograde Ship Time (RST): The average elapsed
    time from an item failure to the receipt of the item by
    the maintenance echelon specified to repair the item.

       RST = Sum of elapsed times from failure to maintenance echelon
             ---------------------------------------------------------
                        Number of retrograde incidents

   Turnaround Time (TAT): The average time required
    to complete a logistics task or service. In the case of
    maintenance, TAT is the average time required to
    receive an item from a unit, perform repairs on the
    item and make the item available to the unit or place
    the serviceable item back into the inventory.

    TAT = Sum of the elapsed times to make repairs
          -----------------------------------------------
                   Number of repair jobs
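
    Both RST and TAT reduce to arithmetic means over recorded
    incidents. A minimal sketch of the two fractions above, with
    illustrative elapsed-time values:

```python
def retrograde_ship_time(elapsed_days):
    """RST: sum of elapsed times from failure to receipt at the repairing
    maintenance echelon, divided by the number of retrograde incidents."""
    if not elapsed_days:
        raise ValueError("no retrograde incidents recorded")
    return sum(elapsed_days) / len(elapsed_days)

def turnaround_time(repair_days):
    """TAT: sum of elapsed times to make repairs divided by the number
    of repair jobs."""
    if not repair_days:
        raise ValueError("no repair jobs recorded")
    return sum(repair_days) / len(repair_days)

# Four retrograde incidents and three completed repair jobs (days):
print(f"RST = {retrograde_ship_time([12, 8, 15, 9]):.1f} days")  # 11.0 days
print(f"TAT = {turnaround_time([5, 7, 6]):.1f} days")            # 6.0 days
```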

   Controlled Substitution Rate: A measure of the
    number of controlled substitutions per time period for
    a fleet of vehicles. This number may be used as a
    means of comparison over a series of previous
    reporting periods to identify any trends in supply
    within a fleet.

   Float Utilization Rate: The total time the float
    systems are on loan to customer units divided by
    the total time floats are available, expressed as a
    percentage. This rate provides a means of optimizing
    the number of systems reserved as floats. A low value
    may reveal that fewer float items are required. A high
    value may indicate the need for more float items.

    Float Util. Rate = Total time float items are on loan
                       --------------------------------------
                       Total time float items are available
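
    The rate above can be computed directly from loan and availability
    totals; a minimal sketch with illustrative numbers:

```python
def float_utilization_rate(on_loan_days, available_days):
    """Total time float items are on loan divided by the total time
    float items are available."""
    if available_days <= 0:
        raise ValueError("available time must be positive")
    return on_loan_days / available_days

# Illustrative 30-day period for a pool of 10 float systems: floats were
# on loan a combined 45 system-days out of 300 available system-days.
rate = float_utilization_rate(on_loan_days=45, available_days=300)
print(f"Float utilization: {rate:.0%}")  # 15% -- fewer floats may be needed
```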

   Maintenance Task Distribution (MTD): This reflects
    the percent of time that an item is repaired at each



                  60
                      maintenance support level and the percent of time the
                      item is replenished.
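
                      MTD is a simple distribution of repair and
                      replenishment events across support levels. A
                      minimal sketch, with illustrative event counts:

```python
from collections import Counter

# Illustrative repair/replenishment events tagged by the level that
# handled them; names follow the field/sustainment terminology above.
events = ["field", "field", "sustainment", "field", "depot",
          "sustainment", "field", "replenished"]

counts = Counter(events)
total = sum(counts.values())
mtd = {level: count / total for level, count in counts.items()}  # shares

for level, share in sorted(mtd.items()):
    print(f"{level:>12}: {share:.0%}")
```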

E.   Implementation

     a.   Early Planning for Data Collection:

          1.    Where does this metric fit into the life cycle?

                The LRT metrics need to be identified and planned early in
                the acquisition life cycle. For any acquisition program with
                PBL potential, it will be necessary to include LRT
                considerations in the Business Case Analysis. At the latest,
                planning for use of LRT metrics will begin after Milestone B.
                During the System Development and Demonstration (SDD)
                phase, it will be necessary to provide planning information
                for use of LRT metrics and associated automation
                requirements in the Acquisition Strategy, Supportability
                Strategy, Test and Evaluation Master Plan, Statement of
                Work (or performance specification) and Performance Based
                Agreement.

          2.    When should data be collected for this metric?

                Planning for the use of LRT metrics is an iterative process.
                Plans should be made to collect the initial data on LRT
                during operational testing.

          3.    What influence will early data collection have on this
                metric?

                Exercising the LRT data collection and evaluation process
                during operational testing provides a means of validating
                and/or improving the LRT metrics, data collection process,
                automated processes and reporting.

     b.   Data Sources and Limitations:

          1.    Identify who is responsible for collecting and reporting
                the data to the PSI, Provider, PM, customer, military
                personnel, government agencies, etc.

                Most of the data required for monitoring LRT will be collected
                by leveraging current logistics business processes and
                existing supply and maintenance database systems. Any
                gaps between data currently available and new data


                                   61
           requirements will be augmented through agreements to
           provide supplemental data through contractor-provided
           processes and systems. A detailed lay-down of Army and
           DoD standard automated management information systems
           and contractor-provided automated systems will be required
           prior to establishment of a PBA.

     2.    What mechanism will be used to collect, report, retrieve,
           and maintain data for this metric?

           LOGSA will extract maintenance LRT and TAT historical
           information from the Maintenance Module of the Logistics
           Integrated Database (LIDB) and maintain historical status.
           Most of the data can be obtained from automated systems
           such as the Logistics Integrated Database (LIDB), Unit Level
           Logistics System (ULLS), DAAS, Distribution Planning and
           Management System, and commercial systems. The
           required data will be input within the automated systems
           both from organic sources (e.g., Soldiers in combat zones,
           military supply pipeline, maintenance activities and other
           logistics functions) and from product support providers which
           perform LRT functions outside the organic channels.

     3.    How reliable is the data?

            It is anticipated that, with periodic quality checks, data
            submitted in readiness reports will be accurate and timely. The
           LRT data will be reliable with the caution that errors and
           gaps in the data may result from the human element,
           interruptions from combat operations and technology/
           automation infrastructure limitations.

c.   Burden on Field Units:

     1.    Identify if the Soldier is going to be required to collect
           the data.

            Soldiers will be required to collect data at the field and
            sustainment levels in accordance with current policy. No
            additional burden is anticipated, except that the need for the
            Soldier to report complete and accurate data will receive
            increased emphasis in order to ensure maximum readiness for
            the Warfighter and to equip the PSI with all the tools
            required to manage LRT well.




                              62
     2.    Will the collection of this data be under STAMIS or a
           stovepipe system?

           It is anticipated that new technology will be made available
           to the Soldier to actually reduce the already existing
           requirement for supply and maintenance reporting. It also
           must be clear within the PBA that reporting requirements
           may need to be suspended in areas directly impacted by
           combat operations. The institutional nature of self-reporting
           by field units as well as other organizational entities is a
           challenge.

d.   Automated Sources/Automated Data Recording:

     1.    Can data be automatically collected, maintained, and
           retrieved? If so how?

           Efforts are underway to maximize the data input/recording
           processes at all segments of the supply pipeline through the
           use of active and passive Radio Frequency Identification
           (RFID) data collection devices. Efforts will also be initiated
           to develop automatic data cleansing capabilities in order to
           reduce errors and alert managers to gaps in the LRT data.

     2.    When can the data be automatically obtained?

           Most of the supply data for LRT is captured automatically.
           This data is captured as requisitions are processed through
            the supply pipeline. Once the requisition is entered into the
            ULLS database, the process can be monitored and is
            captured at several echelons: at the unit via ULLS, at the
            Inventory Control Point via CCSS or LMP, and at the Depot
            via CCSS, LMP, LIDB, or by accessing commercial carrier
            databases (e.g., Federal Express, United Parcel Service, or
            the United States Postal Service).

e.   Negative Analysis: State the impact, on all parties concerned,
     if the proper data is not collected at the right time, by the right
     person (or system).

     Lack of LRT data will result in the inability to evaluate contractor
     performance against LRT-related goals for PBL incentive
     payments. Lack of LRT data will also hamper identification and
     analysis of problems in the logistics response functions that are
     needed to ensure combat ready equipment for the Warfighter. LRT




                               63
     data is a requirement for management of end item supply,
     transportation, and maintenance mission areas for the Army.

f.   Policy and Doctrine versus Warfighter PBA:

     1.    What policies may be affected?

            For the most part, existing DoD and Army policy will be
            adequate for providing guidance on the use of LRT as a
            metric for PBL purposes. Current force standards are set by
            regulations and policies, although some are obsolete.

     2.    Are there existing regulations/policy/doctrine that
           contains more metric specific information/ guidance?

           The following guidance documents may be useful:

                 •  DoD 4140.1-R, DoD Supply Chain Materiel
                    Management
                 •  AR 71-32, Force Development
                 •  AR 725-50, Requisition, Receipt and Issue System
                 •  AR 710-2, Supply Policy Below the National Level
                 •  AR 700-138, Army Logistics Readiness and
                    Sustainment
                 •  AR 750-1, Army Materiel Maintenance Policy
                 •  Specific Performance Based Agreements
                 •  Specific contractor contracts

g.   Data Review Boards: Who will evaluate the metric?

     Post-fielding review boards for this metric should be identified and
     negotiated during contract award. Evaluation of performance will be a
     team effort among the PM, PSI, PSPs, Warfighter customer, and major
     contractors. The ultimate responsibility for evaluation lies with the
     Life Cycle Systems Manager or PM. Enforcement, tracking, and monitoring
     of policies and standards is ongoing, to include Memoranda of Agreement
     (MOA), Memoranda of Understanding (MOU), PBAs, etc. Recommended Data
     Review Board attendees are appropriate-level representatives from
     supply, maintenance, and transportation; G-4/S-4; AMC; LOGSA; DLA;
     TRANSCOM; G-3/S-3; and PM Life Cycle Sustainment Representatives. ATEC
     (including DTC, OTC, and AEC) should also be given the opportunity for
     membership on the DRB.

h.   Early determination of System Definition and Usage Factors:


                               64
     1.    Are there caveats/ concerns for specific systems?

            It will be necessary to develop specific definitions for each
            PBL case: the end item(s) involved, the top-level metrics and
            sub-metrics to be employed, and the constraints to be applied.
            Ranges of operational tempo to be included under the PBL
            rating system must be defined. Performance limits given
            catastrophic events (e.g., a major disaster) should also be
            stated. Examples of performance limits/exclusions could
            include acts of God, combat losses, PM asset reallocations, or
            neglect by government personnel.

      2.    Are there system of systems issues that need to be
            addressed?

           Although it is not anticipated to be a major impact for supply
           and maintenance, system of systems issues may need to be
           addressed. For transportation support of system of systems,
           specific provisions may need to be developed.

i.   Accounting for Model and Configuration Differences:

     1.    What impact do different configurations have on data
           being collected, reported and/or retrieved?

           Although there could be multiple models and configurations
           of end items in the field, no problems are anticipated with the
           capability of accounting for such differences. However,
           there could be exceptions for cases in which unauthorized
           configuration changes have been made to end items or there
           has been a delay in entering model or configuration changes
           into the automated system. In such cases, it will be
           necessary to note all exceptions and to ensure that
           corrective actions are taken as soon as possible to ensure
           support to the Warfighter and that the PSI/PSPs are not held
           accountable for such problems with regard to rating PBL
           performance.

     2.    Are all configurations being reported in the same
           manner?

           Appropriate consideration and allowances must be provided
           to the PSI and PSPs in adjusting the required support for
           new models and configurations with respect to learning
           curve issues in supply, transportation and maintenance
           functions. On the other end of the spectrum, it will also be


                              65
            necessary to provide appropriate consideration and
            allowances for performance in supporting end items that
            are approaching the end of their service life.

j.   Sample Data and Extrapolation versus Total Population: When
     is it feasible to use sample sizes versus total population?

     It is anticipated that LRT data will be collected on the entire
     population and continuously on end items and activities to ensure a
     comprehensive audit trail for government assets. This will provide
     all the data required to evaluate LRT performance and identify
     problems for management attention. Sampling would be
      appropriate during validation of the LRT data collection process
     prior to actual use on fielded systems. Sampling may also be used
     for periodic inspections of functions within the LRT pipeline and
     associated automated processes. The specific sample size
     requirements will vary depending upon the item commodity,
     logistics functions or segments being evaluated, and availability of
     transactions.

k.   Funding Resources:

     1.    What impact will collecting data have on funding?

            Since LRT metrics and data are already widely collected in
            both the Army and industry, no major problems are
            anticipated with regard to funding the collection and
            processing of these data. Programmatic funding requirements
            should be justified to ensure resource allocations are
            available. Such data collection is considered a normal part
            of business.

     2.    How will the collection, reporting, retrieval, and
           maintenance of data be funded?

            On-going investment in improved automated data
            processing systems is expensive but already funded. These
            automation initiatives should ultimately decrease the cost of
            data collection, processing, integration, and analysis.

     3.    Who is responsible for funding the data collection
           effort?

           Ultimately, the PM is responsible for the LRT data collection
           efforts with management efforts by the PSI; however, many



                               66
                different organizations throughout DoD are working to create
                an automation infrastructure that will enable collection,
                cleansing and analysis of LRT data.

F.   Implementation Concerns

         •  Collaboration with DA and DoD stakeholders (government, military,
            and contractor) on definitions, data sources, metrics, and
            measures must occur.
         •  Data capture/accountability, reporting, and DoD/DA regulation and
            policy enforcement.
         •  Currency of Force Activity Designation documents (Modified Tables
            of Organization and Equipment (MTOE), Basis of Issue Plans (BOIP),
            Full Mission Capability Standards, peacetime and wartime
            structures, etc.) and the readiness regulations governing them.
         •  Data furnished by contractors: current systems do not provide the
            data from requisition submission to receipt by the Soldier needing
            the item. The visibility of parts not traveling within the
            standard Army supply system will need to be captured.
         •  Resource constraints, availability of repair parts and maintenance
            personnel, state of personnel training, validation of data
            sources, and automation availability for timely reporting.
         •  Need well-defined supporting metrics at the strategic,
            operational, and tactical levels to determine the real value of
            LRT.
         •  Need reliable, available, maintainable, and supportable
            network-centric and World Wide Web (www) applications and systems
            to support a holistic metric system of the logistics enterprise.
         •  A higher-level metric is to add all the responsible supply chain
            providers' response times to get an overall metric. Each
            component metric must be controlled by its supply chain provider
            (e.g., if a contractor is performing direct vendor delivery
            (DVD), measurement of LRT would be from order placement to
            delivery at the acceptance point).
         •  Varying elements of the definition are available and can be
            measured individually and appropriately as Logistics Response
            Times. Parts ordering and receipt times are available, but on a
            limited basis, as receipt data completion is typically poor.
            Maintenance time is available for levels above organizational
            through the Integrated Logistics Automation Program Equipment
            Downtime Analyzer (ILAP EDA), but at the organizational level it
            is only available through limited Sample Data Collection (SDC)
            data. Repair times will also need to be captured when the
            contractor performs repair of the end item or system at the field
            and/or sustainment levels.
         •  Communications bandwidth may be limited in theater during wartime.
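
The higher-level metric noted in the list above, summing each responsible
supply chain provider's response time into an overall figure, might be
sketched as follows; the segment names and values are purely illustrative:

```python
# Hypothetical LRT segments, each controlled by one responsible supply
# chain provider; values are in days and purely illustrative.
segments = {
    "requisition processing (ICP)": 1.5,
    "depot pick and pack":          2.0,
    "transportation to theater":    6.5,
    "receipt and issue (SSA)":      1.0,
}

# The overall metric is the sum of the provider-controlled segment times.
overall_lrt = sum(segments.values())
print(f"Overall LRT: {overall_lrt:.1f} days")  # Overall LRT: 11.0 days
```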



                                    67
G.    Summary

        LRT is an important and complex metric (or set of metrics) that
requires advance planning and methodical implementation to be applied
successfully in a PBL strategy. Most of the specific LRT-related metrics are
well known and have been in use by both the government and industry for many
years. Automated systems are in place and technology is being implemented to
convert all manual processes into fully automated processes. The Army will also
need to ensure that the visibility of assets traveling outside of normal supply
channels is captured in order to evaluate the PSI/PSP performance.




                                      68
                     Chapter 6 – METRICS SPREADSHEETS

The metrics worksheets below are provided as a single embedded workbook, PBL
SPREADSHEETS_ALL METRICS_30 March 2006.xls (double click the spreadsheet icon
in the source document to open it):

   Operational Availability
   Mission Reliability
   Cost Per Unit Usage
   Logistics Footprint
   Logistics Response Time
   All Metrics Spreadsheets




                                              69
                        Chapter 7 – APPENDICES

A.   Acronyms                                    71

B.   Pictorial View of Metrics                   75

C.   References                                  78




                                  70
                       Appendix A – Acronyms

A&R        Automation and Reporting
ACAT       Acquisition Category
ADP        Automated Data Processing
AEC        Army Evaluation Center
AEPS       Army Electronic Product Support
AIT        Automatic Identification Technology
ALDT       Administrative and Logistics Delay Time
AMC        Army Materiel Command
AMDF       Army Master Data File
AMIS       Army Management Information System
AMSAA      Army Materiel Systems Analysis Activity
Ao         Operational Availability
AR         Army Regulation
ASOAR      Achieving a System Operational Availability Requirement
ATEC       Army Test and Evaluation Command

BCA        Business Case Analysis
BIT/BITE   Built-In Test/Built-In Test Equipment
BOIP       Basis of Issue Plan

CASCOM     Combined Arms Support Command
CCP        Container Consolidation Point
CCSS       Commodity Command Standard System
CLS        Contractor Logistics Support
COMPASS    Computerized Optimization Model for Optimizing and Analyzing
           Support Structures
CPUU       Cost Per Unit Usage
CTASC      Corps Theater ADP Service Center
CWT        Customer Wait Time

DA         Department of the Army
DAAS       Defense Automatic Addressing System
DASA-CE    Deputy Assistant Secretary of the Army for Cost and Economics
DC         Disposal Costs
DLA        Defense Logistics Agency
DM         Defined Mission
DMS        Diminishing Manufacturing Sources
DoD        Department of Defense
DOL        Directorate of Logistics
DRB        Data Review Board
DRRS       Defense Readiness Reporting System
DS         Direct Support
DTC        Developmental Test Command
DVD        Direct Vendor Delivery



                                     71
EIC        End Item Code

FC         Facilities Cost
FCS        Future Combat Systems
FEDLOG     Federal Logistics Record
FF         Failure Factor
FMC        Fully Mission Capable
FOM        Frequency of Maintenance
FR         Fill Rate

HQDA       Headquarters Department of the Army

ICP        Inventory Control Point
IDC        Initial Deployment Cost
ILAP EDA   Integrated Logistics Automation Program Equipment Downtime
           Analyzer

JROC       Joint Requirements Oversight Council

LCCE       Life Cycle Cost Estimate
LCMC       Life Cycle Management Command
LCSM       Life Cycle System Manager
LF         Logistics Footprint
LIDB       Logistics Integrated Database
LIW        Logistics Information Warehouse
LMARS      Logistics Metrics Analysis Reporting System
LMI        Logistics Management Information
LMP        Logistics Modernization Program
LOGSA      Logistics Support Activity
LORA       Level of Repair Analysis
LRT        Logistics Response Time
LRU        Line Replaceable Unit

M&S        Modeling and Simulation
MadmDT     Mean Administrative Delay Time
MANPRINT   Manpower and Personnel Integration
MC         Maintenance Costs
MCTBF      Mean Calendar Time Between Failures
MD         Mission Duration
MHE        Materiel Handling Equipment
MLDT       Mean Logistics Delay Time
MOA        Memorandum of Agreement
MOADT      Mean Outside Assistance Delay Time
MOU        Memorandum of Understanding
MR         Mission Reliability



                                   72
MRDT     Mean Restoral Delay Time
MRO      Materiel Release Order
MSRT     Mean System Restoral Time
MTBF     Mean Time Between Failures
MTBOMF   Mean Time Between Operational Mission Failures
MTD      Maintenance Task Distribution
MTOE     Modified Table of Organization and Equipment
MTTOBO   Mean Time to Obtain Back Orders
MTTR     Mean Time to Repair

NEOF     No Evidence of Failure
NG       National Guard
NMC      Not Mission Capable
NMCM     Not Mission Capable Maintenance
NMCS     Not Mission Capable Supply
NSN      National Stock Number

O&S      Operating and Support
OC       Operating Costs
OLT      Order Lead Time
OMA      Operations and Maintenance, Army
OP       Operational Performance
OR       Operational Readiness
ORR      Operational Readiness Rate
OSMIS    Operating and Support Management Information System
OST      Order and Ship Time
OTC      Operational Test Command

PBA      Performance-Based Agreement
PBL      Performance-Based Logistics
PEO      Program Executive Office
PFSA     Post Fielding Support Analyzer
PHS&T    Packaging, Handling, Shipping, and Transportation
PM       Program Manager
PMC      Partial Mission Capable
PMCM     Partial Mission Capable Maintenance
PMCS     Partial Mission Capable Supply
PMCS     Preventive Maintenance Checks and Services
POD      Point of Debarkation
POE      Point of Embarkation
POL      Petroleum, Oil, and Lubricants
PSI      Product Support Integrator
PSP      Product Support Provider

RAM      Reliability, Availability, and Maintainability
RBS        Readiness Based Sparing



                                    73
RCT       Repair Cycle Time
RDTE      Research, Development, Test, and Evaluation
RFID      Radio Frequency Identification
ROC       Recurring Operating Costs
ROND      Requisition Order Number Date
RST       Retrograde Ship Time
RWT       Requisition Wait Time

SALE      Single Army Logistics Enterprise
SA        Stock Availability
SA        System Abort
SAMS      Standard Army Maintenance System
SARSS     Standard Army Retail Supply System
SDC       Sample Data Collection
SDD       System Development and Demonstration
SDS       Standard Depot System
SESAME     Selected Essential-Item Stockage for Availability Method
SIPT      Supportability Integrated Product Team
SOF       Source of Fill
SoS       System of Systems
SPBS-R    Standard Property Book System- Redesign
SSA       Supply Support Activity
STAMIS    Standard Army Management Information System

TAMMS     The Army Maintenance Management System
TAT        Turnaround Time
TK_date   Ticket Date
TLSCM     Total Life Cycle System Manager
TM        Technical Manual
TMD       Total Mission Duration
TMDE      Test Measurement and Diagnostic Equipment
TOP       Total Operational Performance
TRADOC    Training and Doctrine Command
TRM       Training Resource Management

UF        Usage Factor
ULLS      Unit Level Logistics System

VIN       Vehicle Identification Number




                                   74
                      Appendix B – Hierarchy Diagrams
                          (Pictorial View of Metrics)


The following pictures are color-coded to indicate typical primary performance
responsibility in a PBL environment. Colored shadows on blocks represent a
shared secondary responsibility. Metrics and roles/responsibilities must be
tailored to each PBL program via performance based agreements (PBAs).

              Soldier/War-Fighter (WF)

              Product Support Integrator (PSI)

              Program Manager (PM) or other United States Government
              (USG)


                           Operational Availability

[Hierarchy diagram] Ao is decomposed into MTBF, MTTR, and MLDT. In the
diagram, MTBF is shown as Number of Operating Hours over Critical Item
Failures, and MTTR as Maintenance Hours over Maintenance Actions.

Ao: Operational Availability
MTBF: Mean Time Between Failures
MTTR: Mean Time to Repair
MLDT: Mean Logistics Down Time
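
The hierarchy above is consistent with the commonly used steady-state
approximation Ao = MTBF / (MTBF + MTTR + MLDT), i.e., uptime over uptime plus
downtime; a minimal sketch with illustrative values:

```python
def operational_availability(mtbf, mttr, mldt):
    """Steady-state Ao approximation: mean uptime per failure divided by
    mean uptime plus mean downtime (repair time plus logistics delay).
    All arguments must be in the same time unit (e.g., hours)."""
    return mtbf / (mtbf + mttr + mldt)

mtbf = 500.0  # Number of Operating Hours / Critical Item Failures
mttr = 4.0    # Maintenance Hours / Maintenance Actions
mldt = 36.0   # mean logistics down time per failure
print(f"Ao = {operational_availability(mtbf, mttr, mldt):.3f}")  # Ao = 0.926
```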




                                       75
                                Mission Reliability

[Hierarchy diagram] Mission Reliability is decomposed into a System branch
and a Mission Duration branch. The System branch comprises Operational
Performance, Mission, Successful Missions, Attempted Missions, and Mean Time
Between Operational Mission Failures. The Mission Duration branch comprises
Mean Time Between System Aborts, System Aborts, Mean Calendar Time Between
Failures, and Mean Time Between Failures.




                                Cost Per Unit Usage

[Hierarchy diagram] Cost Per Unit Usage is built up from Number of Systems,
Usage Factors, Operating Costs, Maintenance Costs*, Recurring Operating
Costs, Initial Deployment Costs, Disposal Costs, and Non-Logistics Costs.
Maintenance Costs include Frequency of Maintenance and Spare and Repair
Parts Cost. Costs are influenced by Ao, Reliability, Maintainability, and
Supportability.




                                                 76
                                    Logistics Footprint

   Design:
      TMD
      Spiral Development
      Log M & S
      Open Architecture
      Physical Dimensions
      Power Requirements
      Commonality of Components
      Single Fuel

   External Factors:
      Density
      DMS
      Facilities *
      Facilities Set Up Time *
      Transportation
      MHE

   Reliability & Maintainability:
      Failure Factor
      No Evidence of Failure Rate
      Maintenance Ratio

   Personnel:
      Number of Maintainers
      Number of Operators
      Technical Manuals
      Training
      Transportation
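One of the Reliability & Maintainability drivers listed above, Maintenance Ratio, is commonly computed as maintenance man-hours per system operating hour. A minimal sketch, assuming that conventional definition (the function name and sample numbers are illustrative, not taken from this guide):

```python
# Hedged sketch: Maintenance Ratio (MR) under its common definition of
# total maintenance man-hours divided by total operating hours.

def maintenance_ratio(maintenance_man_hours: float, operating_hours: float) -> float:
    """MR = total maintenance man-hours / total system operating hours."""
    if operating_hours <= 0:
        raise ValueError("operating_hours must be positive")
    return maintenance_man_hours / operating_hours

# Example: 120 maintenance man-hours accrued over 400 operating hours
print(maintenance_ratio(120.0, 400.0))  # 0.3
```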




                                 Logistics Response Time

   Supply Driven Sub-Elements:
      Stock Availability
      Order and Ship Time (OST)
      Order Lead Time
      Time to Identify a New Supplier
      Not Mission Capable Supply (NMCS)
      Float
      Utilization Rate
      Controlled Substitution Rate
      Backorders: Duration Time and Mean Time to Obtain Rate

   Maintenance Driven Sub-Elements:
      Not Mission Capable Maintenance (NMCM)
      Turn Around Time **
      Retrograde Time **
      Maintenance Task Distribution

   Customer Wait Time:
      Repair Cycle Time *
      Requisition Wait Time
      Fill Rate

   * Level of maintenance dependent
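The Customer Wait Time elements above can be related in a simple way: orders filled from stock experience the normal requisition wait, while unfilled orders wait out a longer backorder period. The fill-rate-weighted average below is an illustrative assumption for how these sub-elements might combine, not a formula prescribed by this guide.

```python
# Hedged sketch: rolling fill rate and wait-time sub-elements up into an
# average Customer Wait Time (CWT). The weighting scheme is illustrative.

def customer_wait_time(fill_rate: float,
                       requisition_wait_time: float,
                       backorder_wait_time: float) -> float:
    """Average wait: filled orders wait the normal requisition time;
    unfilled orders wait the (longer) backorder time."""
    return (fill_rate * requisition_wait_time
            + (1.0 - fill_rate) * backorder_wait_time)

# Example: 85% fill rate, 4-day requisition wait, 30-day backorder wait
cwt = customer_wait_time(0.85, 4.0, 30.0)
print(round(cwt, 2))  # 7.9
```

The example makes the leverage visible: even a modest backorder rate dominates average wait time when backorder duration is long, which is why fill rate and backorder duration appear together as sub-elements.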
                            Appendix C - References

Joint Requirements Oversight Council Memorandum: Key Performance
Parameter Study Recommendations and Implementation, August 17, 2006.

Joint Capabilities Integration and Development System (JCIDS), CJCSI 3170.01C
(Instruction) and CJCSM 3170.01 (Manual), Enclosure B, TBP.

DOD Guide for Achieving Reliability, Availability, and Maintainability,
August 3, 2005.

USD (AT&L) Policy Memorandum: Performance Based Logistics: Purchasing
Using Performance Based Criteria, August 16, 2004.

USD (AT&L) Policy Memorandum: Total Life Cycle Systems Management
(TLCSM) Metrics, November 22, 2005.

DA Pamphlet 700-56: Logistics Supportability Planning in Army Acquisition,
December 5, 2005.



