Pacific Consulting Group

                September 4, 2007
By law, the Internal Revenue Service must measure customer satisfaction with each of its
interactions with taxpayers. This has led to a proliferation of surveys and reports for every office
at every level and for every major function the IRS performs. These surveys and reports had
consumed considerable financial and staff resources without noticeably improving either customer
satisfaction or operational efficiency. As a result, the IRS re-examined its customer satisfaction
measurement and improvement approach. Using evidence-based management, it has adopted an
alternative way to capture and use customer measures to change its processes, products and
communications with taxpayers. With the new approach, the agency has saved about $2M/year
in survey costs while producing impressive customer and efficiency gains.

The Case Against Using Surveys for Site-Level Customer Satisfaction Measurement
Figure 1 shows that in an IRS service environment, service performance at the site level has a
negligible impact on satisfaction scores compared to case and demographic factors.¹ This means that
there will be site differences in survey scores, but they will not be due to service performance. In
addition, language in RRA 98 (the IRS Restructuring and Reform Act of 1998) permits and even
encourages alternative “meaningful” ways of reporting customer satisfaction information.
Subsequently, IRS Counsel determined that, for balanced measures purposes, the Small Business and
Self-Employed Division (SB/SE, one of the IRS’s four major operating divisions) could abandon
expensive territory/site-level survey measurement and substitute operational measures for important
customer service attributes (like timeliness), provided it retained the corporate-level survey measures.

  In light of the evidence, why do many organizations persist in investing substantial resources in site-level customer
satisfaction measurement? Part of the answer is that market research delivers numbers, even if they are based on
subjective opinions; everyone assumes that numbers are objective and valid. Furthermore, the balanced scorecard
methodology was developed by accountants and engineers not familiar with market research data, so they just
assumed survey numbers were as valid and accurate as financial and operational measures. Moreover, upper level
managers like the feeling of control and accountability that the numbers seem to provide and are reluctant to part
with them, even if there are problems (and subordinate managers are reluctant to complain too loudly). Finally, there
are business incentives for consultants who develop the balanced measures and market researchers who collect and
report the data to perpetuate the system. More numbers mean more work for consultants.
         Figure 1: Rank Order of Factors Driving Customer Satisfaction Scores*

           Exam
             1. Disposition of Exam (Agree, No tax change, Disagree)
             2. Cycle Time
             3. Taxpayer Representation (Self, Unpaid professional, Paid professional)
             4. Type of Exam (Office, Correspondence, …)
             5. Type of Taxpayer (< $25K, > $25K, SB/SE)
             6. Territory (approximately 160)

           Collection
             1. Taxpayer Representation (Self, Paid professional)
             2. Type of Lien (Regular, None)
             3. Length of Process
             4. Closure (Uncollectible, Full pay, Installment agreement, TP delinquency
                investigation, OIC, Other)
             5. Territory (approximately 160)

             * Based on analysis of over 26,000 Exam and 20,000 Collection cases for CY 2001. Factors are
             listed in order of their impact on the overall satisfaction ratings for Exam and Collection.
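The kind of driver analysis behind Figure 1 asks how much of the variance in satisfaction scores each factor explains. The toy sketch below illustrates the idea with entirely synthetic data (the effect sizes, site count, and scores are invented for illustration, not taken from the PCG study): when case outcome dominates and site effects are negligible, a simple between-group variance decomposition (eta-squared) recovers that ranking.

```python
import random

random.seed(0)

# Synthetic satisfaction ratings on a 1-5-style scale: driven mostly by
# case outcome, with only a tiny site effect. All numbers are invented
# to mirror the qualitative finding, not PCG's actual estimates.
SITES = 20
OUTCOME_EFFECT = {"agree": 1.0, "no_change": 0.3, "disagree": -1.2}
site_effect = {s: random.gauss(0, 0.05) for s in range(SITES)}  # negligible

cases = []
for _ in range(5000):
    outcome = random.choice(list(OUTCOME_EFFECT))
    site = random.randrange(SITES)
    score = 3.5 + OUTCOME_EFFECT[outcome] + site_effect[site] + random.gauss(0, 0.5)
    cases.append((outcome, site, score))

def variance_explained(group_key):
    """Between-group share of total variance (eta-squared) for one factor."""
    scores = [c[2] for c in cases]
    mean = sum(scores) / len(scores)
    total_ss = sum((s - mean) ** 2 for s in scores)
    groups = {}
    for c in cases:
        groups.setdefault(group_key(c), []).append(c[2])
    between_ss = sum(len(g) * (sum(g) / len(g) - mean) ** 2 for g in groups.values())
    return between_ss / total_ss

print(f"variance explained by case outcome: {variance_explained(lambda c: c[0]):.2f}")
print(f"variance explained by site:         {variance_explained(lambda c: c[1]):.2f}")
```

Run on this synthetic data, case outcome explains a large share of the score variance while site explains almost none, which is exactly the pattern that makes site-level score comparisons uninformative about service performance.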

Table 1 compares some commonly held beliefs about site level customer satisfaction measurement
with the evidence from both the IRS experience and elsewhere. The overwhelming conclusion from
this table is that site-level customer surveys are a bad idea, even at zero cost. The fact that they are
expensive and serve to focus attention on measurement vs. action compounds the problem.
Table 1. Beliefs vs. Evidence on Site-Level Customer Satisfaction Measurement

Belief: Differences in customer satisfaction ratings are attributable to customer service
performance at the site level.
Evidence:
• PCG research shows that in most IRS interactions with customers, differences in ratings are
  mainly driven by variables like case outcome and customer characteristics, not by differences
  in service performance at the site level.

Belief: RRA 98 requires customer satisfaction measurement at the site level.
Evidence:
• RRA 98 does require customer satisfaction measurement, but provides significant leeway to
  organizational units in how they achieve that goal. Site-level surveying is not required.

Belief: Managers and employees control the “levers” of customer satisfaction at the site level.
Evidence:
• Particularly in a government environment, where equality of treatment across units and
  regulations govern the interaction, managers’ and employees’ hands are often tied with regard
  to what they can do to please customers.
• Customers themselves “co-produce” their service outcome with the assistance of IRS
  employees. Particularly for the IRS, and especially for TAS, customer confusion, mistakes,
  misdeeds, delays, and so forth contribute to their own problems and frustrations. Because the
  service outcome is co-produced, in many cases the best way for the IRS to get things resolved
  is to inform, motivate, and educate customers to do their part of the job correctly.

Belief: Customer surveys are the best way to measure all customer service attributes.
Evidence:
• Customers are notoriously poor, and in many cases biased, judges of attributes like time,
  fairness, and quality.
• Internal operational, or “customer-facing,” measures are more accurate measures of attributes
  like elapsed time and service quality.

Belief: Improvement occurs by slowly but surely making incremental changes in existing
processes.
Evidence:
• Improvement occurs by revisiting the existing processes from the customer perspective and
  using those customer insights to change processes, products/services, and communications.
• Learning by doing (vs. measuring) accelerates improvements.

Belief: Comparative measures of customer satisfaction ratings are the most effective way to
influence employee behavior at the site level.
Evidence:
• Recent organizational research suggests that comparing sites on customer service scores can
  backfire. Those rating high tend to attribute their success to their own superiority (and become
  complacent), and those with low scores tend to attribute them to factors beyond their control
  (and stiff-arm the results).

Belief: Though there are problems with holding lower-level managers accountable for customer
satisfaction scores, it is the best system available for ensuring responsiveness to customer
concerns.
Evidence:
• Experience in the IRS and elsewhere has shown there is a better way. Below we describe a
  superior approach.

The Alternative Approach
Figure 2 illustrates the customer-driven innovation approach finally adopted within SB/SE. The
results have been dramatic, with several important innovations delivered over the past
three years. Perhaps the most striking improvements came from the Adjustments program (the
project started in SB/SE and was later taken up by the Wage and Investment Division in a subsequent
reorganization). Using the develop-pilot-disseminate system, Adjustments achieved a 40% reduction
in time to close a case (a key customer and operational measure) and a 25% increase in efficiency.
Because the improvement ideas were developed by Adjustments employees, they dealt directly with
Customer Service Rep frustrations and had a positive impact on employee satisfaction. No changes in
law or even the Internal Revenue Manual were needed to achieve these breakthroughs, only changes
in processes and customer communications.


                      Figure 2. Recommended Approach to Action

                  •   Innovation teams use market research to work issues and develop
                      pilot initiatives
                  •   Initiatives piloted at field level
                  •   Proven high-impact initiatives disseminated: fixes plus a
                      dissemination plan rolled out across areas and territories
                  •   Field responsible for actions (not scores)
                  •   Improvement occurs as field implements proven fixes
                  •   Less tracking, more diagnostics in support of changes
                  •   Efforts focused on key issues where SB/SE can succeed
                        - Synchronized with business strategy
                        - Practical, doable
                        - Vital few vs. equal attention everywhere


Figures 3 and 4, taken from a presentation Tom Cooper and Ellen Bell (now retired) made to the
Council for Excellence in Government, show the dramatic pilot-control differences from the
Adjustments project.

                                  Fig 3. Adjustments Case Closure Improvements

                      (Chart: average days to close a case, ACE-IT pilot vs. control,
                      March through July. Philadelphia campus results; individual
                      taxpayer cases.)
                             Fig 4. Adjustments Productivity Increases

                      (Chart: cases closed per staff hour, ACE-IT vs. non-ACE-IT cases,
                      for two case types. Ogden campus results; business taxpayer cases.)
