Support Centre Reporting Framework
There is no single best-practice method for Support Centre report writing, no prescribed content
requirement and no standard set of decisions to make based on the information collected. The actual
report format and content is highly dependent on the individual organisation's objectives,
industry-specific requirements, customer requirements and financial constraints. Support Centres
need to demonstrate value to the business and report the metrics unique to that business.

This being the case, the purpose of this document is to provide guidance and a framework for
Support Centre reporting that highlights the key questions by which to choose your report content,
together with a checklist of the critical and common key performance indicators used within the
Support Centre industry.

Management information is often the only method available to adequately justify additional
resources and expenditure. Reporting is often subjective and needs to be focused on business
improvement. It is important that results are not just filed away but used as an essential business
tool to justify, develop and continually improve the service.

The following information has been compiled from various industry sources such as Information
Technology Infrastructure Library (ITIL), Knowledge Centred Support (KCS), industry research
and HDAA member feedback and experience.

Reporting Scope
This is a whirlwind tour of the ins and outs of reporting fundamentals. It is no more complicated
than staying focused on the end goal and understanding that reports change, evolve and can become
outdated. You may also find a better way of gathering content, writing and presenting the
knowledge.

Management Reporting is a key factor in all the ITIL processes. The Service Desk is ideally placed
to produce extensive management information on the quality of services provided to customers. A
word of warning, though: Support Centres can be data factories, with just about every tool able to
record, store and report on anything.

If sound business decisions are to be made regarding the strategic and day-to-day provision of
services, then high-quality Management Reports are an essential source of information on which to
base them. Used appropriately, this information provides the intelligence needed to analyse
performance, develop improvement practices and remove practices that don't support objectives.
The best solution is to combine data from various systems to create meaningful information about
your customer's experience, your business drivers and your business performance.
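As a minimal sketch only – the reference numbers, field names and figures below are hypothetical and not taken from this framework – combining records from a telephony system and a ticketing system by reference number might look something like this:

    # Hypothetical sketch: join ACD (telephony) records with ticket records by
    # reference number to build one combined view of the customer experience.
    acd_calls = [
        {"ref": "INC001", "wait_sec": 22, "talk_sec": 310},
        {"ref": "INC002", "wait_sec": 48, "talk_sec": 180},
    ]
    tickets = [
        {"ref": "INC001", "priority": 2, "resolved_first_contact": True},
        {"ref": "INC002", "priority": 1, "resolved_first_contact": False},
    ]

    tickets_by_ref = {t["ref"]: t for t in tickets}
    combined = [{**call, **tickets_by_ref.get(call["ref"], {})} for call in acd_calls]

    for row in combined:
        print(row)

In practice the same idea extends to survey, knowledge base and financial data, which is where the meaningful information about customer experience and business performance comes from.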

It is important that reporting does not remain stagnant. Reports should be improved and/or
developed as the Support Centre operation and business needs change. Management reports must be
designed to determine performance against the documented commitment outlined in the SLA. These
reports should be produced, preferably in a graphical format, and made available to customers and
support staff alike.
Key report characteristics:

• Linked to Strategy – the focus is on measuring how well the Service Desk is enabling both IT
  and the supported organisation to achieve their objectives. This focus is derived from the
  Mission Statements and SLAs.
• Quantitative – value judgements are subjective in nature, so statistical data is required to
  establish how well the required results are being met.
• Accessible – focus on the "must know" versus the "nice to know". Everything can be measured,
  but at a certain point capturing a measure involves so much time, money and IT technology that
  the cost becomes prohibitive; when this point is reached, it is advisable to move on.
• Easily understood – each measurement must elucidate value, meaning and direction.
• Counter-balanced – stay focused on the reason for the measuring, always linking the data to
  the objective. It can be easy to get caught up in the detail.
• Relevant – measurements must accurately depict the person, process, function or objective you
  are attempting to evaluate.
• Common Definition – measures with no definition at all are an invitation for endless debate
  and delay.
The key question to ask about the report being considered is,

                         “What do we want to understand and/or act on?”

If you are reviewing the format and content of any current reports you produce then ask the
following questions.

1) What new insights did we gain?
2) How does the new data compare with what we already know?
3) How will we act on this information?

Report Distribution
You need to consider the audience, purpose and frequency. Appropriate action can only be taken
when metrics are reported to the right audience, in the right manner and at the right time. When
considering style and format for your report, know your audience, use terms the audience can
understand, show how you calculated the results and make the results visually stimulating. You may
want to display the report in a common area making it available to everyone in the Support Centre.

Audience: Stakeholders; other internal support groups/departments; Business Unit Managers;
production, marketing etc.
Purpose: How is the Support Centre helping us meet our business goals?
Frequency: Daily, weekly, monthly.

Audience: Management (IT and Executives).
Purpose: Periodic reporting on KPIs focussed on meeting specified business goals and objectives.
Often includes financial metrics as they relate to the business.
Frequency: Monthly, quarterly, annual summary.

Audience: Support Staff.
Purpose: Immediate visibility of individual and team progress toward goals and objectives,
enabling Support Centre management and Support Staff self-management.
Frequency: Throughout the day via wallboards or a like mechanism; Team Leader feedback;
weekly/fortnightly Support Centre meetings; monthly tracking towards business objectives.

Audience: Customers.
Purpose: Is the Support Centre meeting SLA commitments?
Frequency: Periodic (monthly/quarterly); particularly good if it is on a web-support page that
automatically updates.


Key Performance Indicators
Performance drivers are processes and behaviours expressed as measures. Remember the old adage
"garbage in, garbage out": set unrealistic or inaccurate KPIs and that is exactly what you will
measure, and the end result will lead you on a merry chase everywhere except to your business
objective.

All reports are designed to indicate the success or gaps in meeting service levels and assist in
focusing our attention on correcting or improving specific areas within the Support Centre
operations. It is important to not only identify which KPIs are being used but for what purpose they
are being used. There are two types of KPIs:

Leading indicators are the drivers or enablers of desired results.

• Key Performance Indicator (KPI) metrics which measure how well the business objectives/goals
  will be met.

Lagging indicators are the results.

• They are slow to change, and only repeated improvement will affect them.

Learn to distinguish between operational metrics, which measure activities performed in the course
of doing business (talk time, time at level one, time at level two, first call resolution, time to
recover, time to restore etc.), and KPIs, which measure progress toward reaching goals.

Normally, our attention is drawn to areas of need by one or two metrics that are significantly
higher or lower than the agreed objective. Don't be misled: it is important not to take metrics
out of context. A key facet is to understand how metrics inter-relate. They are mutually
dependent, tell a whole story and lead us further into discovering what our operation is doing and
where it is headed. If you use generic measurements, you will probably get average results;
specific, appropriate measurement will yield the desired results. Report focus should include both
quantitative and qualitative metrics.
Report Types
Daily Report
Daily reports are designed to be proactive and are critical in preventing smaller incidents from
turning into larger or more significant issues. They are about being able to leverage strengths
and take corrective action where necessary.

At a base level, daily reviews of individual Incident and Problem status against service levels
should report on:

• areas requiring escalation by group
• possible service breaches
• all outstanding Incidents

In addition, there are the key metrics that enable us to make resourcing decisions on staff
numbers, the allocation of call types to specific resources, scheduling and the like, and to
incorporate quality assurance practices. Metrics can also be used as a way of benchmarking against
Support Centre industry averages and standards. The most common metrics are listed below, followed
by a simple calculation sketch:

Average Speed to Answer (ASA) – A leading indicator used to evaluate and adjust staffing and
scheduling levels. It has a direct impact on customer satisfaction and, if it deteriorates, will
most likely lead to abandoned calls.

Abandon before Answer (ABA) – A leading indicator. You can't always control why a customer
abandons, but ABA is also pegged to staffing and scheduling levels. It may also reflect responses
to messages on the IVR (e.g. an outage message asking customers not to log a call).

Average Handle Time (AHT) – A leading indicator that can be used to understand the complexity of
requests and determine staffing and training needs.

First Contact Resolution (FCR) – A leading indicator used to measure analyst knowledge level and
the relative complexity of incidents.

Mean Time to Resolution (MTTR) – A leading indicator of how quickly incidents are resolved end to
end.

Average Hold Time – A leading indicator used to evaluate staffing, scheduling and the quality of
analyst call handling.

Number of Incidents in Total and by Analyst – A workload indicator used for capacity and resource
planning.

Number of Incidents by Priority, Severity and Type – Enables assessment of trends and
identification of root causes.

Availability – Used to measure productivity levels of the Support Centre and its staff.

Cost per Incident – A measure of the Support Centre's cost-effectiveness.
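As an illustration only (the field names and sample values below are made up, not drawn from any particular tool), several of these daily metrics can be derived directly from call records:

    # Illustrative only: compute a few common daily Support Centre metrics
    # from simple call records. Field names and values are hypothetical.
    calls = [
        {"answered": True,  "wait_sec": 18, "handle_sec": 240, "fcr": True},
        {"answered": True,  "wait_sec": 35, "handle_sec": 420, "fcr": False},
        {"answered": False, "wait_sec": 95, "handle_sec": 0,   "fcr": False},
    ]

    answered = [c for c in calls if c["answered"]]

    asa = sum(c["wait_sec"] for c in answered) / len(answered)     # Average Speed to Answer (seconds)
    aba = (len(calls) - len(answered)) / len(calls) * 100          # Abandon before Answer (%)
    aht = sum(c["handle_sec"] for c in answered) / len(answered)   # Average Handle Time (seconds)
    fcr = sum(c["fcr"] for c in answered) / len(answered) * 100    # First Contact Resolution (%)

    print(f"ASA {asa:.1f}s  ABA {aba:.1f}%  AHT {aht:.0f}s  FCR {fcr:.1f}%")
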
Weekly Report
Weekly reports will show leading and lagging indicators. In particular, the weekly reports are a
great source of workload patterns incorporating the daily data.

Weekly management reviews should highlight:

• service availability
• major Incident areas that:
  o occur the most often
  o staff spend the most time working on
  o take the longest time to turn around to the customer
• related Incidents that require Problem records to be generated
• knowledge management processing (number of incidents resolved using knowledge articles, average
  time to resolution using knowledge articles, contributions, quality, usage, reviews and the like)
• known Errors and required Changes
• service breaches
• customer satisfaction
• trends and major services affecting the business
• staff workloads
• qualitative feedback on team performance – results from monitoring Support Centre accuracy,
  completeness and customer perception of the handling of incidents, calls, emails etc.
Another source of reporting content for Incident management can be gained from the HDAA
Incident Management Documentation.

When writing the analysis of the above information, answer the following questions:

1) What are your initial observations regarding the data?
2) Based on your observations, what is your plan of action?
3) What metrics and KPIs will you investigate first?
4) Is there any obvious correlation between particular metrics and KPIs?
5) What are the possible trends, problems and opportunities for improvement? (ask this question
   against people, process and technology aspects)
6) What are the key actions to strengthen or correct our situation against our objectives?
Monthly Report
It is important to note that the monthly report falls into the Lagging Indicator category. Only
twelve (12) reports per year are produced, so it is critical that they focus on and report how well
the Support Centre enabled the business as a whole to meet its objectives.
Key Sections
Document Control information is vital to ensure there is no confusion about when, who, where
and what the document is about.

• Document Name
• Version Number
• Date Last Edited
• Produced by
• Author
• Business Manager

Acronyms / Definitions & Descriptions – only describe those that are used within the report. This
is important to provide clarity as acronyms can often be misunderstood, even within the IT
department.

Executive Summary – provides a condensed version of the analysis made using the detailed
information in the Daily and Weekly reports. This type of report will typically be written for
someone who does not have the time to review the more detailed reports. The components of an
Executive Summary include:

• Purpose and Scope – what data was measured and why it was measured
• Methods – how the data was collected (manual, automated, which systems, a combination, etc.)
• Results – the actual data retrieved, with terms and formulas defined for your organisation
• Conclusion – a brief summary of the findings and what they mean to the Support Centre
• Recommendations – your opinion on the best course of action
• Additional Information – any pertinent information not covered in the previous sections,
  appendices, supporting documentation
Monthly reports should focus on:

• service availability
• overall performance, achievements and trend analyses
• individual service target achievements
• customer perceptions and levels of satisfaction
• customer training and education needs
• support staff training and education needs
• support staff and third-party performance
• application and technology performance
• knowledge creation, usage and value
• content of review and reporting matrix
• cost of service provision/failure
Proactive service reports
Reporting, whether online or in textual form, is also essential for proactive support at the
Service Desk. Consider the following reports to aid this:

• planned Changes for the following week
• major Incidents/Problems/Changes from the previous week, along with any work-arounds, fixes etc.
• 'unsatisfied' Customer Incidents from previous weeks
• previous weeks' poorly performing infrastructure items (e.g. server, network, application)

Conclusion
HDAA is a membership body that exists for you to take advantage of the knowledge and experience of
your fellow members. Don't hesitate to send us a member request, or to network at our workshops
and events, in regard to sharing report types, formats and content.
Sources of Information
• Information Technology Infrastructure Library (ITIL)
• HDI Certification Standards
• Knowledge Centred Support (KCS)
• HDAA Staff and Members
XCon IT Support Centre
Monthly Report – September 2008
Document Control
Version            1.0
Date Last Edited   xxxx
Produced by        IT Support Centre
Author             Xstatic Talent
Business Manager   Xcellent Entrepreneur
Table of Contents

TERMS & DEFINITIONS
EXECUTIVE SUMMARY
METHOD
RESULTS
  Service Availability
    Incident Impact
    Service Request (SR) Impact
  Overall Performance, Achievements and Trend Analyses
    Customer Perceptions and Satisfaction Levels
    Individual Service Target Achievements
    Other Key Metrics
    Service Target Additions and Modifications
  Knowledge Creation, Usage and Value
  XCon Customer Training and Education Needs
  XSC Staff Training and Education Needs
  XSC Staff Performance
    Workforce Planning
APPLICATION AND OTHER TECHNOLOGY
CONCLUSION




Terms & Definitions
Term           Definition
XSC            XCon Support Centre
SLA            Service Level Agreement containing the documented commitment of
               services, expected standards and key performance indicators between
               the customer and IT Support.
ACD            Automatic Call Distribution system that manages the telephone
               communication channel of the customer base and reports on all
               telephony interactions.
ICIMS          I’m Clean Incident Management System – the goal of the IM process is
               to restore IT services as quickly as possible with minimal impact and
               utilise the ICIMS technology to manage this workflow.
Incident       Any event that causes or may cause an interruption to, or reduction in the
               quality of, IT services.
KCS            Knowledge Centred Support – additional codes used are:
               C = creation; R = re-use; Q = quality; P = published (includes both the
               internal XSC knowledge base and the customer-based XPlorer knowledge base).
SR             Service Request - A request by customers to provide guidance, advice
               and documentation on the applications.
XAmple         Customer Relationship Management system – newly developed and
               awaiting release during mid December 2008.
XCollect       Debt Collection management system
XPand          New Customer e-learning system accessed through XPlorer. Go live
               date was 14/9.
XPlorer        New Customer web-portal including new knowledge base and direct
               access to XSC via self-service (searching solutions, logging incidents,
               checking status of open incidents and the like) and new e-learning
               system. Go Live date was 3/9
XTenuating     XSC internal Knowledge Management Process and System – only 3
               months into implementation.
XTrapolate     XCon Management Reporting System
XTreme         Financial management system
Executive Summary
The purpose of this report is to provide a monthly synopsis of XCon Support Centre (XSC)
operations. The content focuses on the month of September 2008 and includes an analysis of the
key performance indicators as outlined in the Service Level Agreement (SLA).

Critical to our customer commitment is service availability, ensuring that our customers remain at
the highest possible productivity level to fulfil XCon's business objectives in Debt Management.

To meet this commitment, our manner of operation focuses on ensuring rapid restoration of services
to our customers in the event of an Incident, timely response to Service Requests, and improving
the quality of the interaction with the customer in support of their specific business needs.

Method
The data is collected from our Support Centre tools (the ACD and the I'm Clean Incident Management
System) together with input provided by other IT teams involved in Support Centre activities.

Results
Service Availability
Incident Impact
During September, we experienced lower incident levels as a result of two major actions:

• The Natty Network Team installed a new server with larger capacity to handle the increased
  workload on the XCollect system used by XCon. This has increased the speed at which these
  transactions occur.
• The Acrimonious Application Team resolved the root cause of the regular failure experienced in
  the Xtreme program's calculation module, thereby eliminating the recurrence of incidents of this
  type.

We expect to see incidents relating to these two systems continue to fall over the next three months.

A small percentage of incidents were due to the new XPlorer and XPand applications. Our
post-implementation review (PIR) confirmed this as a successful result. Please refer to the
relevant PIR report.
[Chart: No. Incidents by Product, 3rd Quarter 2008 – monthly incident counts (Jul, Aug, Sep) for
Xtreme, XCollect, XTrapolate, XPlorer and XPand]
Service Request (SR) Impact
• During September, we experienced lower service request levels even with the implementation of
  the XPlorer and XPand applications.
• We believe that the Acrimonious Application Team resolving the root cause of the regular failure
  in the Xtreme program's calculation module eliminated some customer confusion around setting
  formulas, linking spreadsheets and transmitting data to the XCollect system.
• There is not much movement in the Service Request levels for XCollect, where the majority of
  service requests centred on assistance with running various reports. The increase in XCon
  workload will not affect this unduly, as the list of Reports remains unchanged.
[Chart: Service Requests by Product, 3rd Quarter 2008 – monthly service request counts (Jul, Aug,
Sep) for XTreme, XCollect, XTrapolate, XPlorer and XPand]
Overall Performance, Achievements and Trend Analyses
Our performance against the Priority Level response targets was lower in the first part of the
third quarter due to outages and the problems with XCollect server capacity and the Xtreme
calculation module. Our incident volumes increased, and recovery from the backlog took three
weeks. During September our Response Time stabilised once those issues were corrected.
[Chart: No. Incidents vs. Service Requests, 3rd Quarter 2008 – monthly totals of Incidents and
Service Requests for Jul, Aug and Sep]
[Chart: SLA Priority Level Response Time, YTD 2008 (Apr–Sep) – monthly percentage of contacts
responded to within target for Priority 1 (80% responded to within 8 minutes), Priority 2 (80%
within 15 minutes) and Priority 3 (80% within 3 hours)]



Customer Perceptions and Satisfaction Levels
Customer satisfaction levels have risen markedly since the resolution of the Xtreme and XCollect
system issues. This is a direct result of having greater stability in these systems. In addition,
customers report that XSC staff are improving in their ability to provide more accurate information
at a faster rate and are interacting with them in a more professional, customer-focused fashion.
As a result, XSC is gaining more credibility.

Whereas in previous months we conducted our surveys via telephone and email, we have now integrated
ICIMS with XPlorer and are able to maintain an ongoing survey mechanism that is available for
viewing on XPlorer.
[Chart: Customer Satisfaction, YTD 2008 (Apr–Sep) – monthly distribution of survey ratings
(Miserable, Unhappy, Comfortable, Pleased, Delighted) against the SLA target of maintaining a
minimum of 80% Customer Satisfaction within the Comfort Zone or higher]




Individual Service Target Achievements
Below is a snapshot view of key XSC metrics during September 2008.

Each metric below is shown with its description, the September result, its Red/Amber/Green (RAG)
thresholds and a brief comment.

SLA XSC – Service Level Agreement. Result: Met.
  Green: Service Levels met, stable or increasing. Amber: Service Levels met but trending down. Red: Service Levels not met for the month.
  Comment: SLAs to be established for the XAmple application implementation and support.

AWT – Average Wait Time. Result: 37.6 seconds.
  Green: <30 seconds. Amber: >30 seconds and <45 seconds. Red: >45 seconds.
  Comment: Linked to Customer Perception.

ABA – Abandon Before Answer. Result: 6.3%.
  Green: <5%. Amber: >5% and <7%. Red: >7%.
  Comment: Linked to AWT.

ASA – Average Speed to Answer. Result: 78.6%.
  Green: answer >75% in 25 seconds. Amber: answer <75% and >70% in 25 seconds. Red: answer <70% in 25 seconds.
  Comment: Stable.

AHT – Average Handle Time. Result: 5 minutes.
  Green: <5 minutes. Amber: >5 minutes and <10 minutes. Red: >10 minutes.
  Comment: Improved by the XTenuating knowledge base and XSC staff HDAA SCA training.

FCR – First Contact Resolution. Result: 74.6%.
  Green: >70%. Amber: <70% and >65%. Red: <65%.
  Comment: Improved by the XTenuating knowledge base.

FLR – First Level Resolution. Result: 79.3%.
  Green: >85%. Amber: <85% and >75%. Red: <75%.
  Comment: Still some complexity in Xtreme incidents.

ESC L2 – Escalation to Level 2 (15%). Result: 21.7%.
  Green: >85%. Amber: <85% and >75%. Red: <75%.
  Comment: Linked with FLR.

MTTR – Mean Time to Resolution. Result: 7.6 minutes.
  Green: <8 minutes. Amber: >8 minutes and <10 minutes. Red: >10 minutes.
  Comment: Improved by the XTenuating knowledge base and XSC staff HDAA SCA training.

Availability – Productivity Level of Staff. Result: 75.3%.
  Green: 75%. Amber: <75% and >65%. Red: <65%.
  Comment: Industry average looks to 75%, with the remainder being spent on company administration, training, cross-support tasks, breaks and the like.

CPI – Cost Per Incident. Result: $31.05.
  Green: <$28.00. Amber: >$28.00 and <$34.00. Red: >$34.00.
  Comment: Although in Amber, this has reduced from Red due to improvement initiatives and the stabilising of the Xtreme and XCollect systems. Continuing with improvements will see it reduce further.
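As a minimal sketch of how such a RAG rating can be applied mechanically (the function and record names are illustrative, not XSC's actual reporting tooling; the thresholds mirror the ABA row above, where lower is better):

    # Illustrative only: map a metric result to a Red/Amber/Green rating.
    # Thresholds below mirror the Abandon Before Answer (ABA) row; lower is better.
    def rag_rating(value, green_below, amber_below):
        if value < green_below:
            return "Green"
        if value < amber_below:
            return "Amber"
        return "Red"

    september_aba = 6.3                          # September ABA result (%)
    print(rag_rating(september_aba, 5.0, 7.0))   # -> Amber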

Other Key Metrics
KCS – C – Creation of Knowledge Articles. Result: 47%.
  Green: >70%. Amber: <70% and >60%. Red: <60%.
  Comment: This metric is temporarily high until we reach a plateau, creation starts to decrease and re-use begins to increase. It is forecast that we will see minimal creation as the norm, with increases only as new systems are implemented and current ones upgraded.

KCS – R – Re-use of Knowledge Articles. Result: 62.3%.
  Green: >70%. Amber: <70% and >60%. Red: <60%.
  Comment: This is a newly established metric. The figure has risen steadily over the last four weeks.

KCS – Q – Solution Quality Index of Articles. Result: 76.2%.
  Green: >90%. Amber: <90% and >80%. Red: <80%.
  Comment: This is a newly established metric and we are still establishing how to draw and use the data.

KCS – P – Time to Publish (XSC and XPlorer). Result: ?%.
  Green: >90% within 90 minutes. Amber: <90% and >80% within 90 minutes. Red: <80%.
  Comment: This is a newly established metric and we are still establishing how to draw and use the data.

CSS – Customer Self-Service. Result: 34.7%.
  Green: >85% (over 85% of incidents experienced). Amber: <85% and >70%. Red: <70%.
  Comment: This is a newly established metric. We are assessing our data collection method and looking to a new click-stream method.

FTE – Full Time Head Equivalent. Result: 13.5 / 15.
  Green: >13.5. Amber: <13.5 and >10.0. Red: <10.0.
  Comment: A drop from 17% during August, when XSC was experiencing major workload stress. The current level adequately covers operational hours and workload. Calculation: number of heads minus various leave types, training and the like.

Service Target Additions and Modifications
We have agreed to work toward the following new targets; however, we are still working out how best
to draw the data. Analysis will be required to determine whether these targets are realistic in
conjunction with business objectives. Some adjustment may occur.

Time to Publish – 90/90 rule: 90% of what XSC learns/knows from resolving incidents is on XPlorer
within 90 minutes.

Self-Service Use (call deflection, or incidents resolved without assistance or escalation to XSC) –
85/85 rule: at least 85% of the time customers are using self-service first, and at least 85% of
the time they are finding what they need.

Ratio of Known to New Incidents – 30/70 rule: the XSC workload shifts from 70% known/30% new to
30% known/70% new, with XSC analysts spending the majority of their time on new issues. This will
depend on customer utilisation of the XPlorer and XPand systems.
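A minimal sketch of how the 90/90 Time to Publish rule could be evaluated once the data can be drawn (the record structure and values below are assumptions for illustration only):

    # Illustrative only: check the proposed 90/90 Time to Publish rule against
    # hypothetical knowledge-article records.
    articles = [
        {"article": "KB-101", "minutes_to_publish": 45},
        {"article": "KB-102", "minutes_to_publish": 80},
        {"article": "KB-103", "minutes_to_publish": 130},
    ]

    within_90 = sum(1 for a in articles if a["minutes_to_publish"] <= 90)
    pct = within_90 / len(articles) * 100
    print(f"{pct:.0f}% published within 90 minutes; 90/90 target met: {pct >= 90}")

The 85/85 Self-Service Use rule can be checked the same way: the share of contacts that started in self-service, and the share of those that found what they needed, each compared against 85%.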




Knowledge Creation, Usage and Value
The time saved as a result of the XTenuating knowledge initiative is paying dividends. The XSC
team is still new to the XTenuating knowledge process. There has been some headway in the
number of knowledge articles created and re-use of those articles has increased.

This directly translates to shortened resolution times, now averaging five (5) minutes – a saving
of two (2) minutes per incident. This has gained XSC an extra 222 hours per month in capacity. As a
result, the new initiatives for XPlorer and XPand, and the delivery of the new XAmple system to the
customer base by mid December, will receive greater focus, and XSC will be better able to deal with
the identified challenges and observe the critical success factors for succeeding with these
changes.
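The 222-hour figure can be sanity-checked with simple arithmetic; the monthly incident volume below is an assumption for illustration, as the exact volume used in the calculation is not stated here:

    # Illustrative arithmetic only: two minutes saved per incident converted to
    # hours of monthly capacity. The incident volume is an assumed figure.
    minutes_saved_per_incident = 2
    incidents_per_month = 6660                  # assumed volume for illustration
    hours_gained = minutes_saved_per_incident * incidents_per_month / 60
    print(hours_gained)                         # -> 222.0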

Part of this area is covered by the new KPI measures mentioned earlier.
XCon Customer Training and Education Needs
With the implementation last month of the new XPlorer web-based portal, our customers' knowledge
levels in using their particular systems have greatly improved. Our customers are inclined to
search for their own source of information to minimise the risk of exposing themselves to
confidentiality breaches. However, customers are comfortable with contacting the XSC about more
difficult issues.

Their take-up rate of this new service is 34.7% over the whole customer base. Considering this is
only the first month and we have had such a positive response, we expect to see that number
increase to at least 45% by the end of October 2008.

The XSC began its new marketing program this month to enhance customers' understanding of the
services delivered by XSC, including awareness and use of our new XPand e-learning system. The
customers were given a walk-through demonstration within the XSC environment and are excited about
the possibilities this system presents for enhancing their own skill base.

XSC is developing a click-stream based method of reporting on customer take-up and the re-use
and effectiveness of these initiatives.
XSC Staff Training and Education Needs
Three months ago we reworked our XSC training onto a platform that clarifies the core skill set,
quantifies the results expected and facilitates cross-training. This has enabled better capacity,
productivity and resourcing of XSC staff, and provides a useful tool for ongoing Performance
Management. This philosophy ensures XSC staff know what they need to know when they need to know
it, delivered in three (3) phases – Learn It, Know It, Do It.

We have been able to shorten the time to productivity of new XSC staff from six (6) months to
three (3) months due to three key factors critical not only to new staff but to ongoing development:

• the new training platform
• the implementation of the XTenuating knowledge program
• the HDAA HDI Support Centre Analyst course




[Chart: XSC Technical Skill by Product, 3rd Quarter 2008 – staff skill levels for XTreme, XCollect,
XTrapolate, XPlorer and XPand against the minimum required level of expertise]




XSC Staff Performance
Productivity has significantly increased for XSC individuals and the team as a whole since the XCon
CEO, Mr Xpert Con, attended an XSC meet and greet in person. This encouraged a greater connection
to XCon objectives and instilled a measure of pride and a more professional outlook, leading to
better morale.

We have identified two outstanding performers within the XSC team and one individual in the
Acrimonious Application team whom we would like to bring on board. These individuals have received
an Xcellent Entrepreneur Award from the monthly peer review, which rewards and recognises
individuals' contributions to XSC goals and thereby their impact on XCon objectives.

• Xtensive Knowhow – for her consistent sharing of solutions and problem-solving methods.
• Xternal Source – for his unswerving loyalty and inspiring diligence in resolving incidents in
  the shortest time possible.
• Xcessive Cop (AA Team) – for his individual assistance to an XCon staff member in refiguring an
  Xtreme calculation that won that staff member the contract of a lifetime.
Workforce Planning
Employee turnover has reduced and we expect that we will be at full head count productivity by the
end of October 2008 when the new team members conclude their probation period.

The shrinkage factor (time off incident management: sick leave, holidays, training and the like)
during September was an average of 10% – a drop from 17% during August, when XSC was experiencing
major workload stress.
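A minimal sketch of the shrinkage and effective-headcount calculation described above; the rostered hours per head and leave hours are assumptions, chosen so the example reproduces September's reported 10% shrinkage and 13.5 effective FTE:

    # Illustrative only: shrinkage (time off incident management) and effective
    # full-time-equivalent headcount. Hours figures are assumed for illustration.
    heads = 15
    hours_per_head = 160                        # assumed rostered hours per month
    rostered_hours = heads * hours_per_head
    shrinkage_hours = 240                       # assumed sick leave, holidays, training etc.

    shrinkage_pct = shrinkage_hours / rostered_hours * 100
    effective_fte = (rostered_hours - shrinkage_hours) / hours_per_head
    print(f"Shrinkage {shrinkage_pct:.0f}%  Effective FTE {effective_fte:.1f}")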

Application and Other Technology
We are gearing up to meet the mid December release of the new XAmple customer relationship
management application into the XCon environment. During September the XSC placed its
requirements with Natty Network and Acrimonious Applications Teams in regard to the change and
release timelines.

In the past, XSC has not always been regarded as an integral part of the process of delivering new
technology to XCon. It is vital that the lead time enables XSC to be provided with the relevant
customer communication content, user and support training, specific guidelines around initial
implementation and escalation processes, and the relevant documentation and support knowledge
articles.

The XTrapolate management reporting system is due to be upgraded during October 2008.
Included in this upgrade will be a direct link to XSC reporting tools. This will allow reporting staff
to access key statistics as needed by the management team across IT and in key Executive positions.
XSC is also adding a reporting section to the XPlorer site so customers will have up-to-date
information on XSC SLA objectives, results and explanations. This will be included in XSC’s
marketing campaign.

Conclusion
September has seen our service levels repaired and back on track. Productivity is improving with
the new initiatives showing positive results. Both customer and team morale is higher and XSC is
actively contributing to XCon business objectives.

I have no doubt that XCon shareholders will find us in XCeptional form.

Detailed data on a weekly and daily basis are available if required. Contact the XSC Manager on
XTalent@XSC.com or on 555-719-823





								