Measuring Customer Satisfaction in Project-Based Software Organizations
Murali Chemuturi

Introduction

Project-based organizations place a lot of emphasis on customer satisfaction – rightly so, as
customer satisfaction is the key input for their internal process improvement. It is usually
measured through a questionnaire – the Customer Satisfaction Survey. This method, however,
suffers from the drawback that customers can be emotionally influenced while filling in these
questionnaires. Naomi Karten, who has done considerable work and is an expert on the subject
of customer satisfaction (www.nkarten.com), says in her book “Psychology of Customer Satisfaction”,
“People tend to rate service higher when delivered by people they like than by people they
don’t like”. She goes on to describe what one can do to be “likable”. More often than not, the
Customer Satisfaction Survey rating received from the customer is “perceived” feedback rather than
impartial feedback. This is not to say that customer-filled survey forms yield no value, but to
recognize that the rating can be emotional. It also needs to be recognized that the customer is not
one person but an organization – that is, multiple people. Even so, only one person represents
them and fills out the survey. We would wish that he consulted all concerned before filling it out,
but he may not.

This gives rise to the need to be able to compute a CSR (Customer Satisfaction Rating) from
internal data – a rating that is free from bias and gives a realistic metric.

Why should we measure Customer Satisfaction with internal data?

Consider the following scenarios, assuming that all three projects performed similarly –

First scenario – the customer is a very pragmatic man who is not swayed by influences like the
“recency factor”, prejudices of any kind, the “one incident factor”, poor judgment, personal stake,
etc. He also keeps meticulous records of project execution and is an expert at data analysis.
While such a customer may be rare, we have to accept that his rating truly reflects the vendor’s
performance.

Second scenario – the customer is a normal person. His rating is swayed by some of the
influences mentioned in the first scenario. Let us assume that he rated the performance low. If
this biased low rating were accepted, the personnel involved in the project execution would be
rated low in the organization as a result. They would receive lower hikes and bonuses, if any. That
de-motivates them – perhaps they did a fairly good job that merits a better rating.

Third scenario – the customer is a normal person. His rating is swayed by some of the influences
mentioned in the first scenario. Let us assume that he rated the performance high. As a result, the
personnel involved in that project execution would receive better hikes and higher bonuses, if
any. This further de-motivates the personnel mentioned in the second scenario.

Scenarios two and three give rise to the phenomenon of rewards not based on performance, or
what may be termed “rewarding the under-performing and punishing the better-performing”. This
is disastrous for the organization.

The other impact – and this is even more serious – is that the organization does not have a
realistic picture of how satisfied its customers really are. In such a situation, improvement efforts
focused on customer satisfaction would, in all likelihood, be set on the wrong path.

I have been using the following method for computing a customer satisfaction metric based on
internal data in all the organizations that I consult for. I developed it by reverse-engineering the
Vendor Rating metric that manufacturing organizations use for rating their suppliers. It is based on
five parameters that I consider critical to customer satisfaction.

Aspects critical to customer satisfaction

I discuss here those “tangible” aspects that lend themselves to objective measurement. I
consider the following five tangible aspects to be critical to customer satisfaction.

Quality comes first. The dictum that “customers forget the delays but not the quality” aptly
states the value of quality. I would go further and add that customers forget everything else if,
and only if, the quality delivered is superb.

Second comes delivery “on time”. Nothing irritates a customer more than not receiving a
delivery on the promised date – plans at their end have to be redrawn, resource allocations have
to be shifted and all subsequent actions have to be re-scheduled, causing a lot of
inconvenience.

Third is the money that needs to be paid by the customer. It is not uncommon for escalation
clauses to be built into contracts. When the vendor chooses to apply an escalation clause and bill
more, it causes the customer a lot of inconvenience to obtain sanctions and approvals for the extra
payout, necessitating a lot of explanations and the answering of quite a few questions. Price
escalations therefore cause irritation to customers.

Fourth is the “issue” factor – most projects have an “issue resolution mechanism”. Some
vendors – in their eagerness to interpret the specs “always accurately”, and fearing that they
may indeed misinterpret them – raise more issues than necessary. When issues are raised on valid
grounds, the customer is more than happy to resolve them; but when the issues are trivial and, in
the customer’s opinion, are “not” issues at all, they cause irritation.

Fifth is the accommodation and cooperation that the vendor offers the customer. Few projects are
completed without the customer raising change requests. When the customer raises a change
request, he would be happy if the vendor implements it without postponing the delivery and
without increasing the price.

Quality Rating

It may not be far-fetched to say that every project is delivered with defects. Most of the time,
defects may not be detectable immediately upon delivery, but they will certainly be unearthed. If
defects are detected and resolved during the warranty period, the customer is happy! Customers
know this and vendors know this. The important question is whether the defects are within an
acceptable range. The customer expectation is “zero” defects, but all quality professionals know
that “zero defects” is a goal and not a reality. The real-life scenario is that we have to live with
some defects. Sometimes customers specify the acceptable defect density; at other times it is
implicit – they select vendors based on their certifications or market reputation, that is, with an
expectation about the delivered defects. Reputation, perhaps, does not lend itself to measurement.
Using the six-sigma philosophy, however, we can measure and specify the expected defects based
on the “sigma level” of the vendor organization.

If an organization is at the 6-sigma level, the expected defects from that organization are 3 for
every million opportunities. At the 5-sigma level, the expectation is 3 defects for every hundred
thousand opportunities; at the 4-sigma level, 3 defects for every ten thousand opportunities; and
at the 3-sigma level, 3 defects for every thousand opportunities.
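
As an illustration, here is a minimal Python sketch that converts a vendor's sigma level into an
accepted defect density, under the simplifying assumption that every line of code counts as one
opportunity; the figures follow the levels listed above, and the names used are purely illustrative.

# Expected defects per opportunity at each sigma level, as listed above
EXPECTED_DEFECTS_PER_OPPORTUNITY = {
    6: 3 / 1_000_000,  # 3 defects per million opportunities
    5: 3 / 100_000,    # 3 defects per hundred thousand opportunities
    4: 3 / 10_000,     # 3 defects per ten thousand opportunities
    3: 3 / 1_000,      # 3 defects per thousand opportunities
}

def accepted_defect_density_per_kloc(sigma_level):
    """Accepted defects per KLOC, assuming one opportunity per line of code."""
    return EXPECTED_DEFECTS_PER_OPPORTUNITY[sigma_level] * 1000

print(accepted_defect_density_per_kloc(4))  # 0.3 defects per KLOC for a 4-sigma organization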

The expected number of delivered defects is to be contrasted against the actual number of
defects delivered.

Another important question to be addressed is, “When do we start counting the delivered
defects?” The corollary question is, “Do the defects unearthed during Acceptance Testing count
as delivered defects?” The answer is “Yes” – they are unearthed by the customer – as are the
defects unearthed in pilot runs, during live/production runs, and during the warranty period and
after.

Generally, defects are classified into three categories, namely critical, major and minor. I use
only the critical and major defects. Minor defects can sometimes be a difference in perception –
the customer may perceive something as a defect which the vendor may not consider a defect. I
classify all spelling mistakes as major defects – sometimes they distort the meaning, and they
always cause irritation.

Defect density is computed as defects per unit size or, conversely, as units of product per defect.
Size is usually measured in LOC (Lines of Code), FP (Function Points) or any other size measure
used in the organization. The important thing is to select one measure and use it in all
measurements.

Now, the formula for computing QR (Quality Rating) for customer satisfaction -

QR = (Actual Defect Density - Accepted Defect Density)/Accepted Defect Density


If the actual defect density is less than accepted defect density, then this metric would be
negative – that means, customer expectations are exceeded.

If the actual defect density is the same as the accepted defect density, then this metric would be
zero – that means, customer expectations are fully met.

If the actual defect density is more than accepted defect density, then this metric would be
positive – that means, customer expectations are not fully met.
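
Here is a minimal Python sketch of the QR computation, assuming KLOC as the size unit and
counting only critical and major defects, as discussed above; the function name and the sample
figures are illustrative.

def quality_rating(actual_defects, size_kloc, accepted_defect_density):
    """QR = (actual defect density - accepted defect density) / accepted defect density.

    actual_defects: critical + major defects unearthed by the customer
    size_kloc: delivered software size in thousands of lines of code
    accepted_defect_density: agreed (or sigma-derived) defects per KLOC
    """
    actual_defect_density = actual_defects / size_kloc
    return (actual_defect_density - accepted_defect_density) / accepted_defect_density

# Example: 8 defects in a 20 KLOC delivery against an accepted 0.5 defects/KLOC
print(quality_rating(8, 20.0, 0.5))  # actual density 0.4; QR = (0.4 - 0.5) / 0.5 = -0.2, expectations exceeded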

Delivery Schedule Rating

Nothing is more frustrating than not receiving the delivery on the accepted date. The frustration
may be less, but it is still there, when somebody calls you and tells you that the delivery is going to
be delayed.

The funny part is that even if the delay is the result of a change that the customer requested, it is
still frustrating. The reaction is something like, “Can’t they accommodate this teeny-weeny change
without postponing the delivery date? Vendors always look for any opportunity to delay the
delivery” – right?

Often, vendors compromise on quality rather than delay delivery. The philosophy is, “It will
take some time for the customer to unearth the defect, but it doesn’t take any time for the
customer to come down heavily if it is not delivered on time.” Excuses like “Sorry for the defect –
here is the corrected version. In our fervent quest to be on time, this defect crept in” are easy to
deliver and convincing.

This is perhaps operational expedience – but customers forget delayed deliveries while they
seldom forget poor quality. When references are sought, they are normally about quality rather
than adherence to delivery dates. That is the reason I place delivery second in importance for
customer satisfaction.

To compute this metric, we contrast the accepted delivery with the actual delivery.

The first dilemma is: what is the accepted delivery date? Is it the one on the purchase order, or
the latest one accepted against the change impact report for a change request received from the
customer?

Take your pick. If you wish to show a good rating, take the latest accepted delivery date. If you
wish to derive the real customer satisfaction rating, take the one on the purchase order.
Or perhaps – as some organizations do – take both: one for internal purposes and one for
external purposes!

Now, the formula for computing DR (Delivery Rating) for customer satisfaction -

DR = (Actual Days Taken for the Delivery - Accepted Days for Delivery)/Accepted Days for
Delivery

Use the number of calendar days from the date of the purchase order to the delivery date
specified in the purchase order for Accepted Days for Delivery.

Use the number of calendar days from the date of the purchase order to the date on which
delivery was actually effected for Actual Days Taken for the Delivery.

If the actual delivery were made before the accepted delivery date, then this metric would be
negative – that means, customer expectations are exceeded.

If the actual delivery was made on the accepted delivery date, then this metric would be zero –
that means, customer expectations are fully met.

If the actual delivery was made later than accepted delivery date, then this metric would be
positive – that means, customer expectations are not fully met.
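
A minimal Python sketch of the DR computation follows, using the purchase-order-based
interpretation described above and Python's standard date type; the dates and the function name
are illustrative.

from datetime import date

def delivery_rating(po_date, accepted_delivery_date, actual_delivery_date):
    """DR = (actual days taken - accepted days) / accepted days, in calendar days from the purchase order date."""
    accepted_days = (accepted_delivery_date - po_date).days
    actual_days = (actual_delivery_date - po_date).days
    return (actual_days - accepted_days) / accepted_days

# Example: 90 days accepted, delivery took 99 days -> DR = +0.1, expectations not fully met
print(delivery_rating(date(2024, 1, 1), date(2024, 3, 31), date(2024, 4, 9)))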

Price Rating

No vendor can bill the customer for an amount that was not agreed to by the customer – that is, if
the vendor expects his invoice to be honored in full and without any issues. If so, why is this
an important factor?

Sometimes contracts are drawn up using an hourly rate with an expected amount and some
variance allowed on either side. In such cases, the final billed amount could be either lower or
higher than the specified amount.

When a price escalation clause is invoked, or an additional payment is requested against a
change, negotiations take place before the escalation is accepted – and the amount accepted
might not be the same as the amount requested. The fact that extra money was asked for, and the
resulting negotiations, would certainly cause some frustration in the customer.

Whenever the customer has to pay an amount higher than the purchase order value, the
customer is dissatisfied.

It also certainly pleases the customer if the vendor asks for less money than the value specified in
the purchase order!

To compute this rating, we use the price agreed on the original purchase order and the final billed
amount.

Now, the formula for computing PR (Price Rating) for customer satisfaction -
PR = (Actual amount billed - Price on the purchase order)/Price on the purchase order

The amounts are before taxes, if any.

If the actual amount billed was less than the purchase order price, then this metric would be
negative – that means, customer expectations are exceeded.

If the actual amount billed was equal to the purchase order price, then this metric would be zero –
that means, customer expectations are fully met.

If the actual amount billed was more than the purchase order price, then this metric would be
positive – that means, customer expectations are not fully met.
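
A minimal Python sketch of the PR computation, with pre-tax amounts as stated above; the
function name and figures are illustrative.

def price_rating(actual_amount_billed, purchase_order_price):
    """PR = (actual amount billed - purchase order price) / purchase order price, both before taxes."""
    return (actual_amount_billed - purchase_order_price) / purchase_order_price

# Example: 105,000 billed against a 100,000 purchase order -> PR = +0.05, expectations not fully met
print(price_rating(105_000, 100_000))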

Issue Rating

Issues crop up during project execution mainly due to a lack of clarity in the specifications or a
lack of proper understanding of the specs. Issues may also crop up for other reasons, such as a
conflict or error in the requirements.

When the vendor raises an issue whose origin is attributable to the customer, the customer may
have no concern. However, customer satisfaction is affected when issues are raised due to
improper understanding of the requirements.

Customers expect that any shortfall in the exhaustive specification of requirements would be
bridged by the vendor using the vendor’s expertise and past experience. That is why issues cause
dissatisfaction in customers.

The vendor has a reason to raise issues – if customer concurrence is not obtained on time, the
vendor may have to rework late in the project, leading to delayed deliveries. That is why issue
resolution mechanisms are built into software development contracts.

To compute the Issue Rating, we use Issue Density. While we can easily compute the actual Issue
Density, there is no accepted norm for an acceptable Issue Density. We again use software size
for computing Issue Density. While issues relate directly to requirements, we cannot use the
number of requirements, because the method of defining requirements can vary that number
significantly.

Thus Issue Density (ID) is computed using the formula –

ID = Number of Issues Raised / Software Size

Software size can be expressed in any size measure, such as LOC or FP.

As there is no universally accepted Issue Density, we suggest that an organizational standard be
defined and continuously improved.

Now, the formula for computing IR (Issue Rating) for customer satisfaction -

IR = (Actual Issue Density - Standard Issue Density)/ Standard Issue Density

If the Actual Issue Density was less than the Standard Issue Density, then this metric would be
negative – that means, customer expectations are exceeded.

If the Actual Issue Density was the same as the Standard Issue Density, then this metric would be
zero – that means, customer expectations are fully met.

If the Actual Issue Density was more than the Standard Issue Density, then this metric would be
positive – that means, customer expectations are not fully met.
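
A minimal Python sketch of the IR computation follows; treating KLOC as the size unit and the
organizational standard issue density as an input are illustrative assumptions.

def issue_rating(issues_raised, size_kloc, standard_issue_density):
    """IR = (actual issue density - standard issue density) / standard issue density."""
    actual_issue_density = issues_raised / size_kloc
    return (actual_issue_density - standard_issue_density) / standard_issue_density

# Example: 6 issues on a 20 KLOC project against a standard of 0.25 issues/KLOC
print(issue_rating(6, 20.0, 0.25))  # actual density 0.3; IR = (0.3 - 0.25) / 0.25 = +0.2, expectations not fully met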

Cooperation Rating

Most projects would not be complete without a few Change Requests from the customer.
Software maintenance projects run on Change Requests (these may be called by various names
– Maintenance Work Order, Program Change Request, Project and so on). It is also common for
all Change Requests to be implemented before delivery. How, then, do Change Requests give
rise to customer dissatisfaction?

A Change Request causes additional work for the vendor, and its impact is felt in two respects –
one on the delivery schedule and the other on cost. In some cases the vendor absorbs both. In
some cases, the vendor absorbs the impact on price and passes the impact on the delivery
schedule to the customer. In other cases, the vendor absorbs the impact on the delivery schedule
and passes the impact on price to the customer. In the remaining cases the changes are rejected.

The customer is happy when Change Requests are accepted without impacting either price or
delivery schedule. But it is not possible for the vendor to do so all the time. That is why we
compute the Cooperation Rating.

Now, the formula for computing CR (Cooperation Rating) for customer satisfaction -

CR=(No of change requests received - No of change requests implemented without
affecting delivery date or price)/No of change requests received


If the number of Change Requests received were the same as the number of Change Requests
implemented without affecting either delivery schedule or price, then this metric would be zero –
that means, customer expectations are fully met.

If the number of Change Requests received were greater than the number of Change Requests
implemented without affecting either delivery schedule or price, then this metric would be
positive – that means, customer expectations are not fully met.

There is no way to exceed customer expectations in this rating!
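
A minimal Python sketch of the CR computation; the function name and counts are illustrative.

def cooperation_rating(change_requests_received, change_requests_absorbed):
    """CR = (CRs received - CRs implemented without affecting delivery date or price) / CRs received."""
    return (change_requests_received - change_requests_absorbed) / change_requests_received

# Example: 10 change requests received, 7 implemented without affecting delivery date or price -> CR = 0.3
print(cooperation_rating(10, 7))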

Composite Customer Satisfaction Rating

Having now computed all five ratings critical to achieving customer satisfaction, we are ready to
compute the CCSR (Composite Customer Satisfaction Rating).

Obviously, the five ratings do not all have the same importance in achieving customer satisfaction.
Their importance can also vary from organization to organization and from customer to customer.
Some customers may perceive quality to be of utmost importance; some may perceive the delivery
schedule to be of paramount importance; some may perceive price to be of the highest importance.
Therefore, it is necessary to assign weights to each of the above five ratings to arrive at a
reasonable CCSR.

The sum of all the weights must be equal to 1 to get a meaningful CCSR.

I use the following weights:

Sl. No    Rating                          Weight
1         QR – Quality Rating             W1 = 0.35
2         DR – Delivery Rating            W2 = 0.30
3         PR – Price Rating               W3 = 0.25
4         IR – Issue Rating               W4 = 0.05
5         CR – Cooperation Rating         W5 = 0.05
          Total Weight                    1.00

Now, the formula for computing CCSR (Composite Customer Satisfaction Rating) is –

CCSR = 5 - (QR*W1 + DR*W2 + PR*W3 + IR*W4 + CR*W5)

The above formula gives CCSR on a 5-point scale. Since each rating is negative when
expectations are exceeded, the weighted sum can be negative, and CCSR may therefore be
greater than 5 in some cases. When CCSR is greater than 5, it means that customer expectations
are exceeded.
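
To put the pieces together, here is a minimal Python sketch that combines the five ratings into the
CCSR using the weights from the table above; the sample rating values are made-up inputs for
illustration.

def composite_csr(qr, dr, pr, ir, cr, weights=(0.35, 0.30, 0.25, 0.05, 0.05)):
    """CCSR = 5 - (QR*W1 + DR*W2 + PR*W3 + IR*W4 + CR*W5), on a 5-point scale."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    w1, w2, w3, w4, w5 = weights
    return 5 - (qr * w1 + dr * w2 + pr * w3 + ir * w4 + cr * w5)

# Example: QR = -0.2, DR = 0.1, PR = 0.05, IR = 0.2, CR = 0.3
print(composite_csr(-0.2, 0.1, 0.05, 0.2, 0.3))  # 5 - (-0.0025) = 5.0025, expectations slightly exceeded overall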

Use of CCSR

I do not advocate doing away with Customer Satisfaction Surveys. Ultimately, what the customer
perceives is also important.

Consider these facts of life – one person fills out our Customer Satisfaction Survey, but many
people in the customer organization use our product. We may manage that one person’s
expectations and get a good rating, but the other users of our product (some of whom could be
decision-influencers) will certainly unearth its defects. The person who filled out the customer
satisfaction survey may also leave the organization, or his role may change.

Therefore, we cannot rely on a perception-based rating alone. Contrasting the Customer
Satisfaction Survey rating with the CCSR allows us to learn lessons and improve our processes.

Suppose the internal CCSR agrees with the Customer Satisfaction Survey rating: customer
perception is in sync with reality. We are managing customer expectations as they should be
managed, and our strengths in service and in expectation-management are equal. This gives a
realistic picture to management. In this case, we take corrective action based on the rating itself.

Suppose the CCSR is well below the Customer Satisfaction Survey rating: the customer’s
perception of our service is better than reality. This is not muscle – it is fat. If we continue to pat
ourselves on the back because customer perception of the service is high, we are holding an
inflated view of our capability. Personnel may become complacent about rendering truly effective
service and continue to place more emphasis on expectation-management than on service. In
this case, we need to train our personnel to improve the service.

Suppose the CCSR is well above the Customer Satisfaction Survey rating: the customer’s
perception of our service is poorer than reality. This shows that we are concentrating on service
without any concern for expectation-management; interpersonal relations and communication
with the customer are being neglected. In this case, we need to sensitize our personnel to
expectation-management.

There is scope in this method for organization-specific adaptation. Some of the above ratings may
be dropped or substituted, or new ones added, to suit the specific organization.

*********************************************************************
Your feedback is gratefully solicited – please email your feedback to the author
murali@chemuturi.com
*********************************************************************
About the Author:

Murali Chemuturi is a Fellow of the Indian Institution of Industrial Engineering and a Senior
Member of the Computer Society of India. He is a veteran of the software development industry
and presently leads Chemuturi Consultants, which provides consultancy in software process
quality and training. He can be reached at murali@chemuturi.com.

				