1. To examine briefly the ways in which client satisfaction is conceptualized and measured
2. To examine the ways in which client satisfaction/client perspectives may be incorporated
into a continuous quality assurance process.
3. To examine briefly processes and methodological issues related to measuring client satisfaction.
How is client satisfaction conceptualized and assessed?
Client satisfaction appears to be conceptualized in a variety of ways. This may be related to the
fact that ‘client satisfaction’ is a term that subsumes a variety of processes and types of indices.
For example, a government of Alberta website notes that:
“Surveys to assess client satisfaction range from the very general ‘how do you like us
approach’ to ‘do you know about us’ to ‘how well did we do last time’ to specific ‘what could
be done to improve the mediation experience - mark 1 - 7 that apply’. The nature, design, focus
and delivery method of the survey varies with its sponsor and its purpose”.
Schmidt and Strickland (1998) define satisfaction in terms of the ability to meet clients’ service
expectations; others define satisfaction in terms of clients’ perceptions that their needs have been
met (e.g. perceived effectiveness of service in addressing presenting problems); other studies
include elements that measure whether the criteria required for ‘good service’ have been met
(e.g. client was provided with information about complaints/appeals process or calls returned in
timely fashion). Some are multi-dimensional (e.g. “Client satisfaction surveys assess whether the
client received the service expected, in a timely fashion, and whether they found it helpful”).
Assessing Client Satisfaction
level of satisfaction may be measured by the extent to which clients express positive and
negative feelings about specific aspects of the services provided. The level of satisfaction
with a range of service features is typically assessed using an ordinal (5- or 7-point) or
nominal scale ranging from very dissatisfied to very satisfied
degree to which clients agree with statements about the nature of services provided. In
this case, the indices reflect client perspectives of the extent to which services meet the
standards for quality (often referred to as performance accountability) such as ‘I was
treated by staff with courtesy and respect…’. This may be accompanied by a question or
questions that directly request ratings of satisfaction
qualitative feedback in the form of individual interviews and/or focus groups
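The first approach above, ordinal ratings of specific service features, is typically summarized as a per-item response distribution plus the proportion of respondents choosing the top categories. A minimal sketch of that summary (the responses below are invented for illustration, not drawn from any study cited here):

```python
from collections import Counter

# Hypothetical responses to one 5-point item
# (1 = very dissatisfied ... 5 = very satisfied)
responses = [5, 4, 4, 2, 5, 3, 4, 1, 5, 4, 2, 4]

counts = Counter(responses)
n = len(responses)

# Per-category distribution
for level in range(1, 6):
    print(level, counts.get(level, 0))

# Proportion "satisfied" (4 or 5), a common summary statistic
pct_satisfied = 100 * sum(1 for r in responses if r >= 4) / n
print(round(pct_satisfied, 1))  # 66.7
```

The same tally generalizes to each feature on the questionnaire, which is all that item-level satisfaction reporting usually requires.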
Potential relevance of client satisfaction information:
The specific role that client satisfaction is assigned in quality assurance processes also varies by
site. Some frameworks conceptualize client satisfaction as one of several outcomes (i.e. positive
client experiences are ends in themselves). Others conceptualize client satisfaction as outputs.
Those who conceptualize these indices as outputs caution that level of satisfaction may influence
outcomes, but is not synonymous with satisfactory service outcomes.
As outputs, client differences in perceptions of and satisfaction with services are hypothesized to
mediate or be related to outcomes. Within consumer models, high levels of client satisfaction
reflect the client’s perception that the service has met his/her needs and expectations. These
positive experiences are expected to influence future behaviour (e.g. referring the service to
others or, if a similar need arises again and they have a choice, providing repeat
business). The practice of obtaining client perceptions of service has expanded to many
other sectors including health and mental health, victims services in the criminal justice system,
and child welfare (e.g. mediation services in child welfare legal context, foster care providers,
etc.). Within these contexts, the development of more client-centred and client-focused services
represents the shift to recognizing clients’ agency and the value of engaging clients as
participants in service rather than recipients of service. For example, level of satisfaction among
foster care providers is examined in a number of studies, because retention of foster care
providers may be mediated, at least in part, by their experiences with the system (e.g. see Rodger
et al., 2006). The experiences of children in care are perceived to be important for their healthy
physical, social and emotional development. The likelihood of re-unification and improved
family functioning may be enhanced if parents can be engaged in rather than compliant with case
planning processes. Some associations between client satisfaction and outcomes are presumed
on clinical bases, without assessing directly the extent to which satisfaction has an influence,
above and beyond other factors.
Obtaining client perspectives may also enrich and assist in the interpretation of administrative
and clinical outcome information. For example, a high rate of drop-out or non-compliance with a
particular program may be related to perceived lack of cultural sensitivity, language barriers,
perceived disregard or indifference of staff; if clients in a particular neighborhood are
increasingly unable to meet their goals, a client satisfaction survey might reveal that the closest
substance abuse treatment center doesn’t provide day care. Higher rates of family reunification in
one neighbourhood may be related to accessible family oriented outpatient services; maintaining
regular family contact may be related to flexible scheduling or to access to transportation. In
these examples, client perspectives provide one source of information about gaps in services and
whether services are working as expected. Service areas that are less positively endorsed may
provide the basis for setting priorities within continuous improvement frameworks. Thus,
although referred to as indices of client satisfaction, dissatisfaction typically drives service improvement.
To be useful, indices have to reflect clients’ actual needs and specifically the needs that are the
targets of the service. “Demonstrating that consumers believe that a program is doing an
excellent job on an activity that they consider irrelevant is not useful to anyone” (Young et al.,
1995; cited in Harris and Poertner, 1998, p. 30).
In a guide designed to assist managers of public services across Canada, Faye Schmidt & Teresa
Strickland (1998) outline several potential benefits of client satisfaction assessments:
Identify opportunities for service improvements
Identify what clients want as opposed to what organizations think they want
Allocate resources more effectively to meet client priorities by targeting high service
priorities and reducing or eliminating services that clients do not value
Develop proactive responses to emerging client demands, reducing crises and stress for
staff and clients
Provide feedback to front-line staff, management and political leaders about program performance
Evaluate the achievement of the organization’s mandate and even substantiate
amendments to the mandate
Strengthen the strategic planning process
Evaluate the effectiveness of new program strategies (for example, assess success of
newly implemented technologies from the clients’ perspective)
Validate requests for increased resources to areas in need of improvement.
They also indicate that client satisfaction surveys need to ask questions about the following five areas:
1. Client expectations (they suggest using rating scales and/or open-ended questions to find
out, for individual service features, what level of service would be considered “very
satisfactory”. They argue that if a gap between actual and expected service is evident, the
dissatisfaction that emerges from this gap can be managed by better informing clients about
the standards and scope of services)
2. Perceptions of service experience (i.e. timeliness, accessibility, level of respect/courtesy, etc)
3. Level of importance (i.e. how important the measured service features are to clients)
4. Level of satisfaction (overall and on specific service features)
5. Priorities for improvement (they argue that information about satisfaction and relative
importance can help the organization make decisions about resource allocation in order to
refine services in ways that are important to the client)
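Schmidt and Strickland's pairing of importance ratings (area 3) with satisfaction ratings (area 4) lends itself to a simple gap analysis: features clients rate as highly important but poorly satisfying become candidate priorities for improvement (area 5). A hedged sketch of that logic, with hypothetical feature names, scores, and cut-offs:

```python
# Hypothetical mean ratings on a 1-5 scale: (importance, satisfaction)
features = {
    "timeliness of response": (4.6, 2.9),
    "courtesy of staff": (4.2, 4.4),
    "clarity of information": (3.9, 3.1),
    "office location": (2.5, 2.2),
}

# Assumed thresholds, chosen for illustration only
IMPORTANCE_CUTOFF = 4.0
SATISFACTION_CUTOFF = 3.5

# High-importance, low-satisfaction features are improvement priorities
priorities = [
    name for name, (imp, sat) in features.items()
    if imp >= IMPORTANCE_CUTOFF and sat < SATISFACTION_CUTOFF
]
print(priorities)  # ['timeliness of response']
```

Note that a low-satisfaction, low-importance feature ("office location" here) is deliberately not flagged, which is the point of Young et al.'s caution quoted earlier.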
In contrast, the Alberta Labour Relations Board uses client satisfaction within the context of
performance accountability. Their description of the purpose of client satisfaction assessments
is pasted below (see http://www.alrb.gov.ab.ca/feasibilityreport.html).
Although client satisfaction surveys can be used as information gathering initiatives for
general knowledge, this is an expensive and time-consuming way to obtain "feel good"
information.
In a nutshell, client satisfaction surveys are better used:
as instruments of accountability (for public bodies it is seen to be one method of keeping
in touch with the community which the body serves and of reporting on the body's ability
to meet its established standards for service by "assessing the effectiveness, efficiency
and quality of service against stated objectives" [Ont]),
to provide specific information for use internally in setting performance targets
(standards for service); and
(note: an index of perceived effectiveness from the client perspective)
to provide specific information for use in developing action plans to improve
performance (best practices)
to provide specific information for use in the allocation of resources.
Optimally, client satisfaction surveys should be:
directly related to performance goals, best practices or operational aspects of the tribunal
clear and concise
seek specific versus "feel good" information
re-usable on a regular basis with the expectation of receiving comparable information.
Recommendations such as these may be helpful in many social service contexts but among
“compelled users” such as in child welfare, there are a variety of other issues to consider:
When there are multiple ‘clients’, whose ‘wants’ should be primary?
Are ratings of satisfaction with process independent of the outcome of those processes?
In the face of satisfactory outcomes (e.g. child safe and doing well because removed from
home), how much should dissatisfaction of one party influence changes to service delivery?
Service features that generate the most dissatisfaction may be non-negotiable (e.g.
practices highly regulated by legislation or standards)
Process involved in assessing client satisfaction:
When integrated as part of a continuous improvement quality assurance framework, client
perspectives may be incorporated into strategic planning activities and provide one basis for
identifying service areas of potential modification/refinement. This requires a series of activities
and resources to:
define objectives (purpose for collecting and how information will be used; e.g. program
specific evaluations, sub-population specific evaluations, agency wide focused
assessment of particular service elements, etc)
develop a methodology (sample selection and data-gathering techniques such as survey or
focus group)
design or select an instrument or interview guide. This may involve employing processes
to obtain stakeholder feedback/input (e.g. focus groups) and/or pilot testing an instrument
if the choice is to create a new one
analyze and summarize results
disseminate and discuss findings
identify service improvement priorities
develop a service improvement plan
implement the plan
Some organizations administer a generic/standardized instrument to all clients, with comparisons
across sub-groups of service recipients (e.g. Huebner et al., 2006; Florida uses the CSS, a
standardized tool, statewide and analyzes trends by target group). Some standardized measures
of client satisfaction are available (e.g. CSS). Other agencies describe program or population
specific evaluations (e.g. parent with children in care; case management services; foster care
providers, children in care, users of mental health or substance abuse services, etc).
The use of generic questionnaires may not provide sufficient detail about satisfaction with
specific program elements in a way that informs program planning. To deal with this issue, some
evaluations of client satisfaction include methods to assess both the general elements of service
and program specific components. For example, in Kentucky, a general customer satisfaction
survey for child welfare clients, community partners, and foster parents is conducted every two
years. They report that most survey results were fairly stable over the two years and provided
little specific information about how to improve practice (i.e. specific needs of client groups and
the effects of specific practices). Kentucky augmented the state level survey with more in-depth
but flexible and targeted assessments of specific sub-populations, soliciting specific information
on aspects of service delivery such as perceived effectiveness, service needs, and areas for
improvement. Five specific surveys were conducted in the 2004-2005 state fiscal year to assess the
needs or effects of:
Services and partnerships with fathers;
Engaging youth in decisions about their foster care and transition to adulthood;
Expanding the service array to meet the needs of children and families;
Improving the relationship with courts to speed permanency for children; and
Family Team Meetings (FTMs) as a method to reduce child abuse, prevent removal from
the home or disrupted placements, establish interventions to improve reunification and
adoption, and coordinate service delivery.
The information collected in the targeted assessments was used to identify and implement
service improvements.
Similarly in Florida, in addition to the standard set of questions administered to other groups of
clients, a program specific survey about adoption services and supports was included for clients
with finalized adoptions from child welfare and who were receiving adoption subsidies. In
Washington, information from six focus groups with children aged 11 to 17 who were in foster
care was used to make revisions to the department’s administrative rules that were aimed at
providing “normalcy” for children in foster care.
Analysis of data by sub-populations appears to be important in understanding variability within
client populations. For example, in Kentucky, evaluation of the Comprehensive Family Services
Initiative indicated that effects of the shift in practice were less pronounced among parents in the
protection and permanency group, compared to the foster care and pre-adoptive groups, community
partners, and clients receiving family support or child support services. In the Kansas evaluation of
children placed in care, client satisfaction varied for the parents and children involved. Seventy
per cent of the children older than 14 were satisfied with the services received, compared to 47%
of their parents. Although there are differences in the services provided to children and parents,
it is also possible that evaluations of service are coloured by the outcomes resulting from those services.
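Sub-population differences like the Kansas parent/child split are straightforward to surface once responses are coded by subgroup. A minimal sketch (the records and group labels are invented; only the grouping arithmetic is the point):

```python
from collections import defaultdict

# Hypothetical coded records: (subgroup, satisfied with services?)
records = [
    ("youth_14_plus", True), ("youth_14_plus", True),
    ("youth_14_plus", False), ("parent", True),
    ("parent", False), ("parent", False),
]

totals = defaultdict(int)
satisfied = defaultdict(int)
for group, is_satisfied in records:
    totals[group] += 1
    if is_satisfied:
        satisfied[group] += 1

# Percent satisfied per subgroup
rates = {g: round(100 * satisfied[g] / totals[g]) for g in totals}
print(rates)  # {'youth_14_plus': 67, 'parent': 33}
```

Reporting rates per subgroup rather than one pooled figure is what reveals the kind of divergence described above.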
Guidelines for Client Satisfaction Measurement Activity. An ESS Process for Measuring
Client Satisfaction (sections pasted from http://ess.nrcan.gc.ca/intl/cs/ch4_e.php)
Choose an approach for obtaining client feedback
Several approaches can be used to measure client satisfaction, including: client surveys (mail,
telephone, electronic), client consultations (focus group sessions, panel discussions, personal
interviews), and observations. Any one or a combination of these approaches may be appropriate
depending on your objectives and, of course, your constraints. Each approach has advantages and disadvantages.
Client Surveys - are usually undertaken when you wish primarily to obtain statistical
(quantitative) data regarding satisfaction among your client population. Normally, feedback is
sought by telephone, mail or electronically from a representative sample of the target client
population. In a probability sample, findings would be projectable, within a given margin of
error, to the entire population of clients. Thus, it is possible to draw rather definitive conclusions
regarding the views of all clients in the target population towards a product or service.
Advantages: statistical reliability; data can be projected; allows for trend monitoring.
Disadvantages: limited ability to explore issues; relatively costly; no face-to-face contact.
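The claim above that probability-sample findings are projectable "within a given margin of error" can be made concrete: for a proportion p estimated from a simple random sample of size n, the 95% margin of error is approximately 1.96·sqrt(p(1−p)/n). A sketch (the sample size and proportion are illustrative):

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion estimated from a
    simple random sample (normal approximation; z=1.96 gives ~95%)."""
    return z * sqrt(p * (1 - p) / n)

# e.g. 60% satisfied among 400 sampled clients
moe = margin_of_error(0.60, 400)
print(round(100 * moe, 1))  # roughly +/- 4.8 percentage points
```

This is the standard back-of-the-envelope planning calculation; it ignores finite-population corrections and non-response bias, both of which matter for the small, hard-to-reach populations typical of child welfare surveys.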
Response rates to surveys (particularly mail-outs) will be affected by:
the format of the questionnaire (e.g. brevity, easily understood)
the timing of the survey
the credibility attached to the survey
the nature of the questions asked (if topical and interesting to client)
the link to the benefits for the client
assurances of confidentiality
Client Consultations - are undertaken when your goal is to obtain qualitative feedback from
selected clients. Qualitative research is explorative and is often used to give managers a better
understanding of an issue prior to conducting more costly, in-depth research. Qualitative
feedback cannot be taken to represent the views of clients in general; findings from client
consultations should be considered indicative rather than definitive. Client consultations can be
formal or informal and can be extensive or limited in scope. Three common types of
consultations are: focus groups, panels, and personal interviews. Formal focus group sessions
usually involve establishing screening criteria to recruit panelists that are representative of the
target audience, a moderator's guide of discussion themes, the use of a moderator to guide the
discussion and manage the group dynamics, and a report documenting the results.
A focus group is much like a group interview and, as with personal interviews, a lot of valuable
information can be documented (even from verbatim comments) in a relatively short period of
time. Panel discussions are also similar to focus groups but differ in the degree of formality used.
Many panels are formed for advisory type purposes, hence panelists are often chosen for their
expertise, knowledge or experience with a particular issue or subject. Panel discussions do not
necessarily require formal recruitment screening criteria, special facilities or professional
moderating. However, if an objective of the discussion is to obtain formal feedback pertaining to
client satisfaction, you should approach the discussion systematically. Personal interviews can be
conducted by telephone or face-to-face to allow for in-depth probing. Provided confidentiality is
assured, they are often ideal for obtaining feedback on sensitive topics or issues, especially when
conducted one-on-one. You should develop and use an interview guide when conducting
personal interviews. This will help keep the interview on track, provide consistency across
interviews, and serve as an aid in assessing findings.
Advantages: can explore in detail how clients view an issue or concept; can adapt instantly to
client responses; can access hard-to-reach audiences.
Disadvantages: cannot project findings to population; does not yield statistical measures;
results are not conclusive.
Other methodological issues:
Additional efforts may be required to attain adequate response rates if mail surveys are
used (e.g. a letter introducing the purpose mailed prior to the survey mail-out, and follow-up
letters if surveys are not returned by a certain time). In Kansas, the validity of client satisfaction
assessments for parents, children and foster care providers was compromised because of
the low rates of return for mailed surveys. Telephone surveys are being considered to
improve data collection.
Consider potential bias imposed by sample selection criteria. For example, the use of
closed cases excludes the perspective of families receiving long-term services (although
these files will presumably be closed at some point; some of the cases closing this fiscal
year should include cases open for months as well as several years). If open cases are
used, clients may be concerned about the impact of their responses on service decisions,
even if confidentiality is assured. Social desirability influences may be greater if
evaluations are conducted by the agency rather than an external resource that is
independent of the agency.
Some sites refer to assessing the extent to which client and worker perceptions are
aligned regarding the relative importance of service elements as a potentially useful
strategy in staff education and service refinement.
A study by Perreault et al. (1993; cited in Harris and Poertner, 1998) indicated that both
satisfaction and dissatisfaction may be expressed by the same individual, depending upon
the wording of the question. Factors such as wording and the order of questions may
produce response bias and are important considerations for validity and reliability.
The extent to which the ‘client satisfaction’ evaluations are informative at the program planning
level depends upon the nature and specificity of the questions and the extent to which the data
can be analyzed in a way that guides program improvement. A two-tiered approach has been
used by many states, with a broad survey followed by the solicitation of more program-specific
information through targeted evaluations. For example, the three year review of Batshaw
provides information about the satisfaction of a random selection of clients, using a standardized
tool and method. Average ratings and standard deviations for each item and program area2, may
then be used as a basis for more targeted assessments (e.g. to further explore what accounts for
lower ratings for some groups or what accounts for the variability on an item within groups).
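The item-level means and standard deviations described above can be computed directly from coded responses, with low means or high spread flagging candidates for targeted follow-up. A sketch using the standard library (program areas, items, and scores are hypothetical):

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical ratings keyed by (program_area, item)
ratings = defaultdict(list)
for area, item, score in [
    ("foster care", "timeliness", 4), ("foster care", "timeliness", 2),
    ("foster care", "respect", 5), ("foster care", "respect", 4),
    ("in-home", "timeliness", 3), ("in-home", "timeliness", 3),
]:
    ratings[(area, item)].append(score)

# Mean and standard deviation per item within each program area
for key, scores in sorted(ratings.items()):
    sd = stdev(scores) if len(scores) > 1 else 0.0
    print(key, round(mean(scores), 2), round(sd, 2))
```

A large standard deviation on an item within one group is exactly the "variability within groups" that the text suggests exploring with a targeted follow-up assessment.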
Follow-up evaluations require considerable thought and planning regarding:
the specific objectives in collecting the information
whether the desire is to ascertain the scope of a problem or gain more in-depth
understanding of client experiences with a particular service
what components are most important to measure
what elements of service delivery can be modified within the service context
Within a continuous improvement framework (e.g. 3 year cycle), evaluations could be part of an
ongoing process that begins with the provincial evaluation as one basis for identifying areas to
target for further assessment.
² I have not seen the report, but I believe analyses were conducted by program area or client populations.
Baker, L., Zucker, P.J., & Gross, M.J. (1998). Using client satisfaction surveys to evaluate and
improve services in locked and unlocked adult inpatient facilities. The Journal of Behavioral
Health Services & Research, 25: 51-63.
Bjorkman, T., & Hansson, L. (2001). Client satisfaction with case management: a study of 10
pilot services in Sweden. Journal of Mental Health, 10 (2): 163-174.
Chapman, M.V., Gibbons, C.B., Barth, R.P., & McCrae, J.S. (2003). Parental Views of In-Home
Services: What Predicts Satisfaction with Child Welfare Workers? Child Welfare, 82 (5):571-96.
Chue, P., Tibbo, P., Wright, E., & Van Ens, J. (2004) Client and Community Services
Satisfaction With an Assertive Community Treatment Subprogram for Inner-City Clients in
Edmonton, Alberta, Can J Psychiatry, 49:621–624.
Florida Department of Children and Families. Mission Support and Performance Team (2001).
Client Satisfaction Survey Report. Available at:
http://www.dcf.state.fl.us/publications/pubs.shtml (select March 2000 Client Satisfaction Survey).
Harris, G., & Poertner, J. (1998). Measurement of Client Satisfaction: The State of the Art.
Urbana, IL:Children and Family Research Center.
Huebner , R.A. (2006) Customer Satisfaction Initiative in Kentucky. Department for Community
Based Services (DCBS). http://chfs.ky.gov/NR/rdonlyres/0DCCB168-9D5E-4BD3-BC08-
Huebner, R.A., Jones, B.L., Miller, V.P., Custer, M., & Critchfield, B. (2006). Comprehensive
Family Services and Customer Satisfaction Outcomes. Child Welfare, 85 (4): 691-714.
HDRC (2001). An Integrated Approach to Conducting Client Satisfaction Surveys: Analysis of
requirements and proposal for a client satisfaction measurement program. Prepared for
Evaluation Services Information and Strategic Planning Directorate Quebec Regional Office,
Human Resources Development Canada, March 9, 2001
Kapp, S.A. and Propp, J. (2002). Client Satisfaction Methods: Input from Parents with Children
in Foster Care. Child and Adolescent Social Work Journal, Volume 19 (3): 227-245.
Note on search strategy: Google search "child welfare" and "client satisfaction" – up to end of pg 18
of search results; additional searches using “client satisfaction” and survey, and “client satisfaction”
and process, yielded few additional relevant reports. Academic literature: Social Work Abstracts
“client satisfaction”; Social Service Abstracts “client satisfaction” and “child welfare”; PsycInfo
“client” and “satisfaction” and “child welfare” (only those available electronically through
U of T or McGill).
Kapp, S.A, & Vela, R.H. (2004) The unheard client: Assessing the satisfaction of parents of
children in foster care. Child & Family Social Work 9 (2), 197–206.
Measuring Client Satisfaction: Developing and Implementing Good Client Satisfaction
Measurement and Monitoring Practices, Office of the Comptroller General Evaluation and Audit
Branch, Treasury Board of Canada
Poertner, J., Harris, G., & Joe, S. (1998). Parents with Children in Care: Assessment of Service
Satisfaction. Report accessed at: http://cfrcwww.social.uiuc.edu/pubs/ListResults2.asp.
Rodger, S., Cummings, A., & Leschied, A.W. (2006) Who is caring for our most vulnerable
children? The motivation to foster in child welfare. Child Abuse & Neglect, 30(10): 1129-1142.
Response to OFCO’s Systemic Recommendations 1997-1999: Section 6 (Washington; Response
to 1997 Recommendation #4: Client Surveys).
Schmidt, F., with Strickland, T. (1998). Client Satisfaction Surveying: A Manager's Guide.
Strategic Research and Planning Group of the Canadian Centre for Management Development
Tilbury, C. (2004) The Influence of Performance Measurement on Child Welfare Policy and
Practice. British Journal of Social Work 34, 225-241
Utah’s Home-Based Client Satisfaction Survey Results (2001). Prepared for Richard Anderson
Utah’s Division of Child and Family Services.
Webster Cluster Child Welfare Client Satisfaction Study (1999). Prepared for Webster Cluster
Decategorization/ Empowerment Board, Fort Dodge, Iowa. www.extension.iastate.edu/cd-
Examples of Tools:
%20Client%20Satisfaction%20Survey%20HO66.pdf – brief generic survey
http://info.dhhs.state.nc.us/olm/forms/dss/dss-5264.pdf – used in community agencies
Prepared By: Della Knoke, January 15, 2007