Hospital Survey on
Patient Safety Culture
Agency for Healthcare Research and Quality
U.S. Department of Health and Human Services
540 Gaither Road
Rockville, MD 20850
Contract No. 290-96-0004
Westat, Rockville, MD
Joann Sorra, Ph.D.
Veronica Nieva, Ph.D.
AHRQ Publication No. 04-0041
This document is in the public domain and may be used and reprinted without permission except
those copyrighted materials noted for which further reproduction is prohibited without the
specific permission of the copyright holders.
Sorra JS, Nieva VF. Hospital Survey on Patient Safety Culture. (Prepared by Westat, under
Contract No. 290-96-0004). AHRQ Publication No. 04-0041. Rockville, MD: Agency for
Healthcare Research and Quality. September 2004.
The Agency for Healthcare Research and Quality (AHRQ) is the lead Federal agency charged
with conducting and supporting research to improve patient safety and health care quality for all
Americans. AHRQ’s goal is to support a culture of safety and quality improvement in the
Nation’s healthcare system that will help speed the adoption of research findings into practice.
To that end, AHRQ has sponsored the development of this survey on patient safety culture. This
tool is useful for assessing the safety culture of a hospital as a whole, or for specific units within
hospitals. Moreover, the survey can be used to track changes in patient safety over time and to
evaluate the impact of patient safety interventions.
In addition, since 2001, AHRQ has supported a wide range of other patient safety research to
develop innovative approaches to collecting, analyzing, and reporting patient safety data;
understanding the impact of working conditions on patient safety, including the sciences of
ergonomics and human factors; and fostering the use of information technology to reduce medical errors.
As a result, many other patient safety products and tools also are available from the Agency.
These can be found on AHRQ’s Website, at http://www.ahrq.gov, or by calling AHRQ’s
publications clearinghouse, at 1-800-358-9295.
I hope that this survey, as well as AHRQ’s other patient safety tools, will be useful in helping
you to ensure that your hospital or health care facility is as safe as possible and, as a result, will
help us to achieve the vision that we all share—a health care system in which patients are never
harmed in the course of receiving care.
Carolyn M. Clancy, M.D.
Agency for Healthcare Research and Quality
Part One: Survey User’s Guide
Chapter 1. Introduction ......................................................................................................................1
Development of the Hospital Survey on Patient Safety Culture ...................................................1
Who Should Complete the Survey ................................................................................................2
Safety Culture Dimensions Measured in the Survey.....................................................................2
Modifying or Customizing the Survey ..........................................................................................3
Contents of This Survey User’s Guide ..........................................................................................4
Chapter 2. Getting Started .................................................................................................................7
Determine Available Resources, Project Scope, and Schedule .....................................................7
Plan Your Project ..........................................................................................................................8
Decide Whether To Use an Outside Vendor .................................................................................8
Form a Project Team .....................................................................................................................9
Chapter 3. Selecting a Sample ...........................................................................................................11
Determine Whom To Survey ........................................................................................................11
Determine Your Sample Size ........................................................................................................12
Compile Your Sample List ............................................................................................................12
Review and Fine-Tune Your Sample ............................................................................................13
Chapter 4. Determining Your Data Collection Methods ...................................................................15
Decide How Surveys Will Be Distributed and Returned ..............................................................15
Establish Points-of-Contact Within the Hospital ..........................................................................16
Chapter 5. Establishing Data Collection Procedures .........................................................................17
Maximize Your Response Rate .....................................................................................................17
Track Responses With or Without Identifiers ...............................................................................18
Assemble Survey Materials ...........................................................................................................20
Track Responses and Response Rates ...........................................................................................22
Chapter 6. Conducting a Web-based Survey .....................................................................................25
Consider the Pros and Cons of Web-based Surveys .....................................................................25
Design and Pretest the Web-based Survey ....................................................................................26
Develop a Web-based Data Collection Plan .................................................................................28
Chapter 7. Preparing and Analyzing Data, and Producing Reports ..................................................31
Identify Complete and Incomplete Surveys ..................................................................................31
Code and Enter the Data ................................................................................................................32
Check and Electronically Clean the Data ......................................................................................33
Analyze the Data and Produce Reports of the Results ..................................................................33
Part Two: Survey Materials
The Survey Form (taken from electronic file) ...................................................................................41
Safety Culture Dimensions and Reliabilities .....................................................................................45
Sample Page from Survey Feedback Report (taken from electronic file) .........................................49
Appendix A. Pilot Study for the Hospital Survey on Patient Safety Culture:
A Summary of Reliability and Validity Findings .......................................................53
Appendix B. Safety Culture Assessment: A Tool for Improving Patient Safety in
Healthcare Organizations ...........................................................................................67
Chapter 1. Introduction
Patient safety is a critical component of health care quality. As health care organizations
continually strive to improve, there is a growing recognition of the importance of establishing a
culture of safety. Achieving a culture of safety requires an understanding of the values, beliefs,
and norms about what is important in an organization and what attitudes and behaviors related to
patient safety are expected and appropriate. A definition of safety culture is provided below.
Safety Culture Definition
The safety culture of an organization is the product of individual and group values,
attitudes, perceptions, competencies, and patterns of behavior that determine the commitment
to, and the style and proficiency of, an organization’s health and safety management.
Organizations with a positive safety culture are characterized by communications founded on
mutual trust, by shared perceptions of the importance of safety, and by confidence
in the efficacy of preventive measures.
Organising for Safety: Third Report of the ACSNI (Advisory Committee on the Safety of Nuclear Installations)
Study Group on Human Factors. Health and Safety Commission (of Great Britain). Sudbury, England: HSE Books, 1993.
Development of the Hospital Survey on
Patient Safety Culture
Recognizing the need for a measurement tool to assess the culture of patient safety in health
care organizations, the Medical Errors Workgroup of the Quality Interagency Coordination Task
Force (QuIC) sponsored the development of a hospital survey focusing on patient safety culture.
Funded by the Agency for Healthcare Research and Quality (AHRQ), the Hospital Survey on
Patient Safety Culture was developed by a private research organization under contract with AHRQ.
To develop this survey, the researchers conducted a review of the literature pertaining to
safety, accidents, medical error, error reporting, safety climate and culture, and organizational
climate and culture. In addition, the researchers reviewed existing published and unpublished
safety culture surveys and conducted in-person and telephone interviews with hospital staff. The
survey was pretested with hospital staff to ensure the items were easily understood and relevant
to patient safety in a hospital setting. Finally, the survey was pilot tested with more than 1,400
hospital employees from 21 hospitals across the United States. The pilot data were analyzed,
examining item statistics and the reliability and validity of the safety culture scales, as well as the
factor structure of the survey through exploratory and confirmatory factor analyses. Based on the
analysis of the pilot data, the survey was revised by retaining only the best items and scales. The
resulting Hospital Survey on Patient Safety Culture has sound psychometric properties for the
included items and scales.
The survey and its accompanying toolkit materials are designed to provide hospital officials
with the basic knowledge and tools needed to conduct a safety culture assessment, along with
ideas for using the data. Part One of the Hospital Survey presents issues inherent to the data
collection process and the overall project organization. Part Two includes the survey form,
followed by a separate overview of the included items, grouped according to the safety culture
dimensions they are intended to measure and the reliability findings derived from the pilot data.
A sample page from the Survey Feedback Report also is provided. Appendix A summarizes the
development of the pilot survey. Appendix B is a journal article on the uses of safety culture
assessments and their place in the clinical treatment environment.
Who Should Complete the Survey
The Hospital Survey on Patient Safety Culture examines patient safety culture from a
hospital staff perspective. The survey can be completed by all types of hospital staff—from
housekeeping and security to nurses and physicians. The survey is best suited for the following groups:
Hospital staff who have direct contact or interaction with patients (clinical staff, such as
nurses, or nonclinical staff, such as unit clerks);
Hospital staff who may not have direct contact or interaction with patients but whose
work directly affects patient care (staff in units such as pharmacy, laboratory/pathology);
Hospital-employed physicians who spend most of their work hours in the hospital
(emergency department physicians, hospitalists, pathologists); and
Hospital supervisors, managers, and administrators.
Note that some physicians have privileges at hospitals but are not hospital employees and
may spend the majority of their work time in nonhospital, outpatient settings. Consequently,
these types of physicians may not be fully aware of the safety culture of the hospital and
generally should not be asked to complete the survey. Careful consideration should be given
when deciding which physicians to include or exclude from taking the survey.
Safety Culture Dimensions Measured in the Survey
The survey places an emphasis on patient safety issues and on error and event reporting. The
survey measures seven unit-level aspects of safety culture:
Supervisor/Manager Expectations & Actions Promoting Safety (4 items),
Organizational Learning—Continuous Improvement (3 items),
Teamwork Within Units (4 items),
Communication Openness (3 items),
Feedback and Communication About Error (3 items),
Nonpunitive Response to Error (3 items), and
Staffing (4 items).
In addition, the survey measures three hospital-level aspects of safety culture:
Hospital Management Support for Patient Safety (3 items),
Teamwork Across Hospital Units (4 items), and
Hospital Handoffs and Transitions (4 items).
Finally, four outcome variables are included:
Overall Perceptions of Safety (4 items),
Frequency of Event Reporting (3 items),
Patient Safety Grade (of the Hospital Unit) (1 item), and
Number of Events Reported (1 item).
Modifying or Customizing the Survey
The survey was developed to be general enough for use in most hospitals. You may find,
however, that the survey uses terms that are different from those used in your hospital, or that
your hospital’s management would like to ask hospital staff additional questions about patient
safety. Anticipating the need for some modification or customization of the survey, the survey
form and feedback report templates are available as modifiable electronic files at the AHRQ
Website (www.ahrq.gov/qual/hospculture/). We recommend making only those changes to the
survey that are absolutely necessary, because changes may affect the reliability and overall
validity of the survey, and may make comparisons with other hospitals difficult.
Here are some suggestions regarding modifications to the survey:
Modifying background items. The survey begins with a background question about the
respondent’s primary work area or unit. The survey ends with some additional
background questions about such topics as staff position, tenure in the organization, and
work hours. Your hospital may wish to modify the responses to these background
questions so they are tailored to reflect the names of your hospital’s work units, staff
position titles, and the like.
Use of the term “unit.” The survey places most of its emphasis on safety culture at the
unit level, because staff will be most familiar with safety culture at this level. There also
is a section that pertains to safety culture across the hospital as a whole. If you work in a
smaller hospital that does not have differentiated units with multiple staff members in
each unit, you may want to consider modifying some of the instructions and/or items in
the survey from a focus on the “unit” to a focus on the hospital as a whole. The term
“unit” also may be replaced by an equivalent term, such as “department,” if it suits your
hospital (just be sure to make this replacement everywhere it applies in the survey).
Adding items. If your hospital would like to add additional items to the survey, we
recommend adding these items toward the end of the survey (after “Section G: Number
of Events Reported”).
Making the survey shorter or removing items. Although the survey takes only about
10 to 15 minutes to complete, your hospital may want to administer a shorter survey with
fewer items. Part Two of the Hospital Survey on Patient Safety Culture includes an
overview of the safety culture dimensions assessed in the survey and the reliability
figures for each dimension. Delete the dimensions that your hospital is not interested in
assessing (be sure to delete all of the items associated with those dimensions). In this
way, your hospital’s results on the remaining safety culture dimensions still can be
compared to other hospitals that use the survey.
Adapting the survey for Web-based data collection. We recommend using a paper-
based survey data collection methodology to make sure you obtain the highest possible
response rates. Despite the probability of lower response rates, however, your hospital
may decide that it is more feasible and logistically advantageous to do data collection
with a web-based survey. Web-based surveys have a wide range of design features and
can involve different data collection procedures, so please be sure to read Chapter 6:
Conducting a Web-based Survey, for guidelines on how to adapt the Hospital Survey for
this type of data collection.
Contents of This Survey User’s Guide
This Survey User’s Guide is designed to assist you in conducting your own hospital survey
on patient safety. This guide provides a general overview of the issues and major decisions
involved in conducting a survey and reporting the results. The guide includes the following chapters:
Chapter 2—Getting Started. Chapter 2 provides information on planning the project,
outlines major decisions and tasks in a task timeline, and discusses hiring a vendor and
forming a project team.
Chapter 3—Selecting a Sample. Chapter 3 describes the process of selecting a suitable
sample group from your staff.
Chapter 4—Determining Your Data Collection Methods. Chapter 4 outlines decisions
about how surveys will be sent and returned and discusses the importance of establishing
points-of-contact within the hospital.
Chapter 5—Establishing Data Collection Procedures. Chapter 5 suggests techniques
for maximizing your response rate, discusses the importance of protecting confidentiality,
and outlines survey materials to be assembled.
Chapter 6—Conducting a Web-based Survey. Chapter 6 presents the pros and cons of
using a Web-based survey approach to data collection and outlines special considerations
that must be taken into account.
Chapter 7—Preparing and Analyzing Data, and Producing Reports. Chapter 7
discusses the steps needed to prepare the data and analyze the responses and provides
suggestions for producing feedback reports.
Chapter 2. Getting Started
Before you begin, it is important to understand the basic tasks involved in a survey data
collection process and decide who will manage the project. This chapter is designed to guide you
through the planning stage of your project.
Determine Available Resources, Project Scope, and Schedule
Two of the most important elements of an effective project are a clear budget to determine
the scope of your data collection effort and a realistic schedule. Therefore, to plan the scope of
the project, you need to think about your available resources. You may want to ask yourself the following questions:
How much money and/or resources are available to conduct this project?
Who within the hospital is available to work on this project?
When do I need to have the survey results completed and available?
Do we have the technical capabilities to conduct this project in the hospital, or do we
need to consider using an outside company or vendor for some or all of the tasks?
You should read this entire Survey User’s Guide before deciding on a budget and the
project’s scope, because this document outlines the tasks that need to be accomplished. Each task
has interrelated cost and scheduling implications to consider. Use the following guidelines to
determine your budget and plan:
Consider all of the project tasks and whether the tasks will be performed in-house or
through an outside company or vendor.
Develop initial budget and scheduling estimates and revise as needed given your
available resources, existing deadlines, and project implementation decisions.
Include a cushion for unexpected expenses, and account for tasks that may take longer than expected.
Plan Your Project
Use the timeline below as a guideline in planning the tasks to be completed. Plan for at least
10 weeks from the beginning of the project to the end of data collection. Add a few more weeks
for data cleaning, analysis, and report preparation. If you are conducting a web survey, add
several weeks to the beginning of the timeline to allow time for adapting the survey to a
web-based format and pretesting to ensure that the web version works properly before
beginning data collection.
Table 1. Task Timeline for Project Planning
Getting Started - Ch. 2
Determine Available Resources, Project Scope & Schedule
Decide Whether To Use an Outside Vendor (& Select Vendor)
Form a Project Team
Selecting A Sample - Ch. 3
Determine Whom To Survey
Determine Your Sample Size
Compile Your Sample List
Review and Fine-Tune Your Sample
Determining Your Data Collection Methods - Ch. 4
Decide How Surveys Will Be Distributed and Returned
Establish Points-of-Contact Within the Hospital
Establishing Data Collection Procedures - Ch. 5
Decide Whether To Track Responses Through Identifiers
Assemble Survey Materials (develop and print materials)
Send Prenotification Letter
Send First Survey
Track Responses and Response Rates
Send First Reminder
Send Second Survey (end of data collection)
Decide Whether to Use an Outside Vendor
You may want to consider using an outside company or vendor either to handle your survey
data collection tasks or to analyze the data and produce reports of the results. Hiring a vendor
may be a good idea for several reasons. Working with an outside vendor may help ensure
neutrality and the credibility of your results. In addition, since confidentiality of survey
responses is a typical concern, staff may feel their responses will be more confidential when they
are returned to an outside vendor. Vendors typically also have experienced staff to perform all of
the necessary activities and the facilities and equipment to handle the tasks. A professional and
experienced firm may be able to provide your hospital with better quality results in a more timely
manner than if you were to do the tasks yourself.
On the other hand, the use of a vendor may add too much additional expense to your project.
If your hospital system has a corporate headquarters, you may want to find out if the
headquarters staff is capable of and interested in conducting a survey of your hospital and
analyzing the data for you. Your hospital system may be interested in conducting a system-wide
survey effort, not just in your hospital. Moreover, your hospital’s staff may feel more
comfortable about the confidentiality of their responses if surveys can be returned to corporate headquarters.
If you are considering hiring an outside vendor, the following guidelines may help you to
select the right one:
Look for a vendor with expertise in survey research. Local universities may have their
own survey research centers or be able to refer you to vendors. You also may inquire
within your hospital or hospital system to find out if particular vendors have been used
before for survey data collection, analysis, and reporting.
Gain an understanding of the vendor’s capabilities and strengths, so you can match them
to the needs of your project. Determine whether the vendor can conduct all of the project
components you want them to handle. Some vendors will be able to handle your feedback
report needs; others will not.
Provide potential vendors with a written, clear outline of work requirements. Make tasks,
expectations, deadlines, and deliverables clear and specific—mention all documentation,
files, data sets, and other deliverables you expect to receive. Then, ask each vendor to
submit a short proposal describing the work they plan to conduct, the qualifications of
their company and staff, and details regarding methods and costs.
Meet with the vendor to make sure you will be able to work well together.
Once you have chosen a vendor, institute monitoring, supervision, and problem-solving procedures.
Form a Project Team
Whether you conduct the survey in-house or through an outside vendor, you will need to
establish a project team responsible for planning and managing the project. Your project team
may consist of one or more individuals from your own hospital staff, outsourced vendor staff, or both.
The Project Team’s Responsibilities
The project team is responsible for a variety of duties—either for conducting them in-house
or for monitoring them if an outside vendor is hired. Highlights of some of these project duties include:
Planning and budgeting—Determining the scope of the project based on available
resources, planning project tasks, and monitoring the budget.
Selecting a sample—Determining how many and which staff to survey.
Establishing department-level contact persons—Contacting department- and unit-level
points-of-contact in the hospital to support survey administration, maintain open
communication throughout the project, and provide assistance.
Preparing survey materials—Printing surveys, preparing postage-paid return envelopes
and mailing labels, and compiling these components for your survey mailout.
Distributing and receiving survey materials—Distributing prenotification letters,
surveys, and nonresponse postcards; and handling receipt of completed surveys.
Tracking survey responses and response rates—Monitoring who has returned the
survey and who should receive followup materials.
Handling data entry, analysis, and report preparation—Reviewing survey data for
respondent errors and data entry errors in electronic data files, conducting data analysis,
and preparing a report of the results.
Coordinating with and monitoring an outside vendor (optional)—Outlining the
requirements of the project to solicit bids from outside vendors, selecting a vendor,
coordinating tasks to be completed in-house versus by the vendor, and monitoring
progress to ensure that the necessary work is completed and deadlines are met.
The remainder of this Survey User’s Guide contains the information necessary to collect
survey data using an in-house project team. If you decide to hire a vendor, you may use the
information as a resource to facilitate communication with your vendor about the various project
tasks and decisions that will be required.
Chapter 3. Selecting a Sample
The population from which you select your sample will be staff in your hospital or hospital
system. You either can administer surveys to everyone in your population of hospital staff, or
you can administer surveys to a subset or sample of your population. Although surveying all staff
may seem simple or most desirable, the additional time and resources required may eliminate
that option. If you decide to administer surveys to all hospital staff, this chapter is not applicable.
If you are uncertain or have decided that you will administer surveys to a sample of
hospital staff, however, this chapter tells you how to select your sample.
When you select a sample, you are selecting a group of people that closely represents the
population so that you can generalize your sample’s results to the broader population. To select
your sample, you need to determine which hospital staff you want to survey and the number of
staff that need to be surveyed.
Determine Whom To Survey
All staff in your hospital or hospital system represent your population. From this population,
you may want to survey staff from every area of the hospital, or you may want to focus on
specific units, staffing categories, or staffing levels. There are several ways to select a sample
from a population. Several types of samples are described below. Select the type that best
matches your needs, taking into account what is practical given your available resources.
Staff in particular staffing categories. You may be interested only in surveying staff in
specific staffing categories, such as nursing. With this approach, you may select all staff
within a staffing category or select a subset of the staff. This approach alone, however,
may not be sufficient to represent the views of all staff in the hospital.
Staff in particular areas/units. You may want to survey staff in particular hospital areas
or units, such as OB/GYN, Emergency, Pharmacy, etc. The list below presents three
examples of ways staff can be selected using this approach, listed in order from most to
least representative of the entire hospital population:
A subset of staff from all areas/units (most representative).
All staff from some areas/units.
A subset of staff from some areas/units (least representative).
A combined approach. If possible, we recommend surveying staff using a combination
of the two sample types just described. For example, you may be interested in surveying
all nurses (a staffing category), but only a subset of staff from every hospital area
(excluding nursing). Using a combination of sample types allows you either to
oversample or selectively sample certain types of staff in an attempt to thoroughly
represent the diversity of hospital staff.
Determine Your Sample Size
The size of your sample will depend on whom you want to survey and your available
resources. While your resources may limit the number of staff you can survey, the more staff you
survey, the more likely you are to adequately represent your population.
To determine your sample size, think about your budget and how many responses you want
to receive (i.e., your response goal). Because not everyone will respond, you can expect to
receive completed surveys from about 30 percent to 50 percent of your sample. Therefore, to
reach your response goal, your sample size should be at least twice the number of responses
you want to receive. If the number of responses you eventually want to achieve is 200
completed surveys, be prepared to administer surveys to at least 400 staff members (an example
of sample selection is presented at the end of this chapter).
Your budget may determine the number of staff you can sample. To reach an adequate
number of responses, you will need to send initial surveys as well as followup surveys to those
who do not respond to the first survey. Your budget also should take into consideration
additional costs for materials such as envelopes and postage, if you are mailing surveys.
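The sample-size rule above can be sketched as a short calculation. The following Python function is illustrative only (the function name and the response-rate figures are assumptions based on the 30 to 50 percent range and the "at least twice the response goal" guidance given above):

```python
import math

def required_sample_size(response_goal, expected_response_rate=0.5):
    """Estimate how many staff to survey to reach a response goal.

    With an expected response rate of 30-50 percent, the sample must be
    at least the goal divided by the rate -- i.e., at least twice the
    goal when a 50 percent response rate is expected.
    """
    if not 0 < expected_response_rate <= 1:
        raise ValueError("expected_response_rate must be in (0, 1]")
    return math.ceil(response_goal / expected_response_rate)

# A goal of 200 completed surveys at a 50% response rate requires
# surveying at least 400 staff, as in the example above.
print(required_sample_size(200, 0.5))  # 400
print(required_sample_size(450, 0.5))  # 900
print(required_sample_size(200, 0.3))  # 667
```

Note that a more pessimistic response-rate assumption (30 percent rather than 50 percent) can increase the required sample substantially, which is worth building into the budget.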
Compile Your Sample List
After you determine whom you want to survey and your sample size, compile a list of the
staff from which to select your sample. When compiling your sample list, include several items
of information for each staff member:
First and last name,
Internal hospital mailing address, or home or office addresses if surveys will be mailed,
E-mail address (if conducting a Web-based survey or using e-mail to send prenotification
letters, web survey hyperlinks, or reminders),
Hospital area/unit, and
Staffing category or job title.
If you are selecting ALL staff in a particular staffing category, hospital area, or unit, no
sampling is needed, so simply compile a list of all these staff. If you are selecting a subset or
sample of staff from a particular staffing category, hospital area, or unit, you will need to use a
method such as simple random sampling or systematic sampling.
Simple Random vs. Systematic Sampling
Simple random sampling involves selecting staff randomly, such that each staff member has an
equal chance of being selected. Systematic sampling essentially involves selecting every Nth person
from a population list. For example, if you have a list of 100 names in a particular group and need
to select 25 to include in your sample, you would choose to begin at a random point on the list and
then select every 4th staff member to compile your sample list. Thus, if you began with the first
person on the list, you would select the 4th, 8th, 12th, 16th, etc. staff member, up to the 100th staff
member, compiling a total of 25 names in your sample list.
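The two selection methods described in the box can be sketched in Python as follows. This is a minimal illustration, not part of the survey toolkit; the function names and the staff-list format are assumptions:

```python
import random

def systematic_sample(staff_list, sample_size, start=None):
    """Select every Nth person from a list (systematic sampling).

    N is the list length divided by the desired sample size; the
    starting point is chosen at random unless one is supplied.
    """
    interval = len(staff_list) // sample_size
    if start is None:
        start = random.randrange(interval)
    # Because start < interval, taking every `interval`-th name from
    # `start` yields exactly `sample_size` selections.
    return staff_list[start::interval][:sample_size]

def simple_random_sample(staff_list, sample_size, seed=None):
    """Select staff so each member has an equal chance of inclusion."""
    rng = random.Random(seed)
    return rng.sample(staff_list, sample_size)

# The box's example: 100 names, 25 needed, so every 4th person is
# selected from a chosen starting point.
names = ["staff_%03d" % i for i in range(1, 101)]
picked = systematic_sample(names, 25, start=0)
print(len(picked))  # 25
```

Either method yields a sample of the desired size; systematic sampling is often easier to carry out by hand from a printed staff roster, while simple random sampling is straightforward when the list is already in electronic form.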
Review and Fine-tune Your Sample
Once you have compiled your sample list, review the list to make sure it is appropriate to
survey each staff member on the list. To the extent possible, ensure that this information is
complete, up-to-date, and accurate. Points to check for include:
Staff on administrative or extended sick leave,
Staff who appear in more than one staffing category or hospital area/unit,
Staff who have moved to another hospital area/unit,
Staff who no longer work at the hospital, and
Other changes that may affect the accuracy of your list of names or mailing addresses.
If you believe there are certain staff who should not receive the survey or that your records
are not complete, selectively remove people from the list. If you remove someone from the list,
add another staff member in her/his place.
Revising Your Sample
You may review your list and realize that you would like to survey an additional staffing
category or hospital area that was not part of your initial sample. In this case, you will need to
add to your list.
Selecting a Sample—An Example
Suppose you work in a 200-bed hospital with 1,400 staff members. Nursing is the single
largest staffing category, with 1,000 staff. Smaller hospital areas or units have a combined
total of 100 non-nursing staff, and larger hospital areas or units have a combined total of 300
non-nursing staff.
Determine Whom To Survey. You decide to survey a sample of nurses, all non-nursing
staff from smaller hospital areas or units, and all non-nursing staff from the larger hospital
areas or units. You therefore choose a combination approach to select your sample.
Determine Your Sample Size. Your response goal is 450 completed surveys, and this goal
fits within your budget. Therefore, your sample size will be 900 staff members (expecting a
50% response rate).
Compile Your Sample List. Your final sample list of 900 staff members consists of:
1. Nursing—From the total of 1,000 nurses, a sample of 500 nurses is selected (250 expected
completes). The sample was selected as follows:
a) A list of the 1,000 nurses was produced.
b) Using systematic sampling, every other nurse on the list was selected to be included in
the sample until 500 names were selected (1,000 total nurses divided by 500 nurses
needed = every 2nd nurse).
2. Smaller hospital areas or units—All 100 non-nursing staff (50 expected completes).
3. Larger hospital areas or units—All 300 non-nursing staff (150 expected completes).
Review and Fine-Tune Your Sample. When verifying the contact information for the initial
sample of 900 staff, you found that 25 staff no longer work for the hospital and should be
dropped from the list. You may or may not want to replace these names. To replace the names,
randomly select additional staff from the same staffing categories or hospital areas as the staff
who were dropped.
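The arithmetic in this example can be checked with a short calculation (all figures are taken from the example above):

```python
# Figures from the example: 450 completed surveys needed, 50% expected response
response_goal = 450
expected_rate = 0.50
sample_size = int(response_goal / expected_rate)
print(sample_size)  # 900

# Sample composition and expected completes for each group
groups = {
    "Nursing (sampled from 1,000)": 500,
    "Smaller areas, all non-nursing staff": 100,
    "Larger areas, all non-nursing staff": 300,
}
expected_completes = {g: int(n * expected_rate) for g, n in groups.items()}
print(sum(groups.values()), sum(expected_completes.values()))  # 900 450
```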
Chapter 4. Determining Your Data Collection Methods
Once you have determined your available resources, project scope, and timeline; established
a project team; and selected your sample (or populations to include), you need to decide how to
collect the data. This chapter guides you through decisions about data collection methods. The
methods you choose for sending and returning surveys affect how your staff views the
confidentiality of their responses, and this will impact your overall survey response rate. To
achieve maximum response rates among all hospital staff, we recommend using a paper-based
data collection method. Current research and evidence show that Web-based surveys have lower
response rates than paper surveys (Groves, 2002), so the procedures outlined in Chapters 4 and 5
assume a paper-based approach. If your hospital is considering a web survey, Chapter 6 presents
the pros and cons and outlines special considerations that need to be taken into account.
Decide How Surveys Will Be Distributed and Returned
When deciding how surveys will be distributed and returned, consider any previous
experience your hospital has had with surveys. Have previous hospital surveys been mailed to
staff home addresses or administered through the internal mail system at work? Were surveys
returned through contact persons, the internal mail system, to “drop box” locations in the
hospital, or by mail using postage-paid return envelopes? Were surveys returned to a location
within the hospital or to an outside vendor? What were employee survey response rates? If
possible, it is best to use methods that previously were successful in your hospital.
Surveys can be mailed directly to staff home addresses or administered through an internal
mail system at work. If surveys are mailed to homes, you need to verify that you have correct,
updated home addresses of staff members and account for outgoing and return postage in your
budget. If surveys are administered to staff at work, we recommend that you provide explicit
instructions and allow staff to complete the survey during work time to emphasize hospital
administration’s support for the data collection effort.
If your budget is limited, completed surveys can be returned to a designated hospital contact
person through the internal mail system or to survey drop-off locations within the hospital. This
method of returning surveys, however, may raise staff concerns about the confidentiality of their
responses. Rely on your hospital’s past experience with these methods if they have been
successful.
If your hospital has had little experience administering employee surveys or you feel there
are confidentiality concerns, it is best to have staff mail their completed surveys directly to an
outside vendor or an address outside the hospital via postage-paid return envelopes. If you do not
use a vendor, consider having the surveys returned to a corporate headquarters address so staff
will be assured that no one at their hospital will see the completed surveys. Remember, if surveys
are returned through the mail, you will need to account for return postage in your budget.
Establish Points-of-Contact Within the Hospital
You will want to designate people in the hospital to serve as points-of-contact for the survey.
Points-of-contact increase the visibility of the survey by showing their support for the effort and
by helping to answer questions about the survey. Decide how many points-of-contact are needed
by taking into account the number of staff and hospital areas or units taking the survey. We
recommend using at least two types of points-of-contact.
A Main Hospital Point-of-Contact
At least one main hospital point-of-contact should be appointed from the project team so that
staff will have one central source for their questions or concerns about the survey. We
recommend including contact information for the main hospital point-of-contact in the
prenotification letter or survey cover letter sent to staff (i.e., phone number, e-mail address,
office number). The main hospital point-of-contact has several duties, including:
Answering questions about survey items, instructions, or processes,
Responding to staff comments and concerns,
Helping to coordinate survey mailing and receipt of completed surveys,
Communicating with outside vendors as needed, and
Communicating with other points-of-contact as necessary.
Unit-Level Points-of-Contact
You may decide to recruit points-of-contact for each hospital area, unit, or staffing category
included in your sample. A unit-level point-of-contact is responsible for promoting and
administering the survey within his/her unit and for reminding unit staff to complete the survey,
without coercing them in any way. An informational letter describing these duties and the overall
survey process should be sent to potential contacts before you begin survey administration. Unit-
level contacts typically are at the management or supervisory level, such as nurse managers,
department managers, or shift supervisors.
Chapter 5. Establishing Data Collection Procedures
Once you have decided how you want the surveys distributed and returned, and have
established at least one main hospital point-of-contact, you need to make several decisions
regarding your data collection procedures. This chapter describes strategies for maximizing your
response rate and outlines methods for tracking responses and collecting data.
Maximize Your Response Rate
The response rate is the total number of complete returned surveys divided by the total
number of eligible staff sampled. Achieving a high response rate is very important for making
valid generalizations about your hospital based on your survey data collection effort. Surveys
are used to infer something about a particular population, so there must be enough survey
respondents to accurately represent the hospital or larger population before you can legitimately
present your survey results as a reflection of your hospital’s safety culture.
If your response rate is low, there is a danger that the many staff who did not
respond to the survey would have answered very differently from those who did respond.
Therefore, an overall response rate of 50 percent or more should be your minimum goal. The
higher the response rate, the more confident you can be that you have an adequate representation
of the staff’s views. To achieve high response rates, we recommend a basic data collection
approach that involves sending a paper survey and the following items, in the order presented:
1. Prenotification letter. Before administering the survey, create a letter signed by your
hospital’s CEO or president on hospital letterhead. The letter will inform all the staff in
your sample that they will be receiving a survey and that hospital administration is in full
support of the survey effort. If an outside vendor is handling the data collection duties,
use the letter as an opportunity to introduce the vendor.
2. First survey. About 1 week later, send the survey to all staff in your sample group.
Include a supporting cover letter similar in content to the prenotification letter and
instructions for completing and returning the survey. Include preaddressed postage-paid
envelopes to make it easy for respondents to return their surveys.
In the cover letter, or on the survey form, ask staff to complete the survey within 7 days,
but do not print an actual deadline date on the letter or survey. Sometimes data
collection schedules get delayed, and you do not want to reprint letters or surveys
because they are outdated. In addition, sometimes people will not complete a survey if
they notice that it is beyond the deadline date.
3. First reminder postcard or letter. Approximately 2 weeks after sending the survey,
send a reminder postcard or letter to the sample group thanking those who have already
responded and reminding others to please respond. The reminders can be sent to
everyone, or only to those who have not responded.
4. Second survey. Two weeks after sending the first reminder, send a second survey to
nonrespondents, including a cover letter thanking those who have already responded and
reminding others to please complete the second survey. If you are not using identifiers to
track responses, it may be necessary to send a second survey to everyone in your sample.
5. Second reminder postcard or letter (optional). Approximately 1 week after sending the
followup survey, you may choose to send a second and final reminder.
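The five-step mailing sequence above can be laid out as calendar dates; the start date below is hypothetical, and the offsets follow the intervals recommended in the steps:

```python
from datetime import date, timedelta

start = date(2024, 3, 4)  # hypothetical day the prenotification letter goes out
schedule = [
    ("Prenotification letter", 0),
    ("First survey", 7),                 # about 1 week later
    ("First reminder postcard", 21),     # ~2 weeks after the first survey
    ("Second survey", 35),               # 2 weeks after the first reminder
    ("Second reminder (optional)", 42),  # ~1 week after the second survey
]
for step, offset in schedule:
    print(f"{start + timedelta(days=offset)}  {step}")
```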
Additional Ways To Maximize Response Rates
Publicize the Survey. Announce the survey in hospital newsletters, on message boards,
via flyers posted throughout the hospital, and through staff e-mail. Publicizing the survey
both prior to and during survey mailout will help to legitimize the effort and increase your
response rate.
Use Incentives. Offering incentives can be a good way to increase responses to a survey
because respondents often ask, “What’s in it for me?” You may want to offer individual
incentives, such as a raffle for cash prizes or gift certificates, or you can offer group
incentives, such as catered lunches for units with at least a 75 percent response rate. Be
creative and think about what would motivate your staff to complete the survey.
Track Responses With or Without Identifiers
To ensure confidentiality, respondents are asked not to provide their names on the completed
survey forms. It is sometimes helpful, however, to include a number or code, known as an
identifier, on your surveys. Identifiers typically are used to track whether individuals have
responded to the survey and/or to track the particular unit or hospital associated with a completed
survey. The advantage of using identifiers is that they allow you to track responses so you:
Send reminders and followup materials only to nonrespondents, saving on costs;
Eliminate the possibility of someone completing more than one survey; and
Calculate response rates at the unit or hospital level (hospital-level response rates are
important when administering the survey in several hospitals at the same time).
On the other hand, there are a number of disadvantages to using identifiers. Some
respondents will be so concerned about the confidentiality of their responses that they will de-
identify their own surveys by removing or marking out their identification number or code.
Respondents also may refuse to complete the survey if they are concerned that their response
will be tracked, especially if the data will be collected and analyzed within the hospital (rather
than by an outside vendor). Furthermore, the inclusion of any type of identifier on surveys
mandates a very strict adherence to procedures protecting the confidentiality of the information
linking individual staff to the identification numbers or codes.
Guidelines for Using Identifiers
Following careful procedures for using identifiers is critical to maintaining trust that
survey responses are confidential and answers will not be linked back to individual staff.
If you decide to use identifiers, you must ensure that only key project personnel have access
to information linking individual names or groups to the identification numbers or codes.
Do not use group identifiers (e.g., for a particular unit or staffing category) if there are
fewer than 10 staff in a group because individual responses are more identifiable in a small
group. Do not use obvious identifiers (e.g., do not use “East3”). At the conclusion of data
collection, information linking names to identifiers should be destroyed.
Reply Postcards with Identifiers
An alternative to using identifiers printed on surveys is to include in the survey materials a
postage-paid reply postcard that has an identifying number or code (with no identifiers on the
actual surveys). In the sample reply postcard, the number “155” is one respondent’s individual
identification number. When respondents return their completed surveys, they are instructed to
return the reply postcard separately, which notifies you that the staff member with the particular
individual identification number has returned the survey and therefore does not need to be sent
reminder materials. Using a separate postcard ensures the anonymity of survey responses
because there is no way to link any completed survey answers to a particular individual. The
main obstacle to this approach is that it is not an exact means of tracking responses, because
there may be people who send in their surveys but not their postcard, and vice versa.
Sample Reply Postcard with an Identifier:
When you complete and return your survey, please return this postcard
separately to let us know you have responded. Thank you very much for
your time and participation.
I am mailing this postcard to let you know that I
have returned my survey in a separate envelope.
If you decide it is best not to use any identifiers, reminder letters and followup surveys must
be sent to all staff with instructions to disregard the second survey if the first survey was
completed and mailed. You may receive phone calls from respondents who completed and
returned their survey, wondering why they received followup materials, but you can instruct
them to disregard the materials and remove their names from further followup mailings.
Assemble Survey Materials
The following materials will need to be assembled in preparation for the survey mailing. To
improve response rates, it is advantageous to personalize outer envelopes and letters (e.g.,
addressed to “Dear John Doe”). Care should be taken, however, to prevent names from
appearing on the actual survey forms.
Publicity materials (optional). Depending on how extensively you survey your staff,
you may want to post informational flyers or send e-mail notices publicizing the survey.
Unit-level point-of-contact letter (optional). You may want to send a letter to unit-level
contact persons describing the purposes of the survey and explaining their role in the
survey effort. The letter should be printed on official hospital letterhead, dated with
month/year, signed by the hospital CEO, and should provide background information and
instructions regarding the survey.
Prenotification letter. The prenotification letter should describe the purposes of the
survey and contain the completion instructions. This letter also should be on
official hospital letterhead, signed by the hospital CEO or president.
Cover letter. The cover letter should be on official hospital letterhead and is to be
included with the first packet of survey materials. Include the following points:
Why the hospital is conducting the survey and how staff responses will be used,
Which hospital staff were selected to be surveyed (e.g., all staff, nursing staff, all
clinical staff, a random sample of staff, etc.),
How much time is needed to complete the survey,
Confidentiality or anonymity assurances,
Suggested reply timeframe and how to return completed surveys,
Incentives for which staff will be eligible, if they respond (Optional), and
Contact information for the main hospital point-of-contact.
These points can be summarized in a few short paragraphs. For example:
“The enclosed survey is part of our hospital’s efforts to better address patient
safety. The survey is being distributed to (sample description). It will take about
10 to 15 minutes to complete and your individual responses will be kept
confidential. Only group statistics will be prepared from the survey results.
Please complete your survey and return it WITHIN THE NEXT 7 DAYS. (Do
not provide a specific date) When you have completed your survey, please
(provide return instructions). (Optional incentive) As a way of thanking staff
members for their participation, respondents will receive (describe incentive).
Please contact [contact name and job position] if you have any questions
[provide phone number and email address]. Thank you in advance for your
participation in this important effort.”
Reminder postcards or letters. A reminder postcard or letter is sent to nonrespondents
after the first survey administration, asking them to please complete and return their
surveys.
Surveys. If you are not tracking responses and plan to send second surveys to everyone
in your sample, print at least twice the number of surveys as staff in your sample. If you
are tracking responses and will send only second surveys to nonrespondents, you may
print fewer surveys. For example, if your hospital’s survey response history typically
results in a 20 percent response to the first survey, you could print 80 percent more
surveys than were distributed initially, to prepare for the followup survey mailing—800
staff multiplied by .80 equals 640, for a total estimate of 1,440 printed surveys needed.
Labels. You will need labels for the outside of each survey mailing envelope, addressed
either to the home address or internal hospital mailing address of each staff member in
your sample. Return address labels may be used on return envelopes. Labels also may be
used to place identifiers onto surveys.
Envelopes. You will need a set of outer envelopes to send the surveys and a set of return
envelopes for the return of completed surveys. Preprint the return address on the return
envelopes (or use labels). To make sure that the cover letter, survey, and return envelope
fit without folding or bending, use slightly larger outer envelopes. Calculate the number
of envelopes based on the number of initial and followup surveys to be sent.
Postage. If surveys are to be sent through the mail, weigh the outgoing packet of survey
materials to ensure you have adequate postage. If surveys are to be returned through the
mail, weigh the survey and the return envelope to ensure you have adequate postage on
the return envelopes. Calculate the total amount of postage based on the number of initial
and followup surveys to be mailed.
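The printing estimate in the Surveys item above can be sketched as a small calculation (the 800-staff figure and 20 percent first-round response come from the text's example):

```python
def surveys_to_print(sample_size, first_round_rate, tracking):
    """Estimate how many surveys to print.
    Without tracking, everyone receives a second survey (print twice the
    sample). With tracking, followups go only to expected nonrespondents."""
    if not tracking:
        return sample_size * 2
    followups = round(sample_size * (1 - first_round_rate))
    return sample_size + followups

print(surveys_to_print(800, 0.20, tracking=True))   # 800 + 640 = 1440
print(surveys_to_print(800, 0.20, tracking=False))  # 1600
```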
Track Responses and Response Rates
You, or your vendor, will need to follow survey response rates by tracking completed
surveys as they are returned. Tracking returned surveys can be done very simply with a
spreadsheet software program. If you are planning to use survey identifiers, create a separate row
for each individual identifier. Create columns across the top of your spreadsheet for the date the
initial survey is distributed, the date the returned survey is received (so respondents can be
excluded from followup reminders), as well as the distribution dates for any first reminders,
second surveys, or second reminders. Compile response rates for each round of followup
contacts—at the time of the first reminder, the second survey, and the final reminder—to track
your response progress.
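A tracking sheet like the one described above can be kept in any spreadsheet program; this sketch (with hypothetical identifiers and dates) shows the same structure in code:

```python
import csv
import io

# One row per identifier; one column per mailing or receipt event
fieldnames = ["identifier", "survey_sent", "survey_returned",
              "first_reminder", "second_survey", "second_reminder"]
rows = [
    {"identifier": "101", "survey_sent": "2024-03-11",
     "survey_returned": "2024-03-18"},
    {"identifier": "102", "survey_sent": "2024-03-11"},
]

# Staff with no return date are still due followup mailings
nonrespondents = [r["identifier"] for r in rows if not r.get("survey_returned")]
print(nonrespondents)  # ['102']

# Write the sheet out so it can be opened in spreadsheet software (CSV)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames, restval="")
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])  # header row
```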
Closing Out Data Collection
To ensure you receive as many responses as possible, plan to hold open data collection for at
least 2 weeks after the second survey or second followup reminder is sent. Referring back to the
project timeline on page 8, allow 8 weeks or more from the prenotification letter to the close of
your data collection period. There always will be a few respondents who return the survey
very late, so you may want to take this into consideration and hold the data collection period
open longer. Once the established cutoff date arrives, close out data collection and begin
preparing the data for analysis as described in the following chapter.
Calculating Your Response Rate
To calculate your survey response rate, divide the number of completed and returned surveys
(numerator) by the number of surveys sent (denominator). This equation often needs adjusting,
however. The number of surveys “returned” depends on the criteria you use to define a
“completed” survey. The number of surveys “sent” depends on how many staff actually receive
their survey. If a survey is returned due to a bad address or because a selected staff member no
longer works at the hospital, the case is ineligible for inclusion and would be subtracted from the
denominator. We recommend using the following formula for an adjusted response rate:
          Number of complete, returned surveys
------------------------------------------------------------------
Number of surveys distributed – (ineligibles + incomplete surveys)
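The adjusted response rate formula is straightforward to compute; the figures below are hypothetical:

```python
def adjusted_response_rate(completes, distributed, ineligibles, incompletes):
    """Adjusted response rate: complete returned surveys divided by
    surveys distributed minus ineligible and incomplete cases."""
    return completes / (distributed - (ineligibles + incompletes))

# Hypothetical figures: 900 surveys distributed, 25 ineligible cases (bad
# addresses, departed staff), 15 incomplete surveys, 450 complete returns
rate = adjusted_response_rate(450, 900, 25, 15)
print(round(rate, 3))  # 450 / 860, about 0.523
```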
Chapter 6. Conducting a Web-based Survey
As mentioned earlier in this guide, current research and evidence show that Web-based
surveys typically have lower response rates than paper-based surveys (Groves, 2002). It is
important to reiterate that low response rates will limit your ability to generalize your results.
However, because Web-based surveys do have certain advantages, your hospital may be
considering this type of approach. To help you decide which approach is best suited to your
situation, or if a combination approach is warranted, this chapter presents the pros and cons of
conducting a Web-based survey. The chapter also outlines special considerations that need to be
taken into account and presents guidelines that will help you make the most of a Web-based
survey, should you decide to take that approach.
A major factor, of course, is cost. While the costs of a Web-based survey may seem lower
because there are no printing, postage, or data entry expenses, do not overlook the labor costs
associated with Web survey programming and testing. At the same time, a Web-based approach
generally tends to be more economical as the survey sample size becomes larger. Surveys
sampling only a few hundred individuals are likely to be more cost-effective using a paper-based
survey approach. Cost, however, is but one of the many factors that need to be considered in
deciding which approach to take.
Consider the Pros and Cons of Web-based Surveys
There are a number of pros and cons to conducting Web-based surveys. The relative weight
given to each of these advantages and disadvantages, and the final decision on whether to
conduct a web survey, will be determined by your hospital’s specific circumstances, capabilities,
resources, and goals.
The primary advantages to Web-based surveys are:
Simpler logistics. Web-based surveys can be virtually paperless, making them easier in
some ways to manage. There are no surveys to print; no handling of letters, labels,
envelopes, or postage; and there are no completed paper surveys to manage.
No need for data entry and minimal need for data cleaning. Web-based surveys
typically are programmed to prevent invalid responses. Moreover, the responses are
automatically copied to a database, so the need for separate data entry is eliminated and
the need for data cleaning is greatly reduced.
Potential for faster data collection. While not always the case, Web-based surveys can
facilitate shorter data collection periods. Web-based surveys involving e-mail notification
and follow-up correspondence are received immediately after being sent, so the time
interval between survey administration steps often is reduced.
There also are several disadvantages to web surveys:
Time and resources needed for development and testing. Time and resources are
needed to program a Web-based survey so that it meets acceptable standards of
functionality, including usability requirements, log-in usernames and/or passwords, and
the convenience of allowing respondents the option of saving their responses and
returning later to finish the survey. Of equal importance are security safeguards for
protecting the data. In addition, the Web-based survey must be pretested thoroughly to
ensure that it works properly and that the resulting data set is established correctly.
Limited access to the internet or e-mail. A Web-based survey should be accessible to
all the individuals in your sample group. Barriers to internet service and e-mail
accessibility issues will lead to poor response rates. Many hospitals have only a limited
number of internet-connected computers. If computers are located centrally, staff may be
concerned about the privacy of their responses. In addition, all staff may not have e-mail
access or may not access their e-mail regularly. In such cases, e-mail notification or e-
mail messages with hyperlinks to the survey website may not be effective instruments for
getting respondents to complete the survey.
Individual differences in computer and internet use. The intensity of computer and
internet usage is the most important predictor of cooperation in a Web-based survey
(Groves, 2002). There are likely to be staff among your sample group who are not
computer or internet savvy, and, therefore, may not respond to the survey if this is their
only means of accessing the survey.
Design and Pretest the Web-based Survey
If you decide after weighing the pros and cons of conducting a Web-based survey that this is
the approach your hospital will take, there are a number of web survey design aspects to
consider. If your hospital plans to use off-the-shelf commercial software, rather than having a
vendor design and develop a custom Web-administered survey, assess the various software
applications available to you and make your selection on the basis of which product best
handles the features and recommendations we outline below.
Web-based Survey Design Features
While research on the best ways to design internet-administered surveys continues to evolve,
current knowledge suggests that the following are elements of a good Web-based survey:
Do not force respondents to answer every question. Permit respondents to continue
completing the survey after choosing not to answer a particular question. Forcing
respondents to answer each question before being allowed to move on to the next
question is something that not only annoys respondents, but is not advisable on the
Hospital Survey on Patient Safety Culture because some respondents may have legitimate
reasons for not answering an item. Forcing a response would cause them to make a wild
guess rather than give an informed answer.
It may be desirable, however, to establish a minimum number or percentage of
completed items in judging a survey “complete.” You may not want respondents to
start the Web-administered survey and submit their final survey answers after completing
only a few items, particularly if you have promised an incentive of some type for
“completing” the survey. Program a certain number of responses or a percentage of the
total items as a minimum number to be completed, before allowing respondents to submit
their final answers (50% complete would be a good starting point, but you could set your
cutoff higher). If the number of completed items falls below your cutoff minimum when
respondents try to submit their data, have a message inform them that they must complete
at least “XX %” of the items to be eligible for the incentive. They can then choose to
“save and exit” (if you provide the option for respondents to reenter the survey) and
complete the minimum required number of items at a later time, or they can choose to go
ahead and “submit” their data with the knowledge that they will not be eligible for the incentive.
In either case, respondents should be given the option of submitting the incomplete data.
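The minimum-completion cutoff described above can be sketched as a simple check (the 42-item count is hypothetical; 50 percent is the suggested starting cutoff):

```python
def meets_cutoff(items_answered, total_items, cutoff=0.50):
    """Return True when enough items are answered to count the
    submission as 'complete' for incentive purposes."""
    return items_answered / total_items >= cutoff

# Hypothetical 42-item survey
print(meets_cutoff(30, 42))  # True  (about 71% answered)
print(meets_cutoff(10, 42))  # False (about 24% answered)
```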
Provide respondents with a means to assess their survey progress. Because it is
difficult to know the length of a Web-based survey, it is helpful for respondents to have
some type of indicator showing their overall progress in the survey, particularly for a
relatively short instrument like the Hospital Survey on Patient Safety Culture. For
example, there could be a graphical progress bar that indicates completion percentages at
various points, for example “Survey is 50% complete.” Other options include
programming the survey as one scrolling page, or allowing respondents to move forward
and backward through a multiple-page format at their convenience, so they may view the
entire length of the survey. If a multiple-page format is used, however, avoid using an
extreme one-question-per-page design.
Include username and/or password protection (Optional). Unless access is restricted
in some way, websites are accessible to the public. Your survey website can be restricted
through the use of a password that is common to all users or groups of users, or through
the use of individual usernames and/or passwords (which requires the use of confidential
identifiers to link individuals to usernames/passwords). While the survey may be
published to part of a restricted company or organization intranet, respondents will be
able to complete the survey more than once unless individual passwords and/or
usernames are established. Screening questions also can be developed to prevent
individuals from participating in the survey multiple times, in the event usernames and/or
passwords are not used. The use of usernames and/or passwords is best accomplished in
conjunction with e-mail survey notifications using hyperlinks to the survey website. This
enables respondents to easily copy and paste their username and/or password directly
from the e-mail. Linking individuals to usernames and/or passwords will complicate the
web development and administrative aspects of the project.
Allow respondents to interrupt their session, save their answers, and complete the
survey at a later time (Optional). Although it takes only 10 to 15 minutes to complete
the Hospital Survey on Patient Safety Culture, respondents may get interrupted in the
middle of the survey, and they will not want to redo parts of the survey they have
already completed. If they leave their internet browser open and the survey idle
until they can come back to it, they may get “timed out” of their internet
connection, and their responses will be lost. To encourage the respondent to complete the
survey at a later time, the stopping point in the survey must be bookmarked and the
completed items must be saved (e.g., to a database on the server). Provisions must be made in the
programming to allow an individual to re-use the same identification username and/or
password that was established at the initial login to again access the site at a later time for
the purpose of completing the survey. The “save and exit” feature should be accessible at
any point in the survey, but the “submit responses” option should be available only at the
end of the survey.
Allow respondents to print a hard-copy version of the survey and complete it on
paper (Optional). Some respondents will prefer to complete a paper version of the
survey, and providing this option may boost your response rate. It is possible to design
your Web-based survey so it can be printed in paper form, but this functionality must be
tested thoroughly to ensure that it prints properly on different printers. Attention must be
given to line lengths and page lengths in the design of the survey page template.
Moreover, instructions must be provided so the respondents will know where to return
the completed paper surveys, and designated personnel then must enter the responses into
your data set (paper survey data can be entered via the website).
Thoroughly pretest the survey (Essential and Mandatory). Conduct thorough pretests
of the survey using low-end computers with slower internet connections, with various
internet browsers (different iterations of Netscape and Internet Explorer), and with
different display settings (screen resolutions set at 800 x 600 pixels versus 1152 x 864
pixels), etc. This must be done to ensure the survey appears and performs as it should,
despite the different settings and personal preferences selected on individual computers.
Develop a Web-based Data Collection Plan
A Web-based survey data collection plan is very similar to a paper-based data collection plan
in its basic steps. Refer back to Chapters 4 and 5 to identify those elements central to your data
collection methods, and for those collection procedures common to Web-based and paper-based
surveys. Rather than reiterate all the necessary data collection steps in this section, we have
chosen to highlight various steps and identify strategies for conducting those steps that are
unique to Web-based surveys, while offering advice on the best approaches.
A Combination of Web- and Paper-based Survey Methods
If you desire to use a combination of Web-based and paper survey approaches, it is most
economical to first implement the Web-based survey. Later, you can distribute paper
surveys to those members of the sample group who did not respond to the Web-based
survey.
Prenotification
Prenotification is correspondence used to notify staff that they have been included in a
sample and are being asked to complete a Web-based survey. Prenotification letters can be sent
electronically, via e-mail, which requires an up-to-date list of the e-mail addresses for those
individuals in your sample group. Alternatively, printed letters can be distributed through
internal hospital mail on letterhead signed by the hospital CEO or president. The main criterion
in deciding which prenotification method to use is staff e-mail use (e.g., whether staff in your
hospital sample all have access to e-mail and read it regularly). If e-mail use is uneven, it is best
to distribute a hard copy prenotification letter through the internal hospital mail. Overall, we
recommend doing prenotification with a hard copy letter—even in conjunction with Web-based
survey data collection—because it is another tool for capturing the respondents’ attention. E-mail
is then used to direct the sample group to the survey instrument. The message should contain a
hyperlink to the website containing the survey form and individual usernames/passwords, if
applicable.
To further boost response rates, it is advisable to personalize the prenotification letters or e-
mails (i.e., addressed to each respondent, using their first and last name). If e-mail notification is
used, the name or e-mail address in the “From” line should be easily recognizable to staff to
prevent them from mistaking your e-mail for spam and deleting it. For example, you might use
the title and name of the hospital CEO, or another recognized staff executive, to ensure the e-
mail gets opened and read (FROM: “CEO Joe Smith, with Hospital X”).
Follow-up steps improve response rates for Web-based surveys in the same way they help
with paper surveys (Groves, 2002). It is important to follow up with nonrespondents in a timely
manner to ensure the data collection period does not drag on for too long.
If you have the means to conduct all contact steps via e-mail, time intervals between follow-
up steps can be reduced. Consider sending the first e-mail reminder one week after the survey
website link has been e-mailed (rather than using a two-week reminder, as is recommended with
a paper survey). Include the hyperlink to the survey website in each e-mail reminder, along with
the individual’s username and/or password, if applicable. Then send a second e-mail reminder,
one week after the first reminder. A third e-mail reminder can be sent the following week. Send
e-mail reminders only to those who have not responded, or to those who chose to “save and exit”
the survey, but have not returned to the website to complete the survey. Use a larger, colored font
to make the heading of the reminder e-mail more noticeable, and ensure the text of the first and
second reminder messages is slightly different, to capture the recipients’ attention. If you have
not used identifiers and have no way to determine which members of the sample group have
completed the survey, then e-mail reminders must be sent to everyone. It is important in such
cases to include a sentence thanking those who have already completed their surveys and asking
them to disregard the reminder.
We are recommending a combination of printed reminders and electronic reminders—even
for those with the capabilities to conduct all contact steps through e-mail—to ensure that at least
one of the messages reaches each respondent, since individuals respond differently to various
forms of communication. You may decide to send the first and second reminders via e-mail,
followed by a final reminder postcard to be distributed to nonrespondents. The final reminder
postcard could be printed on brightly colored card stock, thanking those who have responded for
their help and asking those who have not responded to please complete the survey in the next 7
days.
If all follow-up reminders are printed on paper and sent through internal hospital mail, more
distribution time will be needed between data collection steps. The follow-up steps for a Web-
based survey are the same as those associated with a paper survey (see Chapter 5: Establishing
Data Collection Procedures).
Chapter 7. Preparing and Analyzing Data, and Producing Reports of the Results
After closing out the data collection period, the collected survey data will need to be prepared
for analysis. As mentioned in Chapter 2, you may want to hire a vendor to conduct data entry,
data analysis, or to produce feedback reports for your hospital. If you elect to do your own data
entry, analysis, and report preparation, this chapter will guide you through the various decisions
and steps. If you choose to hire a vendor, use this chapter as a guide to establish data preparation
protocols. If you conduct a Web-based survey, data coding and cleaning will be minimized,
because the programming needed to make the survey form interactive and publish it to your
website will perform some of these steps for you.
You or your vendor will need to accomplish a number of tasks to prepare the survey data for
analysis. Several data files will need to be created during the data preparation process;
however, it is important to maintain the original data file that is created when survey responses are entered.
Any changes or corrections should be made to duplicate files, for two reasons:
Retaining the original file allows you to correct possible future errors made during the
data cleaning or recoding processes, and
The original file is important should you ever want to go back and determine what
changes were made to the data set or conduct other analyses or tests.
Identify Complete and Incomplete Surveys
Each survey needs to be examined for completeness, prior to entering the survey responses
into the data set. A complete survey is one in which all, or at least most, of the items have a
response. If a few items throughout a survey form have been left blank, or if one or two entire
sections of the survey have not been answered, you may still consider the survey to be
sufficiently complete to warrant its inclusion in the data set.
At a minimum, we recommend including only those surveys in which the respondents
complete at least one whole section of the survey. If a respondent has not answered most of the
items in at least one section of the survey, you will be missing relevant data on too many items.
This will become problematic when calculating the safety culture composite scores. Therefore,
we recommend using the following criteria to identify incomplete surveys and exclude them
from your data set.
Exclude the responses from a survey form if the respondent answered:
Less than one entire section of the survey.
Fewer than half of the items throughout the entire survey (in different sections).
Every item the same (e.g., all “4”s or all “5”s). If every answer is the same, the
respondent probably did not give the survey their full attention. The survey includes
reverse-worded items that use both the high/positive and low/negative ends of the
response scale, so a respondent who marks every item identically cannot be answering consistently.
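The exclusion rules above can be sketched in code. This is a hypothetical illustration: the item names, section groupings, and data layout (one dict per returned survey, with None for a blank item) are assumptions for the example, not part of the survey itself.

```python
# Hypothetical sketch of the three exclusion rules. Each returned survey is
# assumed to be a dict mapping item names to a response (1-5) or None if blank.
SECTIONS = {  # illustrative item groupings, not the real survey layout
    "A": ["A1", "A2", "A3", "A4"],
    "B": ["B1", "B2", "B3"],
}

def is_complete(survey):
    answered = [v for v in survey.values() if v is not None]
    # Rule: fewer than half of the items answered -> exclude
    if len(answered) < len(survey) / 2:
        return False
    # Rule: less than one entire section answered -> exclude
    if not any(all(survey.get(item) is not None for item in items)
               for items in SECTIONS.values()):
        return False
    # Rule: every answer identical (e.g., all "4"s) -> exclude
    if len(set(answered)) == 1:
        return False
    return True
```

A survey with one fully answered section, varied answers, and most items completed passes; an all-“4”s survey or a mostly blank one is excluded.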
Code and Enter the Data
Some problematic answers may need to be coded before the data is entered into an electronic
data file. Coding involves decision making with regard to the proper way to enter ambiguous
responses. Potential coding issues are described below. These coding steps will not be necessary
if you are using a Web-based platform or scannable forms.
Illegible, Mismarked, and Double-Marked Responses
Respondents may provide responses that cannot be read easily or, in some cases, their
intended answer may be difficult to determine. For example, a respondent may write in an
answer such as 3.5, when they have been instructed to circle only one numeric response. Or, they
may circle two answers for one item. Develop coding rules for these situations and apply them
consistently. Examples of coding rules are to mark all of these types of inappropriate responses
as missing, or to use the highest response when two responses are provided (e.g., a response with
both 2 and 3 would convert to a 3). Once surveys have been coded as necessary (most surveys
will not need to be coded), the data can be entered into an electronic file using statistical
software such as SAS® or SPSS®, a Microsoft Excel® spreadsheet, or a flat text file that can
be easily imported into a data analysis software package.
If identifiers (identification numbers or codes) were used on surveys, once you close out data
collection, destroy any information linking the identifiers to individual names, because you no
longer need this information and you want to eliminate the possibility of linking responses on the
electronic file to individuals. Once the linkage information is destroyed, you may enter the
identification number in the electronic data file. If no identifiers were used on the surveys or if
you wish to include a different identifier in the data file, create an identification number for each
survey and write it on the surveys in addition to entering it into the electronic data file. This
identifier can be as simple as numbering the returned surveys consecutively, beginning with the
number one. This number will enable you to go back and check the electronic data file against
the respondents’ original answers if there are values that look like they were entered incorrectly.
Open-Ended Comments
Respondents are given the opportunity to provide written comments at the end of the survey.
Comments can be used to obtain direct quotes for feedback purposes. If you wish to analyze
these data further, the responses will need to be coded according to the type of comment that was
made. For example, staff may respond with positive comments about patient safety efforts in
their unit. Or, they may comment on some negative aspects of patient safety that they think need
to be addressed. You may assign code numbers to similar types of comments and later tally the
frequency of each comment type. Open-ended comments may be coded either before or after the
data has been entered electronically.
Check and Electronically Clean the Data
Once the surveys have been coded as necessary and entered electronically, it is necessary to
check and clean the data file before you begin analyzing and reporting results. The data file may
contain errors. You can check and clean the data file electronically by producing frequencies of
responses to each item and looking for out-of-range values or values that are not valid responses.
Most items in the survey require a response between 1 and 5. Check through the data file to
ensure that all responses are within the valid range (e.g., that a response of “7” has not been
entered for a question requiring a response between 1 and 5). If out-of-range values are found,
return to the original survey form and determine the response that should have been entered.
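As a sketch of this check, the snippet below tallies the frequency of each entered value for one item and flags the rows holding out-of-range values for review against the paper forms. The record layout and item name are assumptions for illustration.

```python
# Minimal sketch of the electronic cleaning step described above.
from collections import Counter

VALID = {1, 2, 3, 4, 5}  # most survey items require a response between 1 and 5

def find_out_of_range(records, item):
    """records: list of dicts (one per survey); returns row indexes to recheck."""
    counts = Counter(r.get(item) for r in records if r.get(item) is not None)
    print(item, dict(counts))  # frequencies of responses, for eyeballing
    return [i for i, r in enumerate(records)
            if r.get(item) is not None and r[item] not in VALID]
```

Running this over each item in the data file surfaces mis-keyed values (such as a “7”) along with the row numbers needed to trace them back to the original surveys.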
Analyze the Data and Produce Reports of the Results
Feedback reports are the final step in a survey project and are critical for synthesizing the
collected information. Ideally, feedback should be provided broadly—to hospital management,
administrators, boards of directors, hospital committees, and to hospital staff, either through their
units or through a centralized communications tool such as e-mail or newsletters. The more
broadly the results are disseminated, the more useful the information is likely to become. The
feedback also will serve to legitimize the collective effort of the respondents and their
participation in the survey. It is gratifying and important for respondents to know that something
worthwhile came out of the information they provided. Different types of feedback reports can
be prepared for each different audience, from one- or two-page executive summaries to more
complete reports that use statistics to draw conclusions or make comparisons.
Frequencies of Response
One of the simplest ways to present results is to calculate the frequency of response for each
survey item. We developed a Microsoft PowerPoint® presentation to accompany this Survey
User’s Guide, with modifiable feedback report templates that you may use to communicate
results from the Hospital Survey on Patient Safety Culture. The feedback report template groups
survey items according to the safety culture dimension each item is intended to measure. You
can easily adapt the PowerPoint template by inserting your hospital’s survey findings in the
charts to create a customized feedback report. You can also customize the report to display unit-
level data, in addition to hospital-level data. To make the results easier to view in the report, the
two lowest response categories have been combined (Strongly Disagree/Disagree and
Never/Rarely) and the two highest response categories have been combined (Strongly
Agree/Agree and Most of the time/Always). The midpoints of the scales are reported as a
separate category (Neither or Sometimes). The percentage of answers in each of the three
response categories is then displayed graphically, as in the example below.
Sample Graph Displaying Frequencies of Response to an Item
Survey Item                            % Strongly Disagree/Disagree   % Neither   % Strongly Agree/Agree
“In this unit, people treat
each other with respect.”                           25                    25                 50
Because each survey item most likely will have some missing data, missing responses are
excluded from the total (or denominator) when calculating these percentages. In the example
shown, assume there were 200 total survey respondents. Twenty people did not answer this
particular item, however, so the total number of people who responded to the item was 180. The
percentage of respondents who Strongly Agreed/Agreed was 50 percent or 90/180. The
percentage of respondents who either Strongly Disagreed/Disagreed or responded “Neither” was
25 percent or 45/180. Excluding missing data from the total allows the percentages of responses
within a graph to sum to 100 (actually 99 to 101, due to the rounding of decimals to whole
numbers).
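The calculation above can be sketched as follows. Missing responses are simply absent from the counts, so they never enter the denominator; the counts below are the ones from the example (45, 45, and 90 of 180 answering).

```python
# Percentages of response with missing data excluded from the denominator.
def response_percentages(counts):
    """counts: number of answers per combined response category."""
    total = sum(counts.values())  # 180 here: 200 respondents minus 20 missing
    return {cat: round(100 * n / total) for cat, n in counts.items()}

pcts = response_percentages({
    "Strongly Disagree/Disagree": 45,
    "Neither": 45,
    "Strongly Agree/Agree": 90,
})
# matches the sample graph: 25 / 25 / 50 percent
```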
There are placeholder pages in the electronic feedback report template for highlighting your
hospital’s strengths and areas needing improvement with respect to the patient safety issues covered
in the survey. We define patient safety strengths as those positively worded items that about 75
percent of respondents endorsed by answering “Strongly Agree/Agree” or “Always/Most of the
time” (or those negatively worded items that about 75% of respondents disagreed with). The 75
percent cutoff is somewhat arbitrary, and your hospital may choose to report strengths using a
higher or lower cutoff percentage. Similarly, areas needing improvement are identified as those
items that 50 percent or fewer of the respondents answered positively (the rest either answered
negatively or “Neither” to positively worded items, or agreed with negatively worded
items). The cutoff percentage for areas needing improvement is lower, because if half of the
respondents are not expressing positive opinions with regard to a safety issue, there probably is
room for improvement.
It also is important to present frequency information about the background characteristics of
all the respondents as a whole—the units to which they belong, how long they have worked in
the hospital or their unit, their staff position, etc. This information helps others to better
understand whose opinions are being represented in the data. Be careful not to report frequencies
in small categories (e.g., the number of hospital presidents who responded), where it may be
possible to determine which employees fall into those categories.
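A minimal sketch of such a background tally with small-cell suppression is shown below. The minimum cell size of 5 is an arbitrary assumption for the example, not a figure from this guide; choose a threshold appropriate for your hospital.

```python
# Tally a background characteristic and mask categories with too few
# respondents, so individuals cannot be identified from the report.
from collections import Counter

def safe_counts(values, min_cell=5):
    counts = Counter(values)
    return {cat: (n if n >= min_cell else "<%d" % min_cell)
            for cat, n in counts.items()}

positions = ["RN"] * 40 + ["Pharmacist"] * 12 + ["President"]
print(safe_counts(positions))  # the lone president is masked, not reported
```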
Composite Frequencies of Response
The survey items can be grouped into dimensions of safety culture, and so it can be useful to
calculate one overall frequency for each dimension. One way of doing this is to create a
composite frequency of the total percentage of positive responses for each safety culture
dimension. Composites can be computed for individual units or sections of a hospital, or for the
hospital as a whole. For example, a composite frequency of 50 percent on Overall Perceptions of
Safety would indicate that 50 percent of the responses reflected positive opinions regarding the
overall safety in the unit or hospital.
To create an overall composite frequency on a safety culture dimension:
Step 1. Determine which items are related to the dimension in which you are interested,
and which of those items are reverse worded (negatively worded). Items are
grouped by dimension in Appendix B, which also identifies the items that are
reverse worded. There are three or four items per dimension.
Step 2. Count the number of positive responses to each item in the dimension—“Strongly
Agree/Agree” or “Most of the time/Always” are positive responses for positively
worded items. For reverse worded items, disagreement indicates a positive
response, so count the number of “Strongly Disagree/Disagree” or
“Never/Rarely” responses.
Step 3. Count the total number of responses for the items in the dimension (this excludes
missing responses).
Step 4. Divide the number of positive responses to the items (answer from Step 2) by the
total number of responses (answer from Step 3):

Composite % = Number of positive responses to the items in the dimension ÷
Total number of responses to the items (positive, neutral, and negative) in the dimension
The resulting number is the percentage of positive responses for that particular dimension.
Here is an example of computing a composite frequency percentage for the Overall Perceptions
of Safety dimension:
There are four items in this dimension—two are positively worded (A15 and A18), and
two are negatively worded (A10 and A17). Keep in mind that disagreeing with the
negatively worded items indicates a positive perception of safety.
To count the total number of positive responses, complete Table 2:
Table 2. Example of composite frequency matrix

Item A15 (positively worded), “Patient safety is never sacrificed to get more work done.”:
120 “Strongly Agree”/“Agree” (positive) responses out of 260 total responses.

Item A18 (positively worded), “Our procedures and systems are good at preventing errors from happening.”:
130 “Strongly Agree”/“Agree” (positive) responses out of 250 total responses.

Item A10 (reverse worded), “It is just by chance that more serious mistakes don’t happen around here.”:
110 “Strongly Disagree”/“Disagree” (positive) responses out of 240 total responses.

Item A17 (reverse worded), “We have patient safety problems in this unit.”:
140 “Strongly Disagree”/“Disagree” (positive) responses out of 250 total responses.

TOTALS: 500 positive responses out of 1,000 total responses (excluding missing responses).
The composite frequency percentage is calculated by dividing the total number of positive
responses on all four questions (numerator) by the total number of responses to all four questions
excluding missing responses (denominator). There were 500 positive responses, divided by 1,000
total responses, which results in a composite of 50 percent positive responses for Overall
Perceptions of Safety.
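The Table 2 computation can be sketched as follows, using the counts from the example. Per Step 2, the positive counts for the reverse worded items are already counts of disagreement.

```python
# Composite frequency for Overall Perceptions of Safety, from Table 2.
items = {                 # item: (positive responses, total responses)
    "A15": (120, 260),    # positively worded
    "A18": (130, 250),    # positively worded
    "A10r": (110, 240),   # reverse worded: positives are "Disagree" counts
    "A17r": (140, 250),   # reverse worded: positives are "Disagree" counts
}
positive = sum(p for p, _ in items.values())   # 500
total = sum(t for _, t in items.values())      # 1,000 (missing excluded)
composite_pct = 100 * positive / total         # 50.0 percent positive
```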
While there are many other ways to analyze survey data, we have presented only basic
options here. If you are working with an outside vendor, the vendor may suggest additional
analyses that you may find useful.
References
Dillman DA. Mail and internet surveys. New York, NY: John Wiley Company; 2000.
Fink A, Kosecoff J. How to conduct surveys: a step-by-step guide. Thousand Oaks, CA: Sage.
Groves RM. Survey nonresponse. New York, NY: Wiley; 2002.
Salant P, Dillman DA. How to conduct your own survey. New York, NY: John Wiley Company.
Details on the development, pilot testing, and psychometric properties of the Hospital Survey
on Patient Safety Culture are contained in the following technical report:
Sorra, JS and Nieva, VF. Psychometric analysis of the Hospital Survey on Patient Safety.
(Prepared by Westat, under contract to BearingPoint, and delivered to the Agency for Healthcare
Research and Quality [AHRQ], under Contract No. 290-96-0004.)
The Hospital Survey form and the complete set of Survey Feedback Report
templates are available as a free, downloadable Microsoft PowerPoint®
presentation, at www.ahrq.gov/qual/hospculture/
Safety Culture Dimensions and Reliabilities
I. Background Variables
A. What is your primary work area or unit in this hospital?
H1. How long have you worked in this hospital?
H2. How long have you worked in your current hospital work area/unit?
H3. Typically, how many hours per week do you work in this hospital?
H4. What is your staff position in this hospital?
H5. In your staff position, do you typically have direct interaction or contact with
patients?
H6. How long have you worked in your current specialty or profession?
II. Outcome Measures
A. Frequency of Event Reporting
D1. When a mistake is made, but is caught and corrected before affecting the patient,
how often is this reported?
D2. When a mistake is made, but has no potential to harm the patient, how often is this
reported?
D3. When a mistake is made that could harm the patient, but does not, how often is this
reported?
Reliability of this dimension—Cronbach’s alpha (3 items) = .84
B. Overall Perceptions of Safety
A15. Patient safety is never sacrificed to get more work done.
A18. Our procedures and systems are good at preventing errors from happening.
A10r. It is just by chance that more serious mistakes don’t happen around here. (reverse worded)
A17r. We have patient safety problems in this unit. (reverse worded)
Reliability of this dimension—Cronbach’s alpha (4 items) = .74
C. Patient Safety Grade
E1. Please give your work area/unit in this hospital an overall grade on patient
safety.
Single-item measure—grades A through E as response categories.
D. Number of Events Reported
G1. In the past 12 months, how many event reports have you filled out and
submitted?
Single-item measure—numeric response categories.
III. Safety Culture Dimensions (Unit level)
A. Supervisor/manager expectations & actions promoting safety1
B1. My supervisor/manager says a good word when he/she sees a job done
according to established patient safety procedures.
B2. My supervisor/manager seriously considers staff suggestions for improving
patient safety.
B3r. Whenever pressure builds up, my supervisor/manager wants us to work faster,
even if it means taking shortcuts. (reverse worded)
B4r. My supervisor/manager overlooks patient safety problems that happen over
and over. (reverse worded)
Reliability of this dimension—Cronbach’s alpha (4 items) = .75
B. Organizational Learning—Continuous improvement
A6. We are actively doing things to improve patient safety.
A9. Mistakes have led to positive changes here.
A13. After we make changes to improve patient safety, we evaluate their
effectiveness.
Reliability of this dimension—Cronbach’s alpha (3 items) = .76
C. Teamwork Within Hospital Units
A1. People support one another in this unit.
A3. When a lot of work needs to be done quickly, we work together as a team to
get the work done.
A4. In this unit, people treat each other with respect.
A11. When one area in this unit gets really busy, others help out.
Reliability of this dimension—Cronbach’s alpha (4 items) = .83
D. Communication Openness
C2. Staff will freely speak up if they see something that may negatively affect
patient care.
C4. Staff feel free to question the decisions or actions of those with more
authority.
C6r. Staff are afraid to ask questions when something does not seem right. (reverse
worded)
Reliability of this dimension—Cronbach’s alpha (3 items) = .72
1 Adapted from Zohar D. (2000). A group-level model of safety climate: Testing the effect of group climate on microaccidents in
manufacturing jobs. Journal of Applied Psychology, 85(4), 587-596.
E. Feedback and Communication About Error
C1. We are given feedback about changes put into place based on event reports.
C3. We are informed about errors that happen in this unit.
C5. In this unit, we discuss ways to prevent errors from happening again.
Reliability of this dimension—Cronbach’s alpha (3 items) = .78
F. Nonpunitive Response To Error
A8r. Staff feel like their mistakes are held against them. (reverse worded)
A12r. When an event is reported, it feels like the person is being written up, not the
problem. (reverse worded)
A16r. Staff worry that mistakes they make are kept in their personnel file. (reverse
worded)
Reliability of this dimension—Cronbach’s alpha (3 items) = .79
G. Staffing
A2. We have enough staff to handle the workload.
A5r. Staff in this unit work longer hours than is best for patient care. (reverse
worded)
A7r. We use more agency/temporary staff than is best for patient care. (reverse
worded)
A14r. We work in “crisis mode,” trying to do too much, too quickly. (reverse
worded)
Reliability of this dimension—Cronbach’s alpha (4 items) = .63
H. Hospital Management Support for Patient Safety
F1. Hospital management provides a work climate that promotes patient safety.
F8. The actions of hospital management show that patient safety is a top priority.
F9r. Hospital management seems interested in patient safety only after an adverse
event happens. (reverse worded)
Reliability of this dimension—Cronbach’s alpha (3 items) = .83
IV. Safety Culture Dimensions (Hospital-wide)
A. Teamwork Across Hospital Units
F4. There is good cooperation among hospital units that need to work together.
F10. Hospital units work well together to provide the best care for patients.
F2r. Hospital units do not coordinate well with each other. (reverse worded)
F6r. It is often unpleasant to work with staff from other hospital units. (reverse
worded)
Reliability of this dimension—Cronbach’s alpha (4 items) = .80
B. Hospital Handoffs & Transitions
F3r. Things “fall between the cracks” when transferring patients from one unit to
another. (reverse worded)
F5r. Important patient care information is often lost during shift changes. (reverse
worded)
F7r. Problems often occur in the exchange of information across hospital units.
(reverse worded)
F11r. Shift changes are problematic for patients in this hospital. (reverse worded)
Reliability of this dimension—Cronbach’s alpha (4 items) = .80
Sample Page from Survey Feedback Report Templates
The complete set of Survey Feedback Report templates and the Hospital Survey
form are available free of charge, as a downloadable Microsoft PowerPoint®
presentation, at www.ahrq.gov/qual/hospculture/
Appendix A. Pilot Study for the
Hospital Survey on Patient Safety Culture:
A Summary of Reliability and Validity Findings
Agency for Healthcare Research and Quality
U.S. Department of Health and Human Services
540 Gaither Road
Rockville, MD 20850
Contract No. 290-96-0004
Westat, Rockville, MD
Joann Sorra, Ph.D.
Veronica Nieva, Ph.D.
This survey development effort was sponsored by the Medical Errors Workgroup of the Quality
Interagency Coordination Task Force (QuIC), and was funded by the Agency for Healthcare Research
and Quality (AHRQ contract no. 290-96-0004). Westat conducted this work under a subcontract with
BearingPoint. The authors wish to thank Matthew Mishkind, Ph.D., a former Westat staff member, who
contributed to the development of the pilot instrument and conducted cognitive testing; Rose Windle for
survey administration; and Theresa Famolaro for assisting with data cleaning and analysis. We are
grateful to Dorothy B. “Vi” Naylor, MN, of the Georgia Hospital Association; and Tracy Scott, Ph.D.,
and Linda Schuessler, MS, of the Emory Center on Health Outcomes and Quality, Rollins School of
Public Health, for sharing part of the data they collected in 10 Georgia hospitals using the pilot survey so
we could include their data in this psychometric analysis. We also wish to thank a Risk Manager at a
Veterans Health Administration (VHA) Hospital for administering the pilot survey to staff at a VHA
hospital and sharing the data with Westat. In addition, we thank Eric Campbell, Ph.D., Barrett Kitch,
M.D., M.P.H., and Minah Kim, Ph.D., of the Institute for Health Policy at Massachusetts General
Hospital in Boston for their suggestions to improve the pilot survey and for recruiting four hospitals to
participate in the pilot. Finally, we wish to thank our AHRQ project officer, James Battles, Ph.D., for his
guidance and assistance.
Introduction and Background
Sponsored by the Medical Errors Workgroup of the Quality Interagency Coordination Task
Force (QuIC) and funded by the Agency for Healthcare Research and Quality (AHRQ contract
no. 290-96-0004), this summary describes the development of the Hospital Survey on Patient
Safety Culture and presents the results of a psychometric analysis designed to determine the
reliability and validity of the survey. The goal of this project was to develop a reliable, public-
use safety culture instrument that hospitals could administer on their own to assess patient safety
culture from the perspective of their employees and staff.
This summary presents survey pilot data gathered from 1,437 hospital staff in 21 United
States hospitals. The goal of the psychometric analysis was a concise and refined survey
instrument, based on an earlier draft instrument and revised through the identification of
conceptually meaningful, independent, and reliable safety culture dimensions, with three to five
survey items measuring each dimension. The psychometric analysis consisted of a number of
analytic techniques, including: item analysis, content analysis, exploratory and confirmatory
factor analyses, reliability analysis, composite score construction, correlational analysis, and
analysis of variance.
The researchers conducted a number of preliminary activities to inform the development of
the Hospital Survey on Patient Safety Culture. First, a review of the literature was conducted in
areas related to safety management and accidents in the nuclear and manufacturing industries,
employee health and safety, organizational climate and culture, safety climate and culture, and
medical error and event reporting. The researchers also gathered examples of existing safety
climate and culture instruments, including published and unpublished instruments and those
available on the Internet.
Psychometric analyses also were conducted on two existing health care safety culture
surveys: one developed and administered by Westat for the Medical Event Reporting System for
Transfusion Medicine (MERS-TM) and another developed and administered by the Veterans
Health Administration (VHA). The 100-item MERS-TM safety culture survey data set consisted
of 945 staff from 53 hospital transfusion services across the United States and Canada. The 120-
item VHA Patient Safety Questionnaire (FY 2000) data set consisted of 6,161 staff from 160
VHA hospitals nationwide. The data sets were analyzed independently, and the psychometric
analyses were written as technical reports delivered to AHRQ (Burr, Sorra, Nieva & Famolaro,
2002; Sorra & Nieva, 2002). The results from these technical reports had a significant influence
on the safety culture dimensions and types of items that were included in the pilot version of the
Hospital Survey on Patient Safety Culture.
Key dimensions of hospital safety culture were identified for inclusion in the survey, based
on the literature review, examination of existing published and unpublished safety culture
instruments, and the psychometric analyses from the MERS-TM and VHA safety culture
surveys. Items then were developed to measure those dimensions. The items were written with
the goal of obtaining a staff-level perspective on patient safety in hospital settings. Respondents
were asked to think about their own units because they would know the culture of their unit
better than the hospital as a whole. The investigators, however, did include a short section at the
end of the survey that focused specifically on hospital-wide safety issues.
Cognitive Testing and External Review of the Survey
Cognitive testing is a developmental procedure in which individuals similar to the targeted
respondents are asked to complete a questionnaire and provide comments or “think aloud” while
answering the questions. Frequently, the interviewer asks respondents questions as they work
through the questionnaire to assess their comprehension and interpretation of the terms and items,
to determine how they arrive at their answers, and to identify problems with the items or
instructions. Cognitive
interviews were conducted by telephone with diverse hospital staff, including a nurse manager,
risk manager, department clerk, dietician, food services employee, respiratory therapist,
pharmacist, and pathologist, as well as nurses, residents and physicians from different U.S.
hospitals. The investigators also solicited reviews of the draft instrument from other researchers
familiar with safety culture measurement, along with input from a hospital system administrator,
a group of physicians, and the Joint Commission on Accreditation of Healthcare Organizations
(JCAHO). Changes were made to the survey dimensions and items following cognitive testing
and the external survey review, resulting in a revised pilot survey comprising 79 items
measuring 14 dimensions of safety culture.
Draft Pilot Survey
The draft pilot survey contained items that, for the most part, used 5-point Likert response
scales of agreement (“Strongly disagree” to “Strongly agree”) or frequency (“Never” to
“Always”). The items in the draft pilot survey included two single-item outcome measures used
as validity checks and 14 multiple-item dimensions or scales of patient safety—two overall
patient safety outcome scales designed to assess validity and 12 safety culture dimensions.
The pilot survey administration sample included 21 hospitals across six U.S. states. The
investigators collected their own data in 10 hospitals. Additional data from one Veterans Health
Administration (VHA) hospital and 10 Georgia hospitals were forwarded to the researchers by
the VHA and the Emory Center on Health Outcomes and Quality, in close cooperation with the
Georgia Hospital Association. The sample of hospitals was selected to vary by geographic
region, teaching status, and hospital size (Table 1), to ensure that the pilot survey administration
contained a diverse sample. In addition, two facilities were for-profit hospitals, one facility was a
veterans hospital, and one was a geriatric hospital.
Table 1. Teaching status and bed size of the 21 pilot hospitals
                                   Number of Beds
Hospital Type      Small               Medium              Large
                   (< 300 beds)        (301-500 beds)      (> 500 beds)
Teaching                5                   3                   6
Nonteaching             5                   1                   1
For the 10 hospitals in which the investigators collected data, packets were delivered
containing a cover letter, the survey, a postage-paid envelope for returning completed surveys
directly to the investigators, and a reply postcard. Contact persons at each hospital distributed the
survey packets through the internal hospital mail system (with the exception of one hospital in
which the surveys were mailed to employees’ homes). The surveys were mailed to the homes of
hospital employees included in the sample for the remaining 11 hospitals.
Data collection involved the following distribution steps to maximize response rates: a first
survey, first reminder postcard, second survey, and a second reminder postcard. For six hospitals,
a prenotification letter was sent on hospital letterhead, signed by the hospital president, COO,
CEO, or equivalent.
Sample and Response Statistics
Criteria for sample selection varied somewhat from one hospital group to another. Six
hospitals each selected a sample of about 100 staff, and purposive sampling was used (rather
than random sampling) to ensure that an adequate variety of job classifications and hospital units
would be represented. The selected hospital staff included those with direct patient contact, as
well as those without patient contact. The researchers also recommended the inclusion of only
those physicians who spend the majority of their work time in the hospital (e.g., emergency
department physicians, radiologists, hospitalists, pathologists, etc.).
Only nurses and pharmacists were selected in four other hospitals, and these staff were
randomly chosen. All staff were included in another hospital (a census). Staff in another group of
10 hospitals were selected from four specific departments—general medicine, general surgery,
intensive/critical care, and ancillary services. A random sample of 100 staff from each unit was
selected. For smaller hospitals in this group, all staff from these departments were selected (and
may not have reached 100 staff per department).
A total of 4,983 surveys were administered across the 21 hospitals, with 1,437 responses
received at the time the data set was compiled. This resulted in a 29% overall response rate.
Response statistics are summarized below.
Distribution through internal hospital mail systems (11 hospitals):
45% response rate (711 responses out of 1,575 surveys)
Note: One site in this group mailed the surveys to the employees’ home addresses.
Distribution to employees’ homes through the U.S. Postal Service (10 hospitals):
21% response rate (726 responses out of 3,408 surveys)
Average response rate within each hospital: 37%
Average number of respondents per hospital: 68
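The response-rate figures above follow directly from the reported counts. A minimal sketch of the arithmetic:

```python
# Reproduce the response-rate figures reported above.
# The counts are taken directly from the pilot data summary.

def response_rate(responses, distributed):
    """Return the response rate as a whole-number percentage."""
    return round(100 * responses / distributed)

internal_mail = response_rate(711, 1575)            # 11 hospitals, internal mail
postal = response_rate(726, 3408)                   # 10 hospitals, U.S. Postal Service
overall = response_rate(711 + 726, 1575 + 3408)     # all 21 hospitals

print(internal_mail, postal, overall)  # 45 21 29
```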
In anticipation of confidentiality concerns and the privacy of each individual’s responses, the
survey included few demographic questions. Most respondents were female (81%) and most
(84%) typically had direct interaction or contact with patients. The average age of the
respondents was 43 years old. They had worked an average of 10 years in their hospital, and the
average tenure in their specific hospital unit or work area was 7 years. The largest percentage of
respondents worked in intensive care units (18%), followed by surgery (15%), other (14%), and
medicine (nonsurgical) (12%).
Analyses and Results
Several analyses were conducted on the responses to the items in the Hospital Survey on
Patient Safety Culture. The goal of the combined analytic efforts was a shorter, revised survey
instrument, based on conceptually meaningful, independent, and reliable safety culture
dimensions, with three to five items measuring each dimension. Individual item analysis first
was conducted, in an effort to identify and eliminate those items that were highly skewed or had
high amounts of missing data.
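The item-screening step can be illustrated with a small sketch. The thresholds and responses below are hypothetical, for illustration only, not the criteria the investigators actually applied:

```python
# Sketch of the item-screening step: flag items with heavy missingness
# or strongly skewed responses. Thresholds and data are hypothetical.

def screen_item(responses, max_missing=0.10, max_modal=0.85):
    """responses: list of 1-5 answers, with None for missing.

    Flags an item if more than max_missing of responses are missing, or
    if more than max_modal of the answered responses fall into a single
    category (a rough proxy for extreme skew).
    """
    answered = [r for r in responses if r is not None]
    missing_rate = 1 - len(answered) / len(responses)
    modal_share = max(answered.count(v) for v in set(answered)) / len(answered)
    return missing_rate > max_missing or modal_share > max_modal

ok_item = [4, 5, 3, 4, 2, 5, 3, 4, 2, 4]
skewed_item = [5, 5, 5, 5, 5, 5, 5, 5, 5, 4]
print(screen_item(ok_item), screen_item(skewed_item))  # False True
```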
Exploratory and Confirmatory Factor Analyses
Since it is possible that safety culture could simply be a single, unidimensional concept, an
exploratory factor analysis was conducted initially to explore the dimensionality of the survey
data. Principal components extraction was used, along with varimax rotation, to maximize the
independence of the factors. The exploratory factor analysis results confirmed the existence of
multiple factors or dimensions and provided evidence that suggested many of the a priori item
groupings did, in fact, fall into distinct factors. The analysis results revealed 14 factors with
eigenvalues greater than or equal to 1.0. The total variance explained by the 14 components or
factors is 64.5 percent, with almost all items loading highly on only one factor (with a factor
loading greater than or equal to .40).
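The eigenvalue-at-least-1.0 retention rule (the Kaiser criterion) and the variance-explained figure can be sketched in a few lines. The eigenvalues below are hypothetical, not the pilot results:

```python
# Sketch of the eigenvalue >= 1.0 retention rule used in the
# exploratory factor analysis. The eigenvalues are hypothetical.

def kaiser_retain(eigenvalues):
    """Return retained eigenvalues and the % of total variance they explain.

    In a principal components analysis of standardized items, the total
    variance equals the sum of all eigenvalues.
    """
    retained = [e for e in eigenvalues if e >= 1.0]
    pct_variance = 100 * sum(retained) / sum(eigenvalues)
    return retained, pct_variance

eigs = [4.2, 2.1, 1.5, 1.1, 0.6, 0.5]   # hypothetical eigenvalues
retained, pct = kaiser_retain(eigs)
print(len(retained), round(pct, 1))      # 4 89.0
```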
To further examine the dimensionality of the survey, and taking into consideration the a
priori safety culture dimensions, a confirmatory factor analysis (CFA) then was performed. CFA
is used when an a priori factor structure is posited, because CFA tests the fit of a model that
proposes a specific number of factors and specifies the items that measure or load onto each of
the factors. Since the Hospital Survey on Patient Safety Culture was developed by first
identifying safety culture dimensions and then creating items to measure those dimensions, an a
priori factor structure was posited and a CFA was conducted to determine how well the posited
structure conforms to the data. An initial confirmatory factor model then was created based on
the exploratory factor analysis and a content analysis of the safety culture dimensions and items.
The CFA work was done using the SAS Institute’s software for calculating covariance analysis
of linear structural equations (CALIS), in conjunction with the maximum likelihood method of
estimation.
After analyzing several confirmatory factor models (dropping problematic items at each step),
the investigators arrived at a final confirmatory factor model with
a good fit to the data. This was verified by a number of different model fit indices. The final
confirmatory factor model features 12 dimensions—two outcome dimensions and 10 safety
culture dimensions—with three or four items measuring each dimension, for a total of 42 items.
Overall model fit indices were examined closely. These model fit statistics—the comparative
fit index (CFI), the goodness-of-fit index (GFI), the adjusted GFI (AGFI), the normed fit
index (NFI), and the nonnormed fit index (NNFI)—each met the criterion for good conformance,
with indices at .90 or above. The closer each of these indices is to 1.00, the better the fit of the
model to the data. The root-mean-square error of approximation (RMSEA), a measure of the
discrepancy per degree of freedom for the model or the degree of unexplained variance, was .04.
An RMSEA of .05 or lower indicates a good model fit because the closer it is to zero, the better
the fit of the model to the data.
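The fit criteria described above amount to a simple threshold check: incremental indices at .90 or higher and RMSEA at .05 or lower. The index values below are illustrative, not the model's actual output:

```python
# Threshold check for the model-fit criteria described above:
# incremental indices should be >= .90 and RMSEA should be <= .05.
# The index values here are illustrative only.

def good_fit(indices, rmsea):
    """Return True if all incremental indices meet .90 and RMSEA meets .05."""
    return all(v >= 0.90 for v in indices.values()) and rmsea <= 0.05

fit = {"CFI": 0.94, "GFI": 0.92, "AGFI": 0.91, "NFI": 0.90, "NNFI": 0.93}
print(good_fit(fit, rmsea=0.04))  # True
```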
Internal consistency reliabilities were examined for each of the 12 final safety culture
dimensions identified in the confirmatory factor model. Since items were worded in both positive
and negative directions, negatively worded items first were reverse coded so that a higher score
would indicate a more positive response in all cases. Each of the 12 safety culture dimensions
that make up the survey was found to have an acceptable reliability (defined as a Cronbach’s
alpha greater than or equal to .60), with reliability coefficients ranging from .63 to .84.
Validity Analysis: Composite Scores and Intercorrelations
Composite scores were created for the 12 safety culture dimensions by obtaining the mean of
the responses to items in each dimension (after any necessary reverse coding). A composite score
was calculated for each respondent, relative to each of the 12 safety culture dimensions. Since all
the items used 5-point response scales, composite scores ranged from 1.0 to 5.0 (scored so that 1
= a low score and 5 = a high score). After calculating the composite scores, the safety culture
dimensions then were correlated with one another.
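Composite scoring can be sketched as follows: each respondent's composite for a dimension is the mean of that dimension's items after any reverse coding. The answers below are hypothetical:

```python
# Sketch of composite scoring: a respondent's composite for a dimension
# is the mean of that dimension's items after any reverse coding.
# The responses below are hypothetical.

def composite_score(responses):
    """Mean of a respondent's item responses (1-5 scale) for one dimension."""
    return sum(responses) / len(responses)

# One respondent's answers to a 4-item dimension; the last item is
# negatively worded, so it is reverse coded (6 - x) before averaging.
answers = [4, 5, 4]
negatively_worded = 2
answers.append(6 - negatively_worded)
print(composite_score(answers))  # 4.25
```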
The construct validity of each safety culture dimension would be reflected in composite
scores moderately related to one another, indicated by correlations between .20 to .40.
Correlations of less than .20 would indicate that two safety culture dimensions were related
weakly. Exceptionally high correlations (.85 or above) would likely indicate that the dimensions
measure essentially the same concept, and these dimensions possibly could be combined and
some items eliminated. Correlations between the safety culture composites or scales ranged from
.23 (between Nonpunitive Response to Error and Staffing or Frequency of Event Reporting) to
.60 (between Hospital Management Support for Patient Safety and Overall Perceptions of
Safety). These intercorrelations all fall within the expected moderate to high range. That none
were exceptionally high indicates that no two safety culture dimensions appeared to measure the
same concept.
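The dimension intercorrelations are ordinary Pearson correlations between composite scores. A pure-Python sketch, with made-up composites for five respondents:

```python
# Pearson correlation between two sets of composite scores, as used for
# the dimension intercorrelations. The composites here are made up.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical composites for two dimensions across five respondents.
overall_perceptions = [3.8, 4.2, 2.9, 4.5, 3.1]
mgmt_support = [3.5, 4.4, 3.0, 4.1, 2.8]
print(round(pearson_r(overall_perceptions, mgmt_support), 2))  # 0.92
```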
Correlations were calculated for the 12 safety culture dimensions and the four outcome
variables (the two outcome dimensions—Overall Perceptions of Safety and Frequency of Event
Reporting—and the two single-item measures—Patient Safety Grade and Number of Events
Reported). The highest intercorrelation was .66
(p < .001), calculated for the outcome measures of Overall Perceptions of Safety and Patient
Safety Grade. This high correlation provides evidence of the Overall Perceptions scale’s validity,
in that it has a strong relation to the respondents’ single-item assessment of their unit’s grade on
patient safety (A = Excellent, B = Very Good, C = Acceptable, D = Poor, and E = Failing). The
second highest intercorrelation was between Overall Perceptions of Safety and Hospital
Management Support for Patient Safety (r = .60, p < .001). This finding points to the important
role that hospital management plays in the advancement of patient safety issues. Staff gave their
units higher patient safety marks when they felt that hospital management actively supported
patient safety.
The highest correlation associated with the Frequency of Event Reporting dimension was
with Feedback and Communication About Error (r = .48, p < .001). Surprisingly, Nonpunitive
Response to Error had the lowest relationship with the Frequency of Event Reporting (r = .23, p
< .001). Hospital staff indicated that events are reported more frequently when there is an open
line of communication involving errors, and when they are given feedback regarding changes
implemented as a result of event reports. These correlations suggest that increased event
reporting is more likely to be achieved through the advancement of communication and
feedback than through the creation of a nonpunitive culture.
Finally, all but two of the correlations between the Number of Events Reported within the
last year and the safety culture dimensions were nonsignificant and very low—almost zero in
most cases. One explanation for the lack of relationships with this one-item outcome variable is
that more than half of all respondents reported no events in the last 12 months. Forty-five percent
reported 10 or fewer events. The lack of variability and the highly skewed nature of the reported
event numbers resulted in an absence of linear relationships with the other safety culture
dimensions. For now, the best use for this one-item measure of reported events is as a change
indicator, to see if staff report more events over time.
Analysis of Variance: Differences Across Hospitals
One final analysis—a one-way analysis of variance (ANOVA)—was conducted on each of
the 12 safety culture dimensions, and on the two single-item outcome measures (Number of
Events Reported and Patient Safety Grade), to determine the extent to which composite scores on
these safety culture scales are differentiated across hospitals. An ANOVA by hospitals examines
whether there is greater response variability on the safety culture dimensions between hospitals
compared to within hospitals. In other words, it generally addresses the issue of whether
hospitals differ on each of the safety culture dimensions. The ANOVAs on all 12
composites were statistically significant, supporting the hypothesis that hospitals have
differentiated scores on each dimension—that different hospitals have different composite
scores on the safety culture outcome variables and dimensions. Since hospitals have different
actual levels of patient safety, some should score high and some should score low on the safety
culture dimensions—which is what the results indicate and what good scales would reflect.
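The between- versus within-hospital comparison is a standard one-way ANOVA. A minimal sketch of the F statistic, using three hypothetical hospital groups of composite scores:

```python
# Minimal one-way ANOVA sketch: F = MS_between / MS_within, comparing
# variability in composite scores between hospitals to variability
# within hospitals. The three hospital groups are hypothetical.

def one_way_f(groups):
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

hospital_a = [4.1, 4.3, 4.0, 4.2]   # high-scoring hospital
hospital_b = [3.1, 3.3, 3.0, 3.2]
hospital_c = [2.1, 2.3, 2.0, 2.2]   # low-scoring hospital
print(round(one_way_f([hospital_a, hospital_b, hospital_c]), 1))  # 240.0
```

A large F indicates that hospitals differ far more from one another than staff within the same hospital differ, which is the pattern the report describes.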
Summary
Westat was tasked with developing an employee survey to assess the culture of patient safety
in hospital settings. The development of the survey was based on a literature review, examination
of existing published and unpublished safety culture instruments, and psychometric analyses
conducted on two existing safety culture surveys.
The draft survey was piloted in 21 hospitals, and the pilot data were analyzed to refine the
instrument and determine its psychometric properties. In the process of refining the instrument,
26 of the originally piloted items were dropped. Based on the psychometric analyses, the final
Hospital Survey on Patient Safety Culture includes 12 dimensions and 42 items, plus additional
background questions. All of the psychometric analyses—from the CFA results and reliabilities
to the intercorrelations among the dimensions and the analysis of variance results—provide solid
evidence supporting the final dimensions and items that were retained.
All dimensions were shown to have acceptable levels of reliability (defined as Cronbach’s
alpha equal to or greater than .60). The safety culture dimensions included in the final survey are
shown below (reliabilities are in parentheses):
Two outcome dimensions (multiple item scales):
1. Overall perceptions of safety (.74)
2. Frequency of event reporting (.84)
Ten safety culture dimensions (multiple item scales):
1. Supervisor/manager expectations and actions promoting patient safety (.75)
2. Organizational learning—Continuous improvement (.76)
3. Teamwork within units (.83)
4. Communication openness (.72)
5. Feedback and communication about error (.78)
6. Nonpunitive response to error (.79)
7. Staffing (.63)
8. Hospital management support for patient safety (.83)
9. Teamwork across hospital units (.80)
10. Hospital handoffs and transitions (.80)
References
Burr M, Sorra J, Nieva VF, et al. Analysis of the Veterans Administration (VA) National Center
for Patient Safety (NCPS) FY 2000 Patient Safety Questionnaire. Technical report. Westat:
Rockville, MD; 2002.
McKnight S, Lee C. Patient safety attitudes. Paper presented at the Summit on Effective
Practices to Improve Patient Safety, Washington, DC; September 5-7, 2001.
Nieva VF, Sorra J. Safety culture assessment: A tool for improving patient safety in health care
organizations. Qual Saf Health Care 2003;12(Suppl 2):ii17-ii23.
Sorra J, Nieva VF. Psychometric analysis of the MERS-TM Hospital Transfusion Service Safety
Culture Survey. Technical report. Westat: Rockville, MD; 2002.
Sorra JS, Nieva VF, Schreiber G, et al. MERS-TM Hospital Transfusion Service Safety Culture
Survey. Unpublished survey developed by Westat under contract to Columbia University,
supported by a grant from the National Heart, Lung, and Blood Institute (NHLBI # R01
Appendix B. Safety Culture Assessment: A Tool for
Improving Patient Safety in Healthcare Organizations
Reprinted with the permission of BMJ Publishing Group, London (UK) from:
Quality and Safety in Health Care 2003;12(Suppl II):ii17-ii23
The text of this article also is available in electronic form at: