
Telephone and Mail Surveys: Advantages and Disadvantages of Each

Dan Zahs and Reg Baker
Market Strategies, Inc.

May 29, 2007

Selecting the mode of administration for a survey requires that one evaluate a number of
factors and understand clearly the tradeoffs involved in choosing one mode over another.
While telephone and mail research share some similar qualities, there are major
differences. We group these differences into four main categories: (1) sample frame; (2)
non-response bias; (3) measurement error; and (4) time and money.

The Sample Frame

A sample frame is essentially a list used to select the sample of persons to be interviewed.
A high quality frame is one that contains a complete or nearly complete list of the target
population. If the frame is sufficiently complete and accurate, a random sample selected
from this frame will be non-biased and representative of the target population.

One major reason for the adoption of telephone as the gold standard for much of
commercial research over the last 25 years has been the quality of the sample frame. The
standard frame for RDD telephone surveys uses information from the telephone
companies about the assignment status of groups of telephone numbers (known as groups
and blocks). Until recently, this frame was considered largely complete and accurate,
with a relatively small number of missing non-telephone households. More recently, the
widespread adoption of cell phones has threatened the integrity of the RDD frame. For
various reasons, cell phones are not part of this standard frame. Until recently, most
people with a cell phone also had a traditional telephone or landline. However, the
proportion of households having a cell phone but no landline has been growing rapidly and
is now around 13 percent (Blumberg and Luke 2007). Using the standard RDD frame,
these cell-only households have no chance of being selected into any sample. When cases
are excluded from the possibility of being selected, bias might be introduced into the
results. This is known as coverage error.
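The size of this coverage error can be approximated with the standard bias decomposition: the bias in an estimated mean equals the non-coverage rate times the difference between the covered and non-covered groups. A minimal sketch with illustrative numbers (the smoking rates below are invented for the example, not drawn from any study):

```python
# Coverage bias approximation: bias = p_missed * (mean_covered - mean_missed),
# where p_missed is the proportion of the population excluded from the frame.
def coverage_bias(p_missed, mean_covered, mean_missed):
    return p_missed * (mean_covered - mean_missed)

# Illustrative numbers: 13 percent of households are cell-only (excluded from
# the RDD frame); suppose 30% of covered households smoke vs. 40% of cell-only.
bias = coverage_bias(0.13, 0.30, 0.40)
print(round(bias, 4))  # -0.013, i.e. smoking understated by 1.3 points
```

Note how the bias depends on both factors: a large excluded group that resembles the covered population, or a very different but tiny excluded group, produces little distortion.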

It is possible to include the cell-only population into the sample selection process through
the use of a dual frame approach, but this can add significant cost and poses some
difficult weighting challenges. Fortunately, recent research suggests that at this point the
exclusion of cell-only households does not lead to substantial levels of bias, that is, there
are few significant differences in overall estimates produced from samples with and
without cell-only households (See, for example, Keeter (2006) and Brick et al. (2006)).
That said, cell-only households are disproportionately composed of 18-34 year olds, and
so there is some demographic bias in samples that exclude those households. Research
has found that persons in cell-only households differ from landline households on some
important health measures such as smoking and binge drinking (Blumberg and Luke
2007). Studies with a special focus on that age group can have substantial bias if
cell-only households are not included. Thus far, this group is not so large as to
significantly affect estimates for the overall population.
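The dual frame approach mentioned above can be sketched as a composite estimator: the population is split into landline-only, cell-only, and overlap domains, and the overlap cases, reachable through either frame, are blended with a mixing parameter. The domain shares and means below are invented for illustration; in practice they are estimated from the two samples and the weighting is considerably more involved:

```python
# Dual-frame composite estimate. shares = population shares of the
# landline-only, cell-only, and overlap domains; means = domain means,
# with two estimates for the overlap (one from each frame) blended by lam.
def dual_frame_estimate(shares, means, lam=0.5):
    p_ll, p_cell, p_both = shares
    y_ll, y_cell, y_both_ll, y_both_cell = means
    overlap = lam * y_both_ll + (1 - lam) * y_both_cell
    return p_ll * y_ll + p_cell * y_cell + p_both * overlap

# Hypothetical shares: 20% landline-only, 13% cell-only, 67% reachable by both.
est = dual_frame_estimate((0.20, 0.13, 0.67), (0.35, 0.55, 0.40, 0.42))
print(round(est, 4))  # 0.4162
```

The weighting challenge the text alludes to lies largely in choosing the mixing parameter and estimating the domain shares, since respondents often misreport their own telephone status.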

The availability of sample frames for mail studies is also an unfolding story. Historically,
mail studies of the general population have tended to use a so-called “listed” frame
compiled from various sources including the directories of people listed in the white
pages of telephone books. These listed frames are generally incomplete and have high
levels of coverage error. For example, a match of an RDD sample to a list of people in a
telephone book typically yields about a 50 percent match rate. More recently, a new
frame option has emerged and is still being evaluated by survey methodologists. This
frame is maintained by the U.S. Postal Service and is called the Delivery Sequence File
(DSF). As the name suggests, this is the database the USPS uses for their deliveries. This
frame appears to be much more comprehensive and complete than previous mail frames,
although early studies suggest that there still may be significant coverage error,
specifically of non-urban and lower income areas (Link and Mokdad (2005)).

Non-Response Bias

The potential for bias in survey results is not just a function of coverage (i.e., the quality
of the frame). Non-response also is a major factor. One hundred percent response may
be an admirable goal, but it is almost never achieved. Large proportions of any selected
sample, regardless of mode, typically do not respond. This non-response tends to be
systematic rather than random, and therefore bias is introduced. Put in simple terms, the
non-response bias question is: Would the estimates from my survey be different if
everyone had responded? Put another way: Are those who did not respond different from
those who did in some systematic way that biases my results?

Historically, telephone surveys have yielded significantly higher response rates (and
therefore lower non-response) than mail, although the achieved response rate in any
mode is a function of the implementation techniques used (e.g., advance letters,
incentives, refusal conversion, etc.). Telephone surveys typically underrepresent people
who are difficult to catch at home (e.g., younger, single, or non-family households), while
mail surveys have tended to underrepresent lower-SES households. In both instances,
post-stratification weights are used to bring the achieved sample back in line with
population demographics.
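The post-stratification adjustment described above amounts to a simple ratio of population share to achieved-sample share within each demographic cell. A minimal sketch (the cells and numbers here are hypothetical):

```python
# Post-stratification: weight each respondent by the ratio of the population
# share of their demographic cell to that cell's share of the achieved sample.
def post_strat_weights(sample_counts, pop_shares):
    n = sum(sample_counts.values())
    return {cell: pop_shares[cell] / (sample_counts[cell] / n)
            for cell in sample_counts}

# Hypothetical example: 18-34 year olds are 30% of the population but only
# 20% of the achieved sample, so their responses are weighted up.
weights = post_strat_weights({"18-34": 200, "35+": 800},
                             {"18-34": 0.30, "35+": 0.70})
print(weights)  # the 18-34 cell gets a weight of about 1.5
```

Real weighting schemes typically cross several demographics (age by sex by region, for example) and trim extreme weights, but the principle is the same.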

Over the last decade or so we have seen significant declines in response rates across all
modes. The impact on the validity of telephone research has been studied especially
closely because of its widespread use. Recent studies have demonstrated that response
rate is not as important a measure of survey data quality as was once thought (see, for
example, Holbrook et al. (in press) and Keeter et al. (2006)). Surveys with low response
rates (as low as four percent in one comparison) can yield results that are statistically
equivalent to those from surveys with much higher response rates, although a high
response rate is usually still preferable. This is interpreted as a validation of the power of
probability sampling from good-quality sample frames.

Measurement Error

Measurement error can have any number of sources including the interviewer, the
questionnaire, the mode of administration, and the respondent. Interviewers may be
poorly trained and administer the questionnaire incorrectly. The questionnaire may be
flawed in some important way that leads to comprehension problems or
misunderstanding. The presence or absence of an interviewer may influence how a
respondent answers. The respondent may lack the cognitive ability to comprehend and
answer the questions or simply lack the motivation to answer carefully.

Generally speaking, the advantage of well-trained interviewers has led researchers to
prefer telephone over mail. Interviewers ensure that the correct target respondent within
a household completes the survey; administer the survey so all questions are asked; help
respondents understand questions or concepts that might be ambiguous or difficult; and
keep the respondent engaged over the duration of the survey. They also can probe
respondents to get fuller and more accurate answers in open ended questions. One classic
statistic here is item non-response, that is, the proportion of missing data in individual
questions (skipped questions, don’t know or refusal to answer). Telephone studies
typically have significantly lower item non-response than self-administered modes such
as mail and Web.
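Item non-response as defined above is straightforward to compute per question. A minimal sketch, with hypothetical answer codes for the missing categories:

```python
# Item non-response: the proportion of missing answers (skipped questions,
# "don't know", or refusals) for a single question.
MISSING = {None, "DK", "REF"}  # hypothetical codes for skip / don't know / refuse

def item_nonresponse_rate(answers):
    return sum(1 for a in answers if a in MISSING) / len(answers)

# Hypothetical responses to one question: 2 of 8 are missing.
rate = item_nonresponse_rate(["yes", "no", "DK", "yes", None, "no", "yes", "yes"])
print(rate)  # 0.25
```

Comparing this rate across modes for the same questionnaire is one of the simplest ways to quantify the data-quality advantage of interviewer administration.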

The combination of well-trained interviewers and computer-assisted interviewing
technologies (such as CATI) has made it possible to design and administer very
complex questionnaires that tailor the interview to each respondent, prevent routing
errors, reduce data recording errors, maintain consistency in respondent answers, and use
design techniques such as randomization and rotation to reduce order and context effects.

Telephone surveys also lend themselves to interviewing in multiple languages.

Mail surveys, by virtue of their self-administered, paper-and-pencil format, are much
more limited. Questionnaires are generally shorter and simpler, have few skips, and must be
crafted to be as clear and unambiguous as possible. The researcher has less control over
who completes a mail survey and the order in which questions are answered. Recording
errors are much more common than with interviewer administration. Commonly used
techniques such as randomization, rotation, and multiple language interviewing are
extremely difficult to implement.

Despite their many benefits, interviewers can also be a source of error if they are not
well-trained and monitored. Further, the presence of the interviewer can sometimes lead
to under reporting of socially sensitive behaviors (e.g., drug use or binge drinking), a
phenomenon known as social desirability bias. The principal advantage of mail surveys in
terms of measurement error is the potential reduction in social desirability bias.

Time and Money

Arguably the simplest comparison of the two modes focuses on cost and length of field
period. Mail surveys are almost always less expensive than telephone surveys because
the costs of mailing and data conversion (keying or scanning) are significantly less than
the cost of interviewer labor. However, mail surveys typically require an incentive to
achieve any sort of reasonable response rate, and depending on the magnitude of that
incentive they can sometimes approach the cost of telephone surveys, where incentives
are generally offered only when very high response rates are necessary.

On the other hand, mail surveys generally require much longer field periods, and the
researcher’s ability to either predict or control this key condition is very limited when
compared to telephone.


As we hope the foregoing makes clear, telephone surveys have a broad set of advantages
over mail. Further, these advantages are generally recognized by researchers and
comprise the main reason why RDD telephone surveys have been consistently viewed as
the methodology of choice for high quality, general population surveys. Generally
speaking, mail surveys have tended to be viewed as a compromise that sacrifices data
quality in return for lower costs.

At present, however, there are dynamics at work relative to sample frames that could
undermine the dominant role that telephone surveys have traditionally enjoyed. The
rapid rise of cell-only households has begun to introduce coverage problems into the
traditional RDD sample frame. The most likely remedies will increase the cost of
telephone research significantly. At the same time, the emergence of a new and possibly
more complete mail sample frame in the form of the DSF may create a significant
coverage advantage for mail surveys.

Issues of coverage and cost aside, the combination of interviewer administration and
modern CATI technology continues to present an extremely powerful data quality
argument for telephone research.


References

Blumberg, S.J., and Luke, J.V. 2007. “Wireless Substitution: Early Release Estimates
Based on Data from the National Health Interview Survey, July – December 2006,”
National Center for Health Statistics.

Brick, M.J., Dipko, S., Presser, S., Tucker, C., and Yuan, Y. 2006. “Nonresponse Bias in
a Dual Frame Sample of Cell and Landline Numbers,” Public Opinion Quarterly, 70:

Holbrook, A.L., Krosnick, J.A., and Pfent, A.M. In press. “Response Rates in Surveys
by the News Media and Government Contractor Survey Research Firms,” in J.
Lepkowski, B. Harris-Kojetin, P.J. Lavrakas, C. Tucker, E. de Leeuw, M. Link, M. Brick,
L. Japec, and R. Sangster (eds.), Telephone Survey Methodology. New York: Wiley.

Keeter, S. 2006. “The Impact of Cell Phone Noncoverage Bias on Polling in the 2004
Presidential Election,” Public Opinion Quarterly, 70: 88-98.

Keeter, S., Kennedy, C., Dimock, M., Best, J., and Craighill, P. 2006. “Gauging the
Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey,”
Public Opinion Quarterly, 70: 759-779.

Link, M.W., and Mokdad, A.H. 2005. “Use of Alternative Modes for Health Surveillance
Surveys: Results from a Web/Mail/Telephone Experiment,” Epidemiology, 16: 701-704.

