THE IMPACTS OF CALLER ID ON RESPONSE AND REFUSAL RATES FOR THE BRFSS

Barbara M. Fernandez, MSPH and Kristie M. Hannah, MS
Macro International Inc., 126 College Street, Burlington, VT 05401

Key Words: Caller ID, response rates, telephone survey

Objective(s): This is a case study conducted to determine the impact of Caller ID on the Georgia
Behavioral Risk Factor Surveillance System (BRFSS). We hypothesized that by displaying the
text “GA Public Health” and an Atlanta-based telephone number on the Caller ID systems of
respondents, fewer telephone attempts would be required to make an initial contact with the
household (compared to the Caller ID displaying that the telephone number is “unknown”). We
also hypothesized that fewer telephone attempts would be required for an interview to be
completed or a record to be dispositioned as a “refusal.”

Method(s): We analyzed data from January and February of 2007 gathered for the Georgia
BRFSS. Approximately 50 percent of the sample for the Georgia BRFSS was randomly
assigned to receive a Caller ID display. Using paired t-tests, we looked for differences between
those numbers that received an Atlanta-based phone number on the Caller ID and those that
displayed "unknown number" in the number of attempts needed to make first contact, and to
resolve completed interviews and final refusals. The disposition of the first contact was compared
by Caller ID treatment using chi-square tests.

Result(s): While we found little difference in the number of attempts needed to contact a
potential respondent or complete an interview, the findings on the average number of attempts
needed to finalize a refusal are conflicting. The Caller ID sample needed fewer attempts to
resolve a record overall, but more attempts to finalize a refusal. This is likely due to
differences in the number of attempts needed to identify ineligible records, such as non-working
or business numbers.

Conclusion(s): Cooperative respondents will complete the survey regardless of what they see on
their Caller ID display. On the other hand, telephone numbers that were ultimately resolved as
language barriers, impairments, non-residences, or refusals needed more attempts to finalize
when Caller ID was displayed, suggesting that such people may be more likely to screen phone
calls than they would if the display simply said “unknown.” If this is true, the use of a Caller ID
display could result in higher interviewing costs, with little benefit to response rates.


Introduction

Over the past few years, the prevalence of Caller ID has increased across the U.S. A concern
among telephone survey researchers is that households use the device to screen calls and thereby
avoid participating in telephone surveys. When households screen calls with Caller ID,
non-response bias increases and survey results become less representative of the population of
interest. Caller ID screening introduces two additional problems for telephone survey research:
first, the validity of the telephone number and the eligibility of the household remain unknown
for successful call screeners; second, if more attempts are required to reach households that use
Caller ID to screen their calls, the cost of conducting research increases (Oldendick 1999).

The Behavioral Risk Factor Surveillance Survey (BRFSS) is a monthly state-based public health
survey sponsored by the Centers for Disease Control and Prevention (CDC) and conducted in
every state. As data collection costs increase, and response rates decline, telephone survey
researchers must look for methods to counteract these trends. One area of focus is to identify
protocols that increase the likelihood that a household with Caller ID will answer the telephone.
The goal of the research presented in this paper involves testing a protocol designed to counter
the potential impact of Caller ID on the BRFSS.

Caller ID is an important area of focus for telephone survey research because increasing numbers
of U.S. households use Caller ID on their home telephones. In 1995, Tuckel and O’Neill
reported that 10 percent of U.S. households had Caller ID. According to the American
Teleservices Association (ATA 2003), 39 percent of U.S. households had Caller ID in 2001, and
41 percent in 2002. In 2003, the Pew Center (2004) found that 52 percent of U.S. households had
Caller ID. These studies were all conducted via telephone, so the actual percentage of households
with Caller ID is likely even higher, because households that screen calls and do not answer calls
from unfamiliar individuals or organizations most likely did not participate in these studies. An
in-person study conducted by Tuckel and O’Neill in 2000 (2001) found that 67 percent of U.S.
households had Caller ID, a much higher percentage than telephone-only studies have been able
to identify.

The use of Caller ID varies geographically and demographically, which can impact studies
differently depending upon the survey’s area of interest. In 2002, the ATA reported that
Southerners were most likely to have Caller ID (48 percent), followed by residents of the
Midwest (42 percent), West (36 percent), and the Northeast (33 percent). By age, 18-24 year-
olds were most likely to have Caller ID (57 percent), followed by 25-34 year-olds (54 percent).
Respondents aged 65 years and older were least likely to have Caller ID, at 26 percent (ATA
2003). These results from the ATA are supported by research conducted by the Pew Center,
which in 2003 found that 18-29 year-olds were most likely to have Caller ID (41 percent).
African Americans have also been shown to have higher rates of Caller ID use compared to
respondents of other racial/ethnic groups—73 percent in 2004 (Pew Center 2004). Tuckel and
O’Neill (2001) found in their 2000 study that households with Caller ID were more likely to be
under 60 years of age, African American, never married or separated/divorced, have some
college education, have children in the household, and live in the East South Central and West
South Central regions of the U.S. As the use of Caller ID varies geographically and
demographically, the potential for non-response bias increases if certain segments of Caller ID
subscribers screen their calls at different rates.

It is not enough to determine the percentages of households with Caller ID. One must
understand how households use the device—is it to avoid calls from certain individuals or
organizations, is it to only answer calls that display a phone number or organization's name (in
contrast to a display of “unknown”), or is it to answer calls only from known individuals and
organizations? The research to date has shown that a majority of households with Caller ID use
the device to screen calls. In 2004, 86 percent of U.S. households with Caller ID used the device
“at least some of the time” to screen calls, and 57 percent “always” used their Caller ID to screen
calls (Pew Center 2004). Oldendick and Link (1999) found that 66 percent of households with
Caller ID looked at the message displayed on the device before they answered the telephone.
However, it is important to understand what these households are screening for: are they
declining to answer any telephone number they do not recognize, or only those numbers
designated as “unknown” on the Caller ID display? Oldendick and Link (1999) found that when the message
on the Caller ID stated “out of area” or “listing unknown”, households were more hesitant to
answer the telephone. Callegaro and McCutcheon (2005) found that response rates were higher
in an RDD study when the name of the survey organization (in that case, Gallup) was displayed
on the Caller ID compared to the control group in which no information was displayed on the
Caller ID. Tuckel (2001) found that in 2000, only 36 percent of households with Caller ID were
either “almost certain” or “very likely” to answer the telephone if it was an unrecognizable
number.

Even with evidence that households with Caller ID use the device to screen calls, the literature
tends to describe the situation optimistically. Researchers are successful in contacting and
completing interviews among households that have Caller ID, and households with Caller ID
have stated that they either did not read the Caller ID display before they answered the call, or
the message that appeared did not deter them from answering (Link and Oldendick 1999). In
addition, telephone interviews have been completed with households that define themselves as
“screeners.” Thus, telephone survey researchers have been able to conduct interviews
even in households that consider themselves full-time screeners, and the potential bias
from the use of this device may not be as significant as one might expect.

As call screening and avoidance technologies develop and become more popular, researchers
must develop protocols to bypass these devices in order to minimize non-response bias and
control increasing costs that may result from the use of this device. Tuckel and O’Neill (2001)
found that from 1995 to 2000, the percentage of respondents who answered an unrecognizable
number on their Caller ID declined noticeably; the percent of respondents who stated that they
were either “almost certain to answer” or “very likely to answer” a call with an unrecognizable
number dropped by more than 20 percentage points. For Caller ID, one such protocol is to display
a text message identifying the survey sponsor or data collection agency, or a local telephone
number, rather than an “unknown” number.

A case study was conducted with the Georgia BRFSS to determine if displaying text with the
name of the survey sponsor and a local telephone number would increase the likelihood that a
household with Caller ID would answer the telephone. In January and February of 2007, the
sample for the Georgia BRFSS was split into two groups. The control group continued to display
an “unknown” telephone number in households which subscribe to Caller ID. The experimental
group displayed “GA Public Health”—along with a local, Atlanta-based telephone number.

The research was conducted to test the following hypotheses:
   ● Compared to Caller ID displaying an “unknown” number, displaying “GA Public Health”
     and a local Atlanta-based phone number would result in fewer attempts required to make
     an initial contact with the household.
   ● Compared to Caller ID displaying an “unknown” number, displaying “GA Public Health”
     and a local Atlanta-based phone number on the Caller ID would result in fewer attempts
     required to disposition a record as a “completed interview” or a “refusal”.

Methods

The acquisition of a new dialer by Macro International Inc. (Macro) made it possible to transmit
a name and telephone number on a Caller ID display. In January and February of 2007, the
available sample for the Georgia BRFSS was randomly separated into two equal groups. The
sample designated to receive the Caller ID treatment was loaded into a study to be called on the
new dialer, which would display “GA Public Health” along with an Atlanta-based telephone
number. When dialed, this number forwards to a toll-free number that rings into one of Macro’s
telephone survey call centers. The display was tested by calling Macro employees who use
Caller ID.

The sample designated as control was loaded into a study and dialed using Macro's original
dialer. Calls originating from this dialer displayed an “unknown” telephone number. Fielding
for each study occurred simultaneously and was subject to the same BRFSS dialing protocol. A
total of 17,880 records were dialed during the two months, with 8,873 assigned to the
experimental group and 9,007 assigned to the control group. Nearly 150,000 call attempts were
made.
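
For illustration only, the following sketch (in Python) shows one way such a random split of the
sample could be produced; the file and column names are hypothetical, and this is not the actual
dialer-loading process used in the study.

    # Illustrative sketch only: randomly splitting a telephone sample into a
    # control group ("unknown" display) and an experimental group ("GA Public
    # Health" display). File and column names are hypothetical.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=2007)

    sample = pd.read_csv("georgia_sample.csv")   # one row per sampled telephone number

    # Assign roughly half of the records to each treatment at random
    mask = rng.permutation(len(sample)) < len(sample) // 2
    sample["group"] = np.where(mask, "caller_id_display", "control")

    # Each group would then be loaded onto its respective dialer
    sample[sample["group"] == "caller_id_display"].to_csv("experimental_load.csv", index=False)
    sample[sample["group"] == "control"].to_csv("control_load.csv", index=False)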

We tested our hypotheses using paired t-tests to examine differences, by Caller ID treatment, in
the number of attempts needed to make first contact, and to resolve completed interviews, final
refusals, and all records. Chi-square tests were used to examine differences in the disposition of
the first contact by Caller ID treatment.
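
For concreteness, the sketch below illustrates these comparisons on a hypothetical record-level
call-history file; the column names and disposition codes are assumptions, and an independent
two-sample t-test and chi-square test stand in for the analyses reported here.

    # Minimal sketch of the comparisons described above; not the actual analysis
    # code. Column names and disposition codes are hypothetical, and an
    # independent two-sample t-test stands in for the tests reported in the paper.
    import pandas as pd
    from scipy import stats

    records = pd.read_csv("call_history.csv")   # columns: group, attempts_to_resolve,
                                                #          first_contact_disposition
    ctl = records[records["group"] == "control"]
    exp = records[records["group"] == "caller_id_display"]

    # Difference in mean attempts needed to resolve a record
    t, p = stats.ttest_ind(ctl["attempts_to_resolve"], exp["attempts_to_resolve"], equal_var=False)
    print(f"control mean={ctl['attempts_to_resolve'].mean():.3f}, "
          f"caller ID mean={exp['attempts_to_resolve'].mean():.3f}, t={t:.2f}, p={p:.3f}")

    # Disposition of the first contact by treatment group
    table = pd.crosstab(records["group"], records["first_contact_disposition"])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi-square={chi2:.1f}, df={dof}, p={p:.4f}")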


Results

We looked for possible evidence of call screening and avoidance by examining the percentage of
attempts resulting in “answering machine” or “no answer” dispositions. The group with Caller
ID had significantly higher rates of “answering machine” and “no answer” dispositions
(p<0.0001), as shown in the table below.

       Disposition of call attempts by Caller ID treatment
              Disposition               Non-Caller ID Records      Caller ID Records
              No Answer                          24%                      27%
              Answering Machine                  19%                      21%
              Contact                            17%                      17%
              Other                              41%                      34%

We compared the number of attempts needed to make initial contact with a potential respondent;
a significant difference could be an indication of a higher prevalence of call screening by the
group needing more attempts. Ultimately, no such evidence was found. The group with Caller
ID required a mean of 3.11 attempts to make the first contact, compared to a mean of 3.02
attempts for the control group (p=0.22).

       Mean attempts needed to contact a potential respondent
              Caller ID Status             Number of Attempts            T        Significance
              Did Not Receive Display      3.016 (2.910 to 3.122)
              Received Display             3.113 (3.000 to 3.225)
              Diff (1-2)                  -0.097 (-0.251 to 0.058)     -1.23         0.219


The use of a Caller ID display may influence not only the likelihood of answering the phone, but
the willingness to complete the survey. We looked for differences in the mean number of
attempts needed to complete an interview or finalize a refusal. While there was no significant
difference in the number of attempts needed to complete an interview (p=0.615), the sample with
Caller ID needed an average of 0.64 more attempts to finalize a refusal than the control sample
(p=0.003).

       Mean attempts needed to complete an interview
              Caller ID Status             Number of Attempts            T        Significance
              Did Not Receive Display      5.652 (5.203 to 6.102)
              Received Display             5.491 (5.050 to 5.931)
              Diff (1-2)                   0.162 (-0.470 to 0.793)      0.50          0.615

       Mean attempts needed to finalize a refusal
              Caller ID Status             Number of Attempts            T        Significance
              Did Not Receive Display      9.332 (9.044 to 9.619)
              Received Display             9.969 (9.662 to 10.275)
              Diff (1-2)                  -0.637 (-1.057 to -0.217)    -2.98         0.003


Finally, we looked at the mean number of attempts made on all finalized records as a measure of
dialing efficiency. The sample with Caller ID needed an average of 0.69 fewer attempts to
finalize a record than the control sample (p<0.001). This would seem to conflict with the Caller
ID sample needing more attempts to finalize a refusal. It is important to note that a resolved
record was defined as any telephone number in which the interviewer either completed an
interview, got a hard refusal, or learned that an interview would not be possible (i.e., the phone
number was non-working or the household was ineligible for the study). During fielding, it was
discovered that the new dialer (from which the experimental group sample was dialed) was
more efficient at detecting non-working numbers. While both samples had similar percentages
of non-working numbers, the non-working numbers in the Caller ID sample were dispositioned
using fewer attempts, thus driving the overall mean down. When non-working numbers are
excluded from the analysis, the Caller ID sample needed an average of 0.39 more attempts to
resolve a record (p=0.007).
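
The following sketch illustrates this adjustment on the same hypothetical call-history file
introduced earlier: the comparison of mean attempts is simply re-run after non-working numbers
are excluded. The disposition codes and column names remain assumptions.

    # Sketch of the adjustment described above: compare mean attempts to resolve
    # a record with and without non-working numbers. Codes and columns are hypothetical.
    import pandas as pd
    from scipy import stats

    records = pd.read_csv("call_history.csv")   # columns: group, attempts_to_resolve,
                                                #          final_disposition

    def compare(df, label):
        ctl = df.loc[df["group"] == "control", "attempts_to_resolve"]
        exp = df.loc[df["group"] == "caller_id_display", "attempts_to_resolve"]
        t, p = stats.ttest_ind(ctl, exp, equal_var=False)
        print(f"{label}: control={ctl.mean():.3f}, caller ID={exp.mean():.3f}, p={p:.3f}")

    compare(records, "all resolved records")
    compare(records[records["final_disposition"] != "non_working"], "excluding non-working numbers")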

    Mean attempts needed to resolve a record (all resolved records)
            Caller ID Status             Number of Attempts            T        Significance
            Did Not Receive Display      4.891 (4.745 to 5.038)
            Received Display             4.201 (4.078 to 4.323)
            Diff (1-2)                   0.691 (0.499 to 0.882)       7.09         <0.001

    Mean attempts needed to resolve a record (excluding non-working)
            Caller ID Status             Number of Attempts            T        Significance
            Did Not Receive Display      6.007 (5.815 to 6.200)
            Received Display             6.398 (6.188 to 6.607)
            Diff (1-2)                  -0.391 (-0.675 to -0.107)    -2.70          0.007

Conclusions

The failure to find any difference in the number of attempts needed to complete an interview
suggests that cooperative respondents will complete the survey regardless of what they see on
their Caller ID display. On the other hand, telephone numbers that were ultimately resolved as
language barriers, impairments, non-residences, or refusals needed more attempts to finalize,
suggesting that people whose Caller ID displayed “GA Public Health” and a local telephone
number were more likely to screen phone calls than those whose Caller ID display simply said
“unknown”. If this is true, the use of a Caller ID display could result in higher interviewing costs,
with little benefit to response rates. However, these results are promising with regard to the
possibility of bias resulting from call screening, given that the southern region of the U.S. has
the highest utilization rate of Caller ID (ATA 2003).

There are several limitations to this research. First, we were unable to determine whether or not
the household had Caller ID, and if they did, whether they viewed the display prior to answering
the phone. Second, a limitation for all research on Caller ID conducted via telephone is that we
were unable to reach households that successfully screened our calls via Caller ID. Ideally, the
telephone survey should be done in conjunction with in-person interviews for non-respondents,
to determine if non-respondents to the telephone portion of the survey have Caller ID, and if so,
how often they use it to screen calls. Third, the two samples were called on two different dialers,
leaving the possibility open that the dialer had some effect on the results. While the only
apparent difference is in the treatment of non-working numbers, the study should be repeated
with both samples using the same dialer in order to fully rule out a possible effect. Finally, this
research was conducted in one state, Georgia, and as the research has shown, rates of Caller ID
vary across the U.S.; thus, the results of this study may not be generalizable to other areas of the
country.

While this case study adds to the knowledge of what survey researchers can do to minimize the
impact of Caller ID, further research is recommended to identify whether any Caller ID displays
may prompt a reluctant respondent to pick up the phone rather than screen the call and not
answer, and to determine how different demographic and geographic subgroups react to the
display.

REFERENCES

American Teleservices Association. (2003). Consumer study 2002. http://www.ataconnect.org/
IndustryResearch/ConsumerSTudy2002.html. [Accessed March 5, 2007].

Callegaro M, McCutcheon A. (2005). Who’s calling? The impact of Caller ID on telephone
survey response. 2005 Proceedings of the American Association for Public Opinion Research.

Link M, Oldendick R. (1999). Call screening: Is it really a problem for survey research? Public
Opinion Quarterly, 63:557-89.

Oldendick R, Link M. (1999). Call screening: What problems does it pose for survey researchers?
Paper presented at the International Conference on Survey Nonresponse.

Tuckel P, O’Neill H. (2001). The vanishing respondent in telephone surveys. Proceedings of the
Annual Meeting of the American Statistical Association.

Tuckel P, O’Neill H. (1996). Screened out: New telephone technologies erect barriers to
researchers, but it’s nothing personal. Marketing Research, 8(3):34-43.

The Pew Research Center for the People and the Press. (2004). Polls face growing resistance, but
still representative. http://people-press.org/reports/display.php3?PageID=812. [Accessed March
5, 2007].

				