                         Experimental Designs Affect Preference Measures




                                             Kevin J. Boyle
                               Libra Professor of Environmental Economics
                              Department of Resource Economics and Policy




        The NOAA Contingent Valuation Panel (1993) made a number of recommendations regarding guidelines for implementing a credible contingent-valuation study of nonuse values. Unfortunately, many of these recommendations appear to be based on the personal heuristics of Panel members rather than on findings published in the literature or on an anticipation of where the literature is headed. While this is a strong critique, the Panel fell into the same trap in which many contingent-valuation researchers find themselves, a trap that was itself one of the motivations for convening the Panel.

        One example is the Panel's endorsement of referendum valuation questions, which are what have commonly been referred to as dichotomous-choice questions with the payment vehicle expressed as a referendum. Contingent-valuation researchers, myself included at times, have advocated this approach because it is easier for people to answer. The Panel appears to have based its recommendation on an analogy to the credibility of political polling. Unfortunately, neither of these lines of reasoning ensures that this particular questioning format will yield unbiased or minimum-variance estimates of welfare.

        We have known for years that monetary incentives in contingent-valuation questions affect welfare estimates. A dichotomous-choice question is simply the first round of an iterative-bidding question. Boyle et al. (1985), Desvousges et al. (1983), Samples (1985), Rowe et al. (1980) and Thayer (1981) all found evidence that starting bids influence the final valuation response in iterative-bidding questions. As a result, very few studies in the last decade have used iterative bidding, and dichotomous choice has become the question format of choice. Unanswered in this shift was the effect of the magnitude of the single bid on respondents' yes/no responses to dichotomous-choice questions.

        One step back toward iterative bidding was taken when Hanemann et al. (1991) proposed double-bounded, contingent-valuation questions based on the statistical efficiency of having respondents answer two bids rather than a single bid. Subsequently, Herriges and Shogren (1996) demonstrated that respondents anchor on the first bid when responding to the second bid; this effect is an experimentally induced error. The question here became one of the trade-off between the statistical efficiency of responding to two bids and the potential experimental bias introduced by a second bid.
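
        To make the efficiency argument concrete, the sketch below shows the interval logic behind it: each pattern of yes/no answers to the two bids brackets a respondent's willingness to pay (WTP) in a narrower interval than a single bid can. This is a minimal illustration of my own construction, assuming a normal WTP distribution; it is not code from Hanemann et al. (1991).

    # Minimal sketch of the double-bounded interval likelihood,
    # assuming WTP ~ Normal(mu, sigma). Illustrative only.
    import numpy as np
    from scipy.stats import norm

    def double_bounded_loglik(params, bid1, bid2, ans1, ans2):
        """Log-likelihood of yes/no answers to a first and a follow-up bid.

        The follow-up bid2 is higher after a "yes" and lower after a
        "no", so each answer pattern brackets WTP in an interval:
          yes/yes: WTP > bid2          yes/no: bid1 < WTP <= bid2
          no/yes:  bid2 < WTP <= bid1  no/no:  WTP <= bid2
        """
        mu, sigma = params
        a1, a2 = np.asarray(ans1, bool), np.asarray(ans2, bool)
        lower = np.where(a1 & a2, bid2,
                np.where(a1 & ~a2, bid1,
                np.where(~a1 & a2, bid2, -np.inf)))
        upper = np.where(a1 & a2, np.inf,
                np.where(a1 & ~a2, bid2,
                np.where(~a1 & a2, bid1, bid2)))
        prob = norm.cdf(upper, mu, sigma) - norm.cdf(lower, mu, sigma)
        return np.log(np.clip(prob, 1e-12, None)).sum()

The narrower intervals are what tighten the confidence bounds; the anchoring that Herriges and Shogren document undermines this logic because the answer to the second bid is not independent of the first bid's magnitude.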

        Concurrent with the work by Hanemann et al. and by Herriges and Shogren, a number of researchers were investigating the statistical properties of optimal bid designs for dichotomous-choice questions (Alberini, 1995a, 1995b; Cooper, 1993; Duffield and Patterson, 1991; Kanninen, 1993a, 1993b; Nyquist, 1990). These bid designs were typically demonstrated using Monte Carlo simulations. While such simulations are free of the statistical "noise" of the real world and allow the investigator to focus on the statistical properties of various designs, they tell you nothing about how people will respond to the bid levels in an actual experiment. Boyle et al. (1998) and Cooper and Loomis (1992) have shown that differing bid designs can substantially affect estimates of central tendency and dispersion. People do not respond to dichotomous-choice bids as mechanistic individuals evaluating the posed bid against some known level of personal welfare; high bid levels appear to "wag the tail" of estimated distributions of welfare.
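
        The gap between simulation and behavior can be illustrated with a small Monte Carlo exercise of the kind these papers employ. The sketch below is my own hedged construction, not any cited author's design: it simulates mechanistic respondents who answer yes exactly when a known WTP draw exceeds the assigned bid. Under that assumption, even very different bid designs recover the true parameters, which is precisely why such simulations cannot reveal the anchoring that appears in survey data.

    # Monte Carlo comparison of two bid designs with mechanistic
    # respondents: yes iff WTP >= bid, WTP ~ Normal(50, 20).
    # All numbers are illustrative assumptions.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    TRUE_MU, TRUE_SIGMA, N = 50.0, 20.0, 500

    def simulate_and_fit(bids):
        """Assign bids, simulate yes/no responses, fit a probit by MLE."""
        assigned = rng.choice(bids, size=N)
        yes = rng.normal(TRUE_MU, TRUE_SIGMA, N) >= assigned
        def negll(p):
            mu, sig = p
            pr_yes = 1.0 - norm.cdf(assigned, mu, abs(sig))
            pr = np.where(yes, pr_yes, 1.0 - pr_yes)
            return -np.log(np.clip(pr, 1e-12, None)).sum()
        return minimize(negll, x0=[40.0, 10.0], method="Nelder-Mead").x

    centered = simulate_and_fit(np.array([30, 40, 50, 60, 70]))
    tailed = simulate_and_fit(np.array([10, 25, 50, 100, 200]))
    print(centered, tailed)  # both near (50, 20) for mechanistic agents

In field data, by contrast, the response probability itself shifts with the bid presented, a mechanism the simulation excludes by construction.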

        Finally, Boyle et al. (1997), using data from independent applications of open-ended and dichotomous-choice questions, have shown that the magnitude of the initial bid influences the probability of a "yes" response to that bid. Removing this effect can reduce welfare estimates by as much as 40 percent. Thus, the Boyle et al. (1997) result indicates that simple dichotomous-choice questions do not escape the starting-point bias of iterative-bidding questions, and the Herriges and Shogren (1996) result indicates that the statistical efficiency of double-bounded questions must be weighed against the experimental error introduced by the effect of the magnitude of the initial bid on the yes/no response to the subsequent bid. Collectively, I suggest, these two studies indicate that most of the effect occurs in the response to the initial bid.

        These findings, while new to contingent-valuation practitioners, are not new to individuals in other fields of study. Shapiro (1968) reports that marketing studies in the 1940s, 1950s and 1960s found that consumers choose higher-priced items when faced with uncertainty regarding quality, and exhibit greater satisfaction with their choices. This result should not surprise us. Contingent-valuation surveys by their very nature present limited information and are likely to leave respondents uncertain about what exactly they are buying when they respond to a valuation question. Moreover, consumers are continually prompted by advertisements that quality costs more, and the initial bid in a dichotomous-choice question is a very clear and salient feature of the exercise. The psychology literature also contains citations dating back to the 1960s and 1970s indicating that final answers to survey questions are functions of a starting point and of question framing (Lichtenstein and Slovic, 1973; Slovic and Lichtenstein, 1971; Tversky and Kahneman, 1982). It seems clear that it is impossible to develop welfare estimates that are independent of the experimental design within which they are elicited, and this is true for other aspects of contingent-valuation studies, not just the bid incentive.

        Where does this leave us? I propose that there are two potential lines of research. One approach is to custom-design survey instruments for respondents; the other is to minimize the effect of particularly problematic aspects of the survey design. I will continue with the bid example because the monetary incentive is one of the key aspects of eliciting stated-preference measures of economic welfare.

        The first approach is to customize bids for each individual within a study. My research (Boyle et al., 1997 and 1998) suggests that the greater the distance of the proposed bid from an individual's formulated value, the greater will be the experimentally induced error from the bid. The formulated value is simply the value that an individual would place on the item being valued in the absence of a bid. Customization could be done, for example, by conducting a large pretest with an open-ended question and developing a model that predicts values from respondent characteristics. Using this model, information on respondent characteristics could be collected in the first part of a two-stage survey and used to develop bids for each respondent in the second stage. This could involve a mail/mail, mail/telephone, or telephone/in-person implementation, or could be done within a single telephone interview. Given the model predictions and the sampling distributions around the predictions, the approaches of Alberini, Cooper and Kanninen could be used to develop bids for the second-stage, dichotomous-choice question. This would ensure that someone who holds a very low value would not receive a very high bid, and that the converse would not occur. This approach, however, substantially increases the complexity of the survey design and the cost of the survey implementation. Individual customization is also probably not feasible for other features of contingent-valuation designs that can influence welfare estimates, such as the presentation of information about the item to be valued. I do not know of anyone else who is undertaking this line of research.
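
        As a hypothetical illustration of the two-stage idea, the sketch below fits a simple pretest regression of open-ended values on respondent characteristics and then assigns each second-stage respondent the bid from a design grid that is closest to his or her predicted value. The covariates (income, age), the grid, and the linear model are all assumptions of mine; the methods of Alberini, Cooper and Kanninen would refine the grid using the sampling distribution around each prediction.

    # Hypothetical two-stage bid assignment: a pretest regression,
    # then a customized bid near each respondent's predicted value.
    import numpy as np

    rng = np.random.default_rng(1)

    # Stage 1: simulated pretest with open-ended values and covariates.
    income = rng.uniform(20, 120, 200)   # household income, $1000s
    age = rng.uniform(18, 80, 200)       # years
    stated_value = 0.5 * income + rng.normal(0, 10, 200)

    X = np.column_stack([np.ones(200), income, age])
    beta, *_ = np.linalg.lstsq(X, stated_value, rcond=None)

    # Stage 2: predict a value from a new respondent's characteristics
    # collected in the first contact, then assign the nearest grid bid.
    def assign_bid(income, age, grid=(5, 10, 25, 50, 100, 200)):
        predicted = np.array([1.0, income, age]) @ beta
        return min(grid, key=lambda b: abs(b - predicted))

    print(assign_bid(income=60, age=45))  # a mid-range bid, e.g., 25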

        The second approach is to embed the problematic contingent-valuation feature in the survey design in a way that prevents respondents from focusing their responses on this particular piece of information. One such approach is the so-called multiple-bounded, contingent-valuation question, where respondents are provided with a panel of bids (e.g., 10) and asked to answer yes/no to each bid amount (Welsh and Poe, 1998). This format prevents respondents from focusing on a single bid amount, but concerns arise regarding the range of bids, the intervals between bids, and whether respondents center their responses in the panel. A paper by Rowe et al. (1996), using payment-card data, suggests that a panel of bids does not lead to range or centering bias. However, Alberini et al. (1998) used independent implementations of a multiple-bounded question in which one sample received a panel that went from low to high dollar amounts and the other received a panel that went from high to low. The estimated response distributions were significantly different, and the high-to-low panel yielded higher welfare estimates. Thus, going from one bid to a panel of bids may reduce the effect of the single bid on welfare estimates, but the panel itself may introduce another type of experimental error. Welsh and Poe do note that the panel leads to tighter confidence bounds than occur with a single bid. As with the double-bounded question noted above, we are trading off relative biases and efficiency, not totally removing experimental effects.
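
        The mechanics of the multiple-bounded format can be seen in a short sketch. Assuming monotone answers (every bid below some switch point receives a yes), the highest accepted bid and the lowest refused bid bracket the respondent's WTP, and the resulting intervals can feed an interval-censored likelihood of the same form as in the double-bounded sketch above. The panel of bid amounts here is illustrative, not Welsh and Poe's.

    # Sketch of how a multiple-bounded panel brackets WTP. The panel
    # and the monotone-response assumption are illustrative.
    import numpy as np

    panel = np.array([1, 5, 10, 25, 50, 100, 250, 500, 750, 1000])

    def wtp_interval(answers):
        """answers: one 1 (yes) / 0 (no) per bid in `panel`."""
        keep = np.asarray(answers, bool)
        yes_bids, no_bids = panel[keep], panel[~keep]
        lower = yes_bids.max() if yes_bids.size else 0.0   # highest accepted
        upper = no_bids.min() if no_bids.size else np.inf  # lowest refused
        return float(lower), float(upper)

    print(wtp_interval([1, 1, 1, 1, 0, 0, 0, 0, 0, 0]))  # -> (25.0, 50.0)

Range and centering concerns enter through the panel itself: shifting its endpoints or spacing changes the intervals any respondent can express, and the ordering effect Alberini et al. (1998) detected shows the panel is not behaviorally neutral.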

        Another approach is to use conjoint analysis, where a number of attributes of the heterogeneous valuation commodity are explicitly specified and the bid incentive simply becomes one of several elements (Johnson and Desvousges, 1997; Lareau and Rae, 1986; MacKenzie, 1993; Peterson and Brown, 1998; Roe et al., 1996). While some people have heuristically suggested that conjoint will minimize the effect of the bid, no one to my knowledge has investigated this issue. Alternatively, we (Boyle et al., 1998) found that while ranking, rating and choose-one response options for conjoint questions lead to different structural models, they do not lead to significantly different estimates of welfare. This latter result appears to be due to the relatively large confidence bounds on the estimates of central tendency. Again, the trade-off involves relative biases and efficiency.
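
        To see why the monetary attribute is pivotal for welfare analysis, consider a conventional linear conjoint utility specification. The calculation below is a textbook illustration with assumed coefficients, not a result from any of the studies cited above: with utility U = b_price x price + b_q x quality, the marginal willingness to pay for quality is -b_q / b_price, so the welfare estimate inherits all of the imprecision in the estimated price coefficient.

    # Illustrative conjoint welfare calculation with assumed
    # coefficients: marginal WTP for quality is -b_q / b_price.
    b_price, b_quality = -0.04, 0.90   # hypothetical estimates

    mwtp_quality = -b_quality / b_price
    print(f"MWTP for one unit of quality: ${mwtp_quality:.2f}")  # $22.50

    # A small price coefficient with wide confidence bounds produces
    # wide welfare bounds: the same b_quality with b_price anywhere in
    # [-0.06, -0.02] implies an MWTP between $15 and $45.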

        While question formats that embed the bid in the experimental design may reduce the induced experimental error from this design feature, no research has established this point. Moving to multiple-bounded questions without examining panel effects repeats the earlier mistake of falling back from iterative bidding to a simple dichotomous-choice question to get away from starting-point effects. Switching from contingent valuation to conjoint analysis is akin to using voting research to support dichotomous-choice questions. Voting generally does not involve a specific cost to the individual voter, and traditional conjoint analyses have not focused specifically on the monetary incentive attribute, which is necessary for welfare analyses.




        All of these results indicate that there will always be experimental design effects in the elicitation of stated preferences. The questions really are:

       Where do the largest experimentally induced errors occur?

       What other errors arise as you try to minimize the larger experimental errors?

In considering these questions, it is necessary to consider trade-offs in terms of estimates of central tendency and dispersion. Moreover, I would argue that there is more of a need to improve designs and to understand design effects on stated preferences than there is a need for more sophisticated econometric models to disentangle the experimental effects of poorly designed studies. I would, however, acknowledge that some of the improved designs may require advanced econometric techniques to analyze the resultant data.




                                             References

Alberini, Anna. 1995a. "Optimal Designs for Discrete Choice Contingent Valuation Surveys: Single-Bound, Double-Bound, and Bivariate Models." Journal of Environmental Economics and Management Vol. 28, No. 3: 287-306.

Alberini, Anna. 1995b. "Willingness-to-Pay Models of Discrete Choice Contingent Valuation Survey Data." Land Economics Vol. 71, No. 1: 83-95.

Alberini, Anna, Kevin J. Boyle and Michael P. Welsh. 1998. "Design and Analysis of Nonmarket Valuation Data: Multiple Bids and Uncertainty." Unpublished paper.

Boyle, Kevin J., Richard C. Bishop, and Michael P. Welsh. 1985. "Starting Point Bias in Contingent Valuation Bidding Games." Land Economics Vol. 61, No. 2: 188-194.

Boyle, Kevin J., Thomas P. Holmes, Mario F. Teisl, Brian Roe, Shelley Phillips-Mills, and Genevieve Pullis. 1998. "Assessing Public Preferences for Timber Harvesting Using Conjoint Analysis: A Comparison of Response Formats." Selected paper, World Congress of Environmental Economists, www.worldcongress.feem.it.

Boyle, Kevin J., F. Reed Johnson, and Daniel W. McCollum. 1997. "Anchoring and Adjustment in Single-Bounded, Contingent-Valuation Questions." American Journal of Agricultural Economics Vol. 79, No. 5: 1495-1500.

Boyle, Kevin J., Hugh F. MacDonald, Hsiang-tai Cheng, and Daniel W. McCollum. 1998. "Bid Design and Yea Saying in Single-Bounded, Dichotomous-Choice Questions." Land Economics Vol. 74, No. 1: 49-64.

Cooper, Joseph C. 1993. "Optimal Bid Selection for Dichotomous Choice Contingent Valuation Surveys." Journal of Environmental Economics and Management Vol. 24, No. 1: 25-40.

Cooper, Joseph C., and John Loomis. 1992. "Sensitivity of Willingness-to-Pay Estimates to Bid Designs in Dichotomous Choice Contingent Valuation Models." Land Economics Vol. 68, No. 2: 211-224.

Desvousges, William H., V. Kerry Smith, and Matthew P. McGivney. 1983. "A Comparison of Alternative Approaches for Estimating Recreation Benefits of Water Quality Improvement." Research Triangle Institute. Report to the U.S. Environmental Protection Agency, EPA-230-05-83-001, Washington, D.C.

Duffield, John W., and David A. Patterson. 1991. "Inference and Optimal Design for a Welfare Measure in Dichotomous Choice Contingent Valuation." Land Economics Vol. 67, No. 2: 225-239.

Hanemann, W. Michael, John Loomis and Barbara Kanninen. 1991. "Statistical Efficiency of Double-Bounded Dichotomous Choice Contingent Valuation." American Journal of Agricultural Economics Vol. 73, No. 4: 1255-1263.

Herriges, Joseph A., and Jason A. Shogren. 1996. "Starting Point Bias in Dichotomous Choice Valuation with Follow-up Questioning." Journal of Environmental Economics and Management Vol. 30, No. 1: 112-131.

Johnson, F. Reed, and William H. Desvousges. 1997. "Estimating Stated Preferences with Rated-Pair Data: Environmental, Health, and Employment Effects of Energy Programs." Journal of Environmental Economics and Management Vol. 34, No. 1: 79-99.

Kanninen, Barbara J. 1993a. "Design of Sequential Experiments for Contingent Valuation Studies." Journal of Environmental Economics and Management Vol. 25, No. 1: S1-S11.

Kanninen, Barbara J. 1993b. "Optimal Experimental Design for Double-Bounded Dichotomous Choice Contingent Valuation." Land Economics Vol. 69, No. 2: 138-146.

Lareau, T. J., and Douglas A. Rae. 1986. "Valuing WTP for Diesel Odor Reductions: An Application of Contingent Ranking Technique." Southern Economic Journal: 728-742.

Lichtenstein, Sarah, and Paul Slovic. 1973. "Response-Induced Reversals of Preference in Gambling: An Extended Replication in Las Vegas." Journal of Experimental Psychology Vol. 3: 16-20.

MacKenzie, John. 1993. "A Comparison of Contingent Preference Models." American Journal of Agricultural Economics Vol. 75, No. 3: 593-603.

NOAA Contingent Valuation Panel. 1993. "Natural Resource Damage Assessments Under the Oil Pollution Act of 1990." Federal Register Vol. 58, No. 10: 4601-4614.

Nyquist, H. 1990. "Optimal Designs of Discrete Response Experiments in Contingent Valuation Studies." Review of Economics and Statistics Vol. 74, No. 3: 559-563.

Peterson, George L., and Thomas C. Brown. 1998. "Economic Valuation by the Method of Paired Comparison, with Emphasis on Evaluation of the Transitivity Axiom." Land Economics Vol. 74, No. 2: 240-261.

Roe, Brian, Kevin J. Boyle and Mario F. Teisl. 1996. "Using Conjoint Analysis to Derive Estimates of Compensating Variation." Journal of Environmental Economics and Management Vol. 31, No. 2: 145-159.

Rowe, Robert D., Ralph C. d'Arge, and David S. Brookshire. 1980. "An Experiment in the Value of Visibility." Journal of Environmental Economics and Management Vol. 7, No. 1: 1-19.

Rowe, Robert D., William D. Schulze, and William F. Breffle. 1996. "A Test for Payment Card Biases." Journal of Environmental Economics and Management Vol. 31, No. 2: 178-185.

Samples, Karl C. 1985. "A Note on the Existence of Starting Point Bias in Iterative Bidding Games." Western Journal of Agricultural Economics Vol. 10, No. 1: 32-40.

Shapiro, B. P. 1968. "The Psychology of Pricing." Harvard Business Review Vol. 46: 14-25.

Slovic, Paul, and Sarah Lichtenstein. 1971. "Comparisons of Bayesian and Regression Approaches to the Study of Information Processing in Judgment." Organizational Behavior and Human Performance Vol. 44: 62-68.

Thayer, Mark A. 1981. "Contingent Valuation Techniques for Assessing Environmental Improvements: Further Evidence." Journal of Environmental Economics and Management Vol. 8, No. 1: 27-43.

Tversky, Amos, and Daniel Kahneman. 1982. "The Framing of Decisions and the Psychology of Choice." In Question Framing and Response Consistency, R. M. Hogarth (ed). San Francisco: Jossey-Bass Publishers.

Welsh, Michael P., and Gregory L. Poe. 1998. "Elicitation Effects in Contingent Valuation: Comparisons to a Multiple Bounded Discrete Choice Approach." Journal of Environmental Economics and Management Vol. 36, No. 2: 170-185.

				