Sheet1 - NREL


Page | Comment scope | Comment text | Author

Page 3 — "However, a high-quality evaluation should identify and mitigate all major sources of uncertainty so that research consumers can fully assess the research rigor."
Comment: This seems to combine two separate thoughts. Identification of all major sources of uncertainty allows assessment of research rigor; mitigation is the achievement of a high degree of rigor. You might want to re-word this. (Bill)

Page 3 — "The bulk of this chapter describes methods for minimizing and quantifying sampling error."
Comment: You just told me a high-quality evaluation should go well beyond sampling error, but anything beyond sampling error is relegated to an appendix — seems inconsistent. (Bill)

Page 4 — "weighted estimates"
Comment: Perhaps add "use of" or "application of"? Just to make the wording flow better. (Bill)

Page 7 — "Precision provides convenient shorthand for expressing the interval believed to contain the estimator (for example, if the estimate is 530 kWh, and the relative precision level is 10%, then the interval is 530 ± 53 kWh).[1]"
Comment: Re: Footnote 4 — Don't most people automatically align high precision with a narrow interval? The footnote introduces a confusion I can't remember ever encountering. I recommend deleting the footnote, since "high" and "low" don't appear in the definition. (Bill)
Reply: The counter-intuitive definition should be familiar to the intended audience for this document.

Page 9 — "If the estimated outcome is large relative to its standard error, the estimator will tend to have a low → small relative precision value at a given confidence level. (Low → Small precision values are desirable.) However …"
Comment: Can we avoid the use of "high" and "low"? "Small" works in the context of "large" used in the next sentence. (Bill)

Page 10 — "appendix"
Comment: "appendices", or specify which appendix (A, B, or C). (Bill)

Page 11 — "reporting domain"
Comment: Defined term? Others seem to be highlighted. (Bill)

Page 18 — sample-size equation (garbled in the source; apparently of the form n = (z · cv / rel. precision)², for the ratio estimator)
Comment: Is the "y" term defined? (Bill)

Page 39 — "between 1.84 and 2.216 → 2.2 hours per day."
Comment: Rounding makes sense in a practical sense, but using two decimal places better illustrates the math. (Bill)

Page 53 — "Sample Size"
Comment: Perhaps should be "Sample Size after FPC".
Comment: Sources of Systematic Error — One important source that is not discussed is the use of engineering assumptions, stipulated or deemed values that introduce bias into the estimate of savings. Another source of bias not mentioned is the use of statistical estimators that are known to be biased except under restrictive assumptions, such as the commonly used ratio estimator. Unlike many types of bias, the bias of the ratio estimator can be controlled by setting the sample size to a large enough value, but it could be an issue with small strata. (Julie)

Comment: It should be noted that systematic measurement error at the meter/logger or project level may not introduce bias into the overall sample estimate if the errors average out over the parent population of the sample units. (Julie)

Comment: Random Error — One source of random error not mentioned is the error term in a regression model, which is assumed to be random. (Julie)

Comment: Measurement Error — The following statement (second sentence) is not true and should be deleted or revised: "Many evaluation studies do not report any uncertainty measures besides a sampling error-based confidence interval for estimated energy or demand savings values. This is misleading because it suggests that: (1) the confidence interval describes the total of all uncertainty sources (which is incorrect), or (2) the other sources of uncertainty are not important relative to sampling error. Sometimes, however, uncertainty due to measurement and other systematic sources of error can be significant." (p. 32) (Julie)

Comment: One omission from this and most discussions of uncertainty in EM&V is the treatment of overall statistical accuracy (mean square error) in terms of the combined effect of random and systematic sources of error. The effect of bias on the validity of confidence levels depends on the relative magnitudes of the bias and the standard error (precision). Guidelines have been developed to assure that bias is sufficiently controlled to maintain overall accuracy. The tradeoff between bias and precision is an important part of this neglected topic. The preference for the biased ratio estimator over the unbiased mean-per-unit estimator hinges on this trade-off. In fact, there are a number of biased estimators that are, under certain assumptions, more accurate than standard unbiased estimators. On the other hand, the essence of the argument in favor of the use of proxy variables as opposed to assumed values is the expectation that the potential bias of the latter will outweigh the imprecision of the former. (Julie)
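
Julie's point about mean square error can be made concrete with a small simulation. The sketch below (plain Python; the population model, sample size, and replication count are invented for illustration, not taken from the chapter) estimates bias, variance, and MSE = bias² + variance for the biased ratio estimator and the unbiased mean-per-unit estimator:

```python
import random, statistics

random.seed(1)

# Invented population: y roughly proportional to x, which favours the ratio estimator.
N = 200
x = [random.uniform(10, 100) for _ in range(N)]
y = [2.0 * xi + random.gauss(0, 10) for xi in x]
Y_total, X_total = sum(y), sum(x)

n = 20        # sample size per replication (illustrative)
reps = 10_000
ratio_est, mpu_est = [], []
for _ in range(reps):
    idx = random.sample(range(N), n)
    xbar = sum(x[i] for i in idx) / n
    ybar = sum(y[i] for i in idx) / n
    ratio_est.append(ybar / xbar * X_total)  # biased ratio estimator of the total
    mpu_est.append(ybar * N)                 # unbiased mean-per-unit estimator

for name, est in [("ratio", ratio_est), ("mean-per-unit", mpu_est)]:
    bias = statistics.fmean(est) - Y_total
    var = statistics.pvariance(est)
    print(f"{name:14s} bias={bias:9.1f}  var={var:12.1f}  MSE={bias**2 + var:12.1f}")
```

When y is roughly proportional to x, as in this invented population, the ratio estimator's small bias is more than offset by its much smaller variance, which is exactly the trade-off described above.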

Comment: Page 10: "The most common regulatory requirement for precision is at the portfolio level" — Experience in New England, NY, and the mid-Atlantic has been that PAs specify a precision requirement for each study (of one program) that conforms to established standards endorsed by state regulators (usually +/- 10% at 90% confidence). For purposes of the wholesale forward capacity markets, an overall level of confidence/precision is required on the demand reduction value being bid into the market; however, PAs determine this overall value through a mathematical derivation that accounts for the precision of individual program savings (and the relative weight of each program in the portfolio). Such derivations are conducted by expert statisticians retained by the PAs. (Julie)
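
The "mathematical derivation" mentioned above is not spelled out here; a minimal sketch of one common approach (plain Python; it assumes independent program samples so that variances add, and all program names and figures are hypothetical):

```python
import math

# Hypothetical program-level results: savings (MWh), relative precision at 90% confidence.
programs = {
    "Program A": (1200.0, 0.08),
    "Program B": (800.0, 0.12),
    "Program C": (500.0, 0.15),
}
z = 1.645  # 90% two-sided confidence

total_savings = sum(s for s, _ in programs.values())

# Recover each program's standard error from its relative precision
# (se = savings * rel_precision / z), then combine assuming independence:
portfolio_var = sum((s * rp / z) ** 2 for s, rp in programs.values())
portfolio_rp = z * math.sqrt(portfolio_var) / total_savings
print(f"portfolio: {total_savings:.0f} MWh +/- {100 * portfolio_rp:.1f}% at 90% confidence")
```

Because the program errors are assumed independent, the portfolio-level precision comes out tighter than any single program's, which is the "insurance" effect noted in the comments that follow.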

Comment: Sample design at the portfolio level is usually not practical because studies are typically conducted at the program level over many years. The scope of each study and the population parameters are often not determined until the issuance of an RFP. The sample design for each study will also depend on changing budgets, budget variances from completed studies, the methodological approach of the contractor, and the actual (tracked) versus planned amount of program savings and its relative contribution to the entire portfolio. (Julie)

Comment: Setting precision criteria at the study level has two distinct advantages. First, it allows the flexibility required by the considerations presented in the previous bullet. Second, it provides a measure of insurance of compliance with aggregate portfolio precision requirements even if some studies do not achieve the desired level of precision. Of course, this insurance carries a cost, but the risk management benefit is probably worth it. (Julie)
