STANDARDIZED INFECTION RATIO AND RATE/RATIO COMPARISONS

Adapted from “Methods of Comparing Nosocomial Infection Rates” by David H. Culver, PhD, Chief, Statistics and Information Systems Branch, Hospital Infections Program, Centers for Disease Control and Prevention; presented at a SHEA pre-conference workshop in April 1996.

In this handout we will discuss the comparison of surgical site infection (SSI) rates, define the Standardized Infection Ratio (SIR), and discuss the comparison of SIRs. The hypothesis testing methods described here are general in the sense that they apply to the comparison of any two proportions. Hence, the same methods can be used, for example, to perform internal comparisons of SSI rates or SIRs between surgeons, or comparisons of the SSI rates or SIRs of the same surgeon at two different points in time, keeping in mind that all comparisons must be done on risk-stratified SSI data.

I. Comparing SSI Rates Within a Particular Procedure-Risk Category of the SP Component

To illustrate the statistical methods, let's assume that we have been following cardiac surgeries and coronary artery bypass grafts under the Surgical Patient (SP) component and that the following report has been prepared on the SSI experience of the patients of a team of our cardiac surgeons, Team A:

Table 1: Infection Control Report -- Team A

Procedure   Risk Category   Number of SSIs   Number of Operations   SSI Rate   NNIS Rate
CARD        0,1                   3                  80               3.75        2.02
CARD        2,3                   3                  20              15.00        5.29
CBGB        0                     1                  10              10.00        1.59
CBGB        1                    10                 230               4.35        3.15
CBGB        2,3                   5                  60               8.33        5.76
TOTAL                            22                 400               5.50        ----

Team A performed 400 operations over a three-month time period, and their patients experienced 22 SSIs, for an overall SSI rate of 5.5%; but we know that this overall SSI rate is not a comparative rate. In order to compare their rates with those of other cardiac surgical teams, individual surgeons, and NNIS, we have partitioned their operations by procedure and risk index and calculated SSI rates in each procedure-risk category. In each of the procedure-risk categories, notice that their SSI rate exceeds the pooled mean rate of NNIS. But also notice that the volume of surgery done by Team A over this short time span was quite low, less than 100 operations in all but one of the categories (CBGB-1). One or at most two fewer SSIs in any of these categories would have brought their SSI rate down to, or below the level of, the NNIS rate. If Team A's surgical volume had been ten times greater (4,000 operations), and their rates were the same as in Table 1, then perhaps we would feel that there is compelling evidence that their rates exceed those of NNIS and signal a need for further investigation. However, based on the relatively small sample sizes at hand, can we draw such a conclusion?

To put it another way, if we were to present Table 1 to Team A and point out that their SSI rate following cardiac surgery on patients with fewer than two risk factors (CARD-0,1) was nearly double the NNIS rate (3.75% vs. 2.0%), their reaction might well be: "3.75% -- so what! That's only three months of surgery. Over the long run, our rate is only 2%." How can we respond to such a claim?

Statisticians have developed, and epidemiologists use, a method for answering this question called a hypothesis test. Let us assume for the moment that this claim (hypothesis) is true and that among a large number of operations performed by Team A, perhaps several years' worth of surgical experience, their SSI rate in this procedure-risk category would indeed be 2%. The question posed by the epidemiologist is then the following: If we were to select randomly a sample of 80 procedures from this large pool of operations, what are the chances that three or more of those 80 operations would result in an SSI?


In other words, what is the probability that by chance alone we could obtain an infection rate (3/80 = 3.75%) as large or larger than the one experienced by Team A over the past quarter? In short, just how unusual would Team A's SSI rate over the past quarter be, if their long-term rate is really only 2%? Figure 1 is a graphical depiction of this hypothetical sampling experiment and the question posed by the epidemiologist.

Figure 1.
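This sampling question can be checked directly with a short calculation. The sketch below is not part of the original handout; it simply evaluates the chance of seeing three or more SSIs among 80 operations when the long-term SSI rate is assumed to be 2%, using the binomial distribution from scipy.

    # A minimal sketch (not from the handout): probability of 3 or more SSIs
    # among 80 operations when the assumed long-term SSI rate is 2%.
    from scipy.stats import binom

    n, true_rate = 80, 0.02
    p_three_or_more = binom.sf(2, n, true_rate)   # Pr(X >= 3) = 1 - Pr(X <= 2)
    print(round(p_three_or_more, 2))              # roughly 0.2, i.e., not a rare event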


The probability of obtaining three or more SSIs in a sample of 80 operations, based entirely upon the assumption that Team A's long-term rate is NOT different from that of NNIS (2%), is called the "p-value." If this p-value is very small, implying that the recent experience of Team A's patients was very unusual under this assumption, then we regard it as evidence that the claim of Team A is wrong. Indeed, the smaller the p-value, the stronger is the evidence against the claim and in support of the conclusion that the long-term SSI rate of Team A must really be larger than the NNIS rate.

How small must the p-value be before we conclude that the SSI rate of Team A is "significantly greater" than the NNIS rate? A p-value less than 0.05 (a 1 in 20 chance) is often chosen as a convenient cut point for rejecting the claim of no difference between the rates (long-term rate of Team A vs. NNIS rate), but this choice is arbitrary. Thus, while we may use this convenient cut point to illustrate the interpretation process, in practice we simply report the p-value and interpret its value as a measure of the strength of the evidence against the hypothesis or claim being tested.

How do we perform this hypothesis test and calculate the resulting p-value?


Z-test

Let

    r = (i / n) * 100    be your rate (e.g., Team A's rate)

and let

    R = (I / N) * 100    be the NNIS rate.

Note: * means multiply by or "times".

In this notation,

    i = no. of SSIs in your rate,
    n = no. of operations in your rate,
    I = no. of SSIs in the NNIS rate, and
    N = no. of operations in the NNIS rate.

Rates r and R are the two proportions that we wish to compare. Now calculate the following Z-statistic:

    Z = (|r - R| - [50*(1/n + 1/N)]) / sqrt(P*(100 - P)*(1/n + 1/N))          (Formula 1)

where

    P = ((i + I) / (n + N)) * 100

is the result of pooling your rate with the NNIS rate. In the numerator of Formula 1, |r - R| is the absolute value of the difference between the two rates, i.e., ignore the sign (+ or -) of the difference in the rates. The second term in the numerator, [50*(1/n + 1/N)], is called the continuity correction or Yates correction. If the numerator of the Z-statistic ends up negative, i.e., |r - R| < [50*(1/n + 1/N)], just set Z = 0.

If the null hypothesis (no significant difference between the rates) is true, the value of Z should be very small. Large values of Z indicate strong evidence against the null hypothesis.

The value of Z calculated from Formula 1 is compared against the unit-normal distribution (also called the Z-curve, standardized normal curve, or bell curve) to obtain its associated p-value. The bell curve is called the reference distribution for the Z-statistic. The p-value is the area under the unit-normal distribution to the right of the Z-statistic. These areas can be obtained from Table 2.


Table 2: Areas (Pr(Z > z)) Under the Unit-Normal Distribution

  z     x=0       x=1       x=2       x=3       x=4       x=5       x=6       x=7       x=8       x=9
0.0x  0.500000  0.496011  0.492022  0.488034  0.484047  0.480061  0.476078  0.472097  0.468119  0.464144
0.1x  0.460172  0.456205  0.452242  0.448283  0.444330  0.440382  0.436441  0.432505  0.428576  0.424655
0.2x  0.420740  0.416834  0.412936  0.409046  0.405165  0.401294  0.397432  0.393580  0.389739  0.385908
0.3x  0.382089  0.378280  0.374484  0.370700  0.366928  0.363169  0.359424  0.355691  0.351973  0.348268
0.4x  0.344578  0.340903  0.337243  0.333598  0.329969  0.326355  0.322758  0.319178  0.315614  0.312067
0.5x  0.308538  0.305026  0.301532  0.298056  0.294599  0.291160  0.287740  0.284339  0.280957  0.277595
0.6x  0.274253  0.270931  0.267629  0.264347  0.261086  0.257846  0.254627  0.251429  0.248252  0.245097
0.7x  0.241964  0.238852  0.235762  0.232695  0.229650  0.226627  0.223627  0.220650  0.217695  0.214764
0.8x  0.211855  0.208970  0.206108  0.203269  0.200454  0.197663  0.194895  0.192150  0.189430  0.186733
0.9x  0.184060  0.181411  0.178786  0.176186  0.173609  0.171056  0.168528  0.166023  0.163543  0.161087
1.0x  0.158655  0.156248  0.153864  0.151505  0.149170  0.146859  0.144572  0.142310  0.140071  0.137857
1.1x  0.135666  0.133500  0.131357  0.129238  0.127143  0.125072  0.123024  0.121000  0.119000  0.117023
1.2x  0.115070  0.113139  0.111232  0.109349  0.107488  0.105650  0.103835  0.102042  0.100273  0.098525
1.3x  0.096800  0.095098  0.093418  0.091759  0.090123  0.088508  0.086915  0.085343  0.083793  0.082264
1.4x  0.080757  0.079270  0.077804  0.076359  0.074934  0.073529  0.072145  0.070781  0.069437  0.068112
1.5x  0.066807  0.065522  0.064255  0.063008  0.061780  0.060571  0.059380  0.058208  0.057053  0.055917
1.6x  0.054799  0.053699  0.052616  0.051551  0.050503  0.049471  0.048457  0.047460  0.046479  0.045514
1.7x  0.044565  0.043633  0.042716  0.041815  0.040930  0.040059  0.039204  0.038364  0.037538  0.036727
1.8x  0.035930  0.035148  0.034380  0.033625  0.032884  0.032157  0.031443  0.030742  0.030054  0.029379
1.9x  0.028717  0.028067  0.027429  0.026803  0.026190  0.025588  0.024998  0.024419  0.023852  0.023295
2.0x  0.022750  0.022216  0.021692  0.021178  0.020675  0.020182  0.019699  0.019226  0.018763  0.018309
2.1x  0.017864  0.017429  0.017003  0.016586  0.016177  0.015778  0.015386  0.015003  0.014629  0.014262
2.2x  0.013903  0.013553  0.013209  0.012874  0.012545  0.012224  0.011911  0.011604  0.011304  0.011011
2.3x  0.010724  0.010444  0.010170  0.009903  0.009642  0.009387  0.009137  0.008894  0.008656  0.008424
2.4x  0.008198  0.007976  0.007760  0.007549  0.007344  0.007143  0.006947  0.006756  0.006569  0.006387
2.5x  0.006210  0.006037  0.005868  0.005703  0.005543  0.005386  0.005234  0.005085  0.004940  0.004799
2.6x  0.004661  0.004527  0.004396  0.004269  0.004145  0.004025  0.003907  0.003793  0.003681  0.003573
2.7x  0.003467  0.003364  0.003264  0.003167  0.003072  0.002980  0.002890  0.002803  0.002718  0.002635
2.8x  0.002555  0.002477  0.002401  0.002327  0.002256  0.002186  0.002118  0.002052  0.001988  0.001926
2.9x  0.001866  0.001807  0.001750  0.001695  0.001641  0.001589  0.001538  0.001489  0.001441  0.001395
3.0x  0.001350  0.001306  0.001264  0.001223  0.001183  0.001144  0.001107  0.001070  0.001035  0.001001
3.1x  0.000968  0.000935  0.000904  0.000874  0.000845  0.000816  0.000789  0.000762  0.000736  0.000711
3.2x  0.000687  0.000664  0.000641  0.000619  0.000598  0.000577  0.000557  0.000538  0.000519  0.000501
3.3x  0.000483  0.000466  0.000450  0.000434  0.000419  0.000404  0.000390  0.000376  0.000362  0.000349
3.4x  0.000337  0.000325  0.000313  0.000302  0.000291  0.000280  0.000270  0.000260  0.000251  0.000242
3.5x  0.000233  0.000224  0.000216  0.000208  0.000200  0.000193  0.000185  0.000178  0.000172  0.000165
3.6x  0.000159  0.000153  0.000147  0.000142  0.000136  0.000131  0.000126  0.000121  0.000117  0.000112
3.7x  0.000108  0.000104  0.000100  0.000096  0.000092  0.000088  0.000085  0.000082  0.000078  0.000075
3.8x  0.000072  0.000069  0.000067  0.000064  0.000062  0.000059  0.000057  0.000054  0.000052  0.000050
3.9x  0.000048  0.000046  0.000044  0.000042  0.000041  0.000039  0.000037  0.000036  0.000034  0.000033
4.0x  0.000032  0.000030  0.000029  0.000028  0.000027  0.000026  0.000025  0.000024  0.000023  0.000022


Example 1: CARD-0,1

r = 3/80 * 100 = 3.75%          (i = 3,   n = 80)
R = 103/5088 * 100 = 2.02%      (I = 103, N = 5088)

P = (3 + 103)/(80 + 5088) * 100 = 2.05%

and

Z = (|3.75 - 2.02| - 50*(1/80 + 1/5088)) / sqrt(2.05*(100 - 2.05)*(1/80 + 1/5088))
  = (1.73 - 0.63) / sqrt(2.549)
  = 0.69

Hence, p-value = 0.25.

Example 2: CARD-2,3

r = 3/20 * 100 = 15.00%         (i = 3,  n = 20)
R = 63/1191 * 100 = 5.29%       (I = 63, N = 1191)

P = (3 + 63)/(20 + 1191) * 100 = 5.45%

and

Z = (|15.00 - 5.29| - 50*(1/20 + 1/1191)) / sqrt(5.45*(100 - 5.45)*(1/20 + 1/1191))
  = (9.71 - 2.54) / sqrt(26.1771)
  = 1.40

Hence, p-value = 0.08.
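These hand calculations can also be scripted. The sketch below is not part of the original handout; it evaluates Formula 1 in Python and uses scipy's normal survival function in place of a Table 2 lookup. The function name z_test is ours, chosen for illustration.

    # A minimal sketch of Formula 1 (Z-test with the Yates continuity correction).
    from math import sqrt
    from scipy.stats import norm

    def z_test(i, n, I, N):
        """Compare your SSI rate (i/n) with the NNIS rate (I/N); rates in percent."""
        r = i / n * 100.0
        R = I / N * 100.0
        P = (i + I) / (n + N) * 100.0                          # pooled rate
        numerator = abs(r - R) - 50.0 * (1.0 / n + 1.0 / N)    # continuity correction
        z = max(numerator, 0.0) / sqrt(P * (100.0 - P) * (1.0 / n + 1.0 / N))
        return z, norm.sf(z)      # p-value = area to the right of Z, as in Table 2

    print(z_test(3, 80, 103, 5088))   # Example 1: Z near 0.69, p near 0.25
    print(z_test(3, 20, 63, 1191))    # Example 2: Z near 1.40, p near 0.08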


Try calculating the Z-statistic and looking up the p-value yourself:

Example 3: CBGB-0

r =                             (i =    , n =    )
R = 1.59%*                      (I = 13, N = 819)

P =

and

Z =

Hence, p-value =

*The NNIS data should be obtained from the latest published report.

Underlying Assumptions of the Z-Test

In order to use the Z-test, the data being compared must be approximately normally distributed. In general, when the sample sizes (n, N) are greater than 30, this will be the case. We can easily check whether our data meet this condition by displaying them in a 2x2 table and calculating the minimum expected cell frequency (explained below). If the minimum expected cell frequency is greater than 1, then we have evidence that our data are distributed normally and we can use the p-value obtained from the Z-test.

Displaying Data in a 2x2 Table

Whenever you compare two proportions (or percentages), the data can always be displayed in a 2x2 table format:

              No. Observed w/ SSI     No. w/o SSI         No. of Oper.
Hospital               i                  n-i              n   [Row Total]
NNIS                   I                  N-I              N   [Row Total]
                 [Column Total]      [Column Total]       [Table Total]

The numbers in the four cells of this table (i, n-i, I, N-I) are called the observed cell frequencies. The assumptions (i.e., statistical theory) that underlie the Z-test of Formula 1 are valid unless one of the sample sizes (n, N) is so small that the expected frequency (e) of SSI in one of the four cells is less than 1. An easy way to check this condition is to calculate the expected frequency for the cell with the smallest number in the table, i.e., the minimum expected frequency (emin), using the formula below:

    emin = (Row Total * Column Total) / Table Total

Once the value of this cell is known, the rest of the cells can be filled in, since the marginals (row, column, and table totals) do not change.


Example 1: CARD-0,1

Observed frequencies:
              No. w/ SSI    No. w/o SSI    No. of Oper.
Hospital           3             77             80
NNIS             103           4985           5088
Total            106           5062           5168

emin = (80 * 106) / 5168 = 1.64

Expected frequencies:
              No. w/ SSI    No. w/o SSI    No. of Oper.
Hospital         1.64          78.36            80
NNIS           104.36        4983.64          5088
Total            106           5062           5168

Example 2: emin = (20 * 66) / 1211 = 1.09

Example 3: emin = (10 * 14) / 829 = 0.17

As you can see, the Z-test of Formula 1 is valid in Examples 1 and 2, but not in Example 3, because emin<1.


Fisher's Exact Test

The Fisher's Exact Test is an alternative hypothesis testing procedure whose assumptions are always met. Therefore, it can always be used, even when the minimum expected cell frequency is less than 1. However, since the calculation of its p-value is beyond human patience, it requires us to use good statistical software. The reference distribution used in Fisher's Exact Test is the hypergeometric distribution, rather than the unit-normal distribution.
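For readers who prefer to script the check, the sketch below is not from the handout; it computes the minimum expected cell frequency of a 2x2 table and, when that frequency falls below 1, obtains the one-tailed p-value from scipy's implementation of Fisher's Exact Test. The helper name min_expected is ours.

    # A minimal sketch: minimum expected cell frequency, then Fisher's Exact Test.
    from scipy.stats import fisher_exact

    def min_expected(i, n, I, N):
        """Minimum expected cell frequency of the 2x2 table comparing i/n with I/N."""
        table_total = n + N
        row_totals = (n, N)
        col_totals = (i + I, (n - i) + (N - I))
        return min(r * c / table_total for r in row_totals for c in col_totals)

    # Example 3 (CBGB-0): emin is about 0.17, so the Z-test is not valid here.
    print(round(min_expected(1, 10, 13, 819), 2))

    # One-tailed Fisher's Exact Test for the same comparison; Table 3 reports 0.16.
    _, p = fisher_exact([[1, 10 - 1], [13, 819 - 13]], alternative="greater")
    print(round(p, 2))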

Let's return to our Infection Control Report for Team A. Here is a version of that report showing the minimum expected frequency (emin) and the p-values obtained from the Z-test and Fisher's Exact Test for each of the five procedure-risk categories:

Table 3: Infection Control Report (with p-values) -- Team A

Procedure-Risk   SSI Rate (%)       SSI Rate (%)         Min Exp   p-value    p-value
Category         Team A             NNIS                 Freq      (Z-test)   (Fisher)
CARD-0,1         3/80   =  3.75     103/5088   = 2.0     1.64      0.25       0.23
CARD-2,3         3/20   = 15.00     63/1191    = 5.3     1.09      0.08       0.09
CBGB-0           1/10   = 10.00     13/819     = 1.6     0.17      0.21       0.16
CBGB-1           10/230 =  4.35     1010/32065 = 3.1     7.24      0.20       0.19
CBGB-2,3         5/60   =  8.33     446/7745   = 5.8     3.47      0.28       0.26
TOTAL            22/400 =  5.50     ----                 ----      ----       ----

As you can see, none of these p-values is lower than the arbitrary cut point of 0.05, so we would say that none of Team A's SSI rates is "significantly greater" than the NNIS rates. Therefore, given the number of operations performed by Team A during this quarter, we cannot conclude that their long-term rates are really larger than those of NNIS.

Use of Epi Info

There are two programs in Epi Info that are useful in implementing the methods of this section: EPITABLE and STATCALC. Both require you to enter the observed cell frequencies into a 2x2 table:

              No. Observed w/ SSI    No. w/o SSI    No. of Oper.
Hospital               i                 n-i              n
NNIS                   I                 N-I              N

In the EPITABLE program, follow the path Probability ---> Fisher's Exact Test, and enter the observed frequencies into the cells of the 2x2 table. Press Calculate and the one- and two-tailed p-values of the Fisher's Exact Test are displayed; report the one-tailed p-value. The output for CBGB-0 is shown below:


Alternatively, from the EPITABLE program, follow the path Compare ---> Proportion ---> Percentage ---> choose 2 samples. Then enter the rates (r, R) and the sample sizes (n, N) into the appropriate boxes. When e < 5, the Yates corrected chi-square statistic is calculated and its associated p-value is displayed. The Yates corrected chi-square statistic is just the square of the Z-statistic (chi-square = Z^2) of Formula 1, and the p-value associated with this chi-square statistic is twice the p-value associated with the Z-statistic. Hence, divide the p-value by 2 and you will have the one-tailed p-value associated with the Z-statistic. The output for CARD-0,1 is shown below:

When e ≥ 5, the continuity correction in Formula 1 is ignored and the uncorrected Pearson chi-square statistic and its associated p-value are calculated and displayed. The output for CBGB-1 is shown next:


In the STATCALC program, choose Tables (2x2, 2xn) and enter the observed frequencies into the cells of the table. Press Enter to calculate. You'll note that many statistics are given, including the odds ratio and relative risk and their confidence intervals. Ignore these. Three chi-square statistics are listed: uncorrected, Mantel-Haenszel, and Yates corrected. As mentioned above, the Yates corrected chi-square is the square of the Z-statistic of Formula 1, and its p-value is twice the p-value associated with the Z-statistic. Therefore, simply divide the Yates corrected chi-square p-value by 2 and you'll have the one-tailed p-value associated with the Z-statistic. When e < 5, the Fisher's Exact Test p-values are also calculated and displayed; use the one-tailed p-value. The output for CBGB-0 is shown below:


Other Applications of These Methods

The Z-test of Formula 1 and Fisher's Exact Test are appropriate whenever two proportions (i.e., percentages) are being compared, provided a denominator-based sampling design, such as cohort sampling, has been used to obtain the data. Consequently, these methods can be used to perform internal comparisons of SSI rates, such as comparisons between two surgeons or between two time periods for the same surgeon. As always, keep in mind that such comparisons must be done for a specific procedure and risk category (i.e., only on the risk-stratified SSI rates).

In the ICU and HRN surveillance components, the device utilization ratios are proportions, even though both the numerator and the denominator involve the counting of patient-days. As a result, the methods of this section can be used to perform both external and internal comparisons of these measures of device utilization.

II. The Standardized Infection Ratio: A Useful Risk-Adjusted Summary Measure for Surgical Site Infections

Definition of the SIR

There is another tool available to us for comparing SSI rates called the Standardized Infection Ratio (SIR). To introduce the SIR, let's return to Table 3, the infection control report for Cardiac Surgery Team A discussed earlier. For convenience, Table 3 is reproduced below:

Table 3: Infection Control Report (with p-values) -- Team A

Procedure   Risk Category   Number of SSIs   Number of Opers   SSI Rate   NNIS Rate   p-value
CARD        0,1                   3                 80           3.75       2.02       0.25
CARD        2,3                   3                 20          15.00       5.29       0.08
CBGB        0                     1                 10          10.00       1.59       0.16
CBGB        1                    10                230           4.35       3.15       0.20
CBGB        2,3                   5                 60           8.33       5.76       0.28
TOTAL                            22                400           5.50       ----       ----

The best method of comparing the SSI rates of Team A with those of NNIS is to do so within each of the procedure-risk categories, as illustrated in this table. If any of Team A's rates had been significantly higher than those of NNIS, we would have known immediately the type of procedure being performed and the group of patients for which the potentially excessive infections were occurring. This would be a useful starting point for further investigation.

Another method of comparing Team A's experience with that of NNIS is to focus on the 22 infections that occurred collectively among their 400 patients. How many infections would we have "expected" to occur among these patients, taking into account the type and number of procedures performed (100 CARD and 300 CBGB) and the risk categories of the patients? We can calculate the "expected" number of SSIs as follows. Cardiac surgery was performed on 80 patients in risk category 0,1. According to the pooled NNIS rate, the risk of an SSI for each of these patients was 2.02%. Hence, the expected number of SSIs among these 80 patients was 80 * 0.0202 = 1.6. Multiplying the number of operations performed by Team A by the NNIS rate in each row, we get the last column of Table 4. Summing the numbers in the last column, we see that the expected number of SSIs among all 400 operations performed by Team A was 13.6.

Table 4: Infection Control Report -- Team A

Procedure   Risk Category   Number of SSIs   Number of Opers   SSI Rate   NNIS Rate   p-value   Expected # of SSIs
CARD        0,1                   3                 80           3.75       2.02       0.25           1.6
CARD        2,3                   3                 20          15.00       5.29       0.08           1.1
CBGB        0                     1                 10          10.00       1.59       0.16           0.2
CBGB        1                    10                230           4.35       3.15       0.20           7.2
CBGB        2,3                   5                 60           8.33       5.76       0.28           3.5
TOTAL                            22                400           5.50       ----       ----          13.6

The ratio of the observed number of SSIs that occurred (22) to the expected number (13.6) is called the Standardized Infection Ratio.


    SIR = Observed Number of SSIs / Expected Number of SSIs = 22 / 13.6 = 1.62

The Standardized Infection Ratio is deceptively simple. It is an easy-to-interpret summary measure of the SSI experience of an individual surgeon, service, or hospital. Values that exceed 1.0 indicate that more infections occurred than were expected (and by how much), whereas values that are less than 1.0 indicate the opposite. For Team A, the 22 SSIs represent 62% more than were expected. In calculating the expected number of SSIs, we account for the type of procedures performed and the distribution of patients by risk index, i.e., case mix. Hence, the SIR is a risk-adjusted summary measure and can be used for comparative purposes. In contrast, the overall SSI rate (22/400 = 5.5% for Team A) is NOT a comparative rate and should not be used for comparative purposes.
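The expected count and the SIR are simple enough to compute in a few lines. The sketch below is not from the handout; it reproduces the Table 4 calculation for Team A in Python, with the category data hard-coded for illustration.

    # A minimal sketch of the expected-SSI and SIR calculation for Team A (Table 4).
    team_a = [
        # (procedure-risk category, observed SSIs, operations, NNIS rate in %)
        ("CARD-0,1", 3, 80, 2.02),
        ("CARD-2,3", 3, 20, 5.29),
        ("CBGB-0", 1, 10, 1.59),
        ("CBGB-1", 10, 230, 3.15),
        ("CBGB-2,3", 5, 60, 5.76),
    ]
    observed = sum(o for _, o, _, _ in team_a)                   # 22
    expected = sum(n * rate / 100 for _, _, n, rate in team_a)
    # expected is about 13.5 here; Table 4 rounds each row to one decimal and reports 13.6
    sir = observed / expected                                    # about 1.6
    print(observed, round(expected, 1), round(sir, 2))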

How can we use the SIR? First of all, it can be compared against its nominal value of 1.0 and a p-value calculated to determine whether the observed number of SSIs significantly exceeds the expected number of SSIs, or whether the discrepancy between them might well be explained by chance alone. This is an external comparison, since the nominal value of 1.0 represents perfect conformity with the pooled mean rates of NNIS, i.e., the number of observed SSIs = the number of expected SSIs.

Comparing a Standardized Infection Ratio Against Its Nominal Value of 1.0

Let

    O = observed number of SSIs,
    E = expected number of SSIs,

and

    SIR = O/E.

As illustrated for Team A, the expected number of SSIs is always calculated by multiplying the number of operations performed in each procedure-risk category by the NNIS rate of that procedure-risk category and summing the products. To test whether or not the SIR differs significantly from its nominal value of 1.0, there are two methods that can be used: a Z-test and the Poisson test.

Z-test

Let

    Z = 2 * (sqrt(O) - sqrt(E))          (Formula 2)

Ignore the sign (+ or -) of Z, i.e., if Z is negative, just drop the minus sign. Take the magnitude of Z and refer to the unit-normal distribution (Table 2) to obtain the p-value. If SIR > 1 (O > E), then the p-value indicates how strongly the data support the conclusion that the observed number of SSIs significantly exceeds the expected number of SSIs. Likewise, if SIR < 1 (O < E), then the p-value indicates the strength of the evidence for the conclusion in the opposite direction, that the observed number of SSIs is significantly below the expected number of SSIs.

For Team A, we get

    Z = 2 * (sqrt(22) - sqrt(13.6)) = 2.01

and the p-value = 0.02.

The number of SSIs that occurred (22) was significantly greater than would be expected based upon the aggregate experience of cardiac surgeons in NNIS hospitals. Hence, although none of the five procedure-risk category comparisons resulted in a small enough p-value for us to conclude that any of Team A's category-specific rates differed from those of NNIS, collectively there is evidence that an excess of SSIs may be occurring among their patients. Obviously, the collective evidence stems from the fact that a total of 400 operations were performed and each of their five category-specific SSI rates exceeded the corresponding rate of NNIS.

Poisson Test in Epi Info

The Z-test of Formula 2 gives an approximate p-value, which should be good for all practical purposes unless the expected number of SSIs (E) is less than 1. An exact p-value can be obtained by performing another type of statistical test known as a Poisson test. Once again, the name of this test comes from the fact that the reference distribution used to obtain the p-value is the Poisson distribution.

If SIR > 1, then the p-value of the Poisson test is Pr(X ≥ O), where X is assumed to have a Poisson distribution with mean equal to E, the expected number of SSIs. When SIR < 1, the p-value is Pr(X ≤ O), where again X is assumed to have a Poisson distribution with mean equal to E.
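Both tests of the SIR against 1.0 are easy to reproduce outside Epi Info. The sketch below is not from the handout; it applies Formula 2 and an exact Poisson calculation (via scipy) to Team A's totals, O = 22 and E = 13.6.

    # A minimal sketch: Z-test (Formula 2) and exact Poisson test of SIR against 1.0.
    from math import sqrt
    from scipy.stats import norm, poisson

    O, E = 22, 13.6
    z = 2 * (sqrt(O) - sqrt(E))          # Formula 2
    p_z = norm.sf(abs(z))                # area to the right of |Z|, as in Table 2
    p_poisson = poisson.sf(O - 1, E)     # Pr(X >= O), the exact p-value when SIR > 1
    print(round(z, 2), round(p_z, 2), round(p_poisson, 2))   # about 2.01, 0.02, 0.02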

It is easy to get the exact p-value of the Poisson test using Epi Info. In the EPITABLE program, follow the path Probability ---> Poisson and enter the value of O for the "observed number of events" and the value of E for the "expected number of events". The p-value is displayed as the "probability that the number of events found is" ≥ O (when O > E) or ≤ O (when O < E). For Team A, the Poisson test gives us a p-value of 0.02, which agrees with the Z-test (see the output on the top of p. 26).


Try calculating the Z-statistic and looking up the p-value yourself:

Example 4: Surgeon B performed 50 cholecystectomies with the following SSI experience:

Risk Category   #SSI   #Oper   Rate(1)   NNIS Rate(1)   Expected #SSI
0                 1      20      5.0        0.86
1                 2      20     10.0        1.81
2                 0       0     ----        3.55
3                 3      10     30.0        6.25
                ____   _____   _____
TOTAL             6      50     12.0

(1) # of SSIs per 100 operations

SIR =

Z =

p-value =

Poisson test: p-value =          (from the output on the bottom of p. 26)


Comparing Two Standardized Infection Ratios
The SIR is a very convenient summary measure of SSI experience. You can think of it as a risk-adjusted replacement for the "overall SSI rate" or the "clean wound SSI rate," neither of which is a comparative rate.

In addition to comparing individual SIRs against their nominal value of 1.0, two different SIRs can be compared. The SIRs of two surgeons, two surgical teams, or two hospitals can be compared. Likewise, the SIRs of a surgeon, team, or hospital over two different time periods can be compared. It is important to note that it does not matter that the case mix, i.e., the procedures performed and the distribution of patients into risk categories, may be vastly different for the SIRs that are to be compared. In calculating the expected number of SSIs for each SIR, proper adjustment is made for differences in case mix.

Using the data of Example 5, let's calculate and compare the SIRs of two orthopedic surgeons. Dr. X performed spinal fusions (FUSN) and laminectomies (LAM), while Dr. Y did knee prostheses (KPRO) and limb amputations (AMP). Eight SSIs occurred among each of the surgeons' patients, with unadjusted overall rates of 6.7% for Dr. Y and 1.8% for Dr. X. After stratifying these patients according to the risk index and calculating the procedure-risk SSI rates, we can determine if there is a significant difference in their SSI rates.

Example 5: Compare the SIRs of Two Orthopedic Surgeons

         Procedure-Risk   Observed                        NNIS      Expected
         Category         #SSIs      #Oper    Rate(1)     Rate(1)   #SSIs
Dr. X    FUSN-0               1         50      2.0       0.94         0.47
         FUSN-1,2,3           5        100      5.0       4.49         4.49
         LAM-0,1,2,3          2        300      0.7       1.16         3.48
                           ____       ____     ____                 _______
                         OX = 8        450      1.8                 EX = 8.44

Dr. Y    KPRO-0,1             3         70      4.3       1.04         0.73
         KPRO-2,3             2         20     10.0       2.65         0.53
         AMP-0,1,2,3          3         30     10.0       4.48         1.34
                           ____       ____     ____                 ________
                         OY = 8        120      6.7                 EY = 2.60

(1) # of SSIs per 100 operations

For each surgeon, if we multiply the number of operations performed by the surgeon by the NNIS rate, we get the last column of the table in Example 5 (for FUSN-0, for example, 50 * 0.0094 = 0.47), i.e., the expected number of SSIs. Summing, we see that the expected numbers of SSIs for Dr. X and Dr. Y are EX = 8.44 and EY = 2.60, respectively.


Calculating the SIRs, we get

    SIRX = 8 / 8.44 = 0.95

    SIRY = 8 / 2.60 = 3.08
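As a quick check of the Example 5 arithmetic, the sketch below (not part of the handout) computes the expected SSI counts and both SIRs from the stratified data.

    # A minimal sketch of the expected-SSI and SIR calculations in Example 5.
    dr_x = [(1, 50, 0.94), (5, 100, 4.49), (2, 300, 1.16)]   # (SSIs, operations, NNIS rate %)
    dr_y = [(3, 70, 1.04), (2, 20, 2.65), (3, 30, 4.48)]

    def sir(strata):
        observed = sum(o for o, _, _ in strata)
        expected = sum(n * rate / 100 for _, n, rate in strata)
        return observed, expected, observed / expected

    print(sir(dr_x))   # observed 8, expected about 8.44, SIR about 0.95
    print(sir(dr_y))   # observed 8, expected about 2.60, SIR about 3.1 (3.08 in the text)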

To compare the two SIRs, we can perform two tests: a Z-test, using a different formula, or a binomial test.

Z-test

To compare two SIRs to each other using a Z-test, use the following formula:

    Z = 2 * (sqrt(SIRY) - sqrt(SIRX)) / sqrt(1/EY + 1/EX)          (Formula 3)
Using the data of Example 5, this yields

    Z = 2 * (sqrt(3.08) - sqrt(0.95)) / sqrt(1/8.44 + 1/2.60) = 2.02

and p-value = 0.022 (from Table 2). Once again, this Z-test provides us with an approximate p-value.

Binomial Test in Epi Info

An exact p-value can be obtained by performing a binomial test using Epi Info or other statistical software. The name binomial test comes from the fact that the reference distribution is a binomial distribution rather than a unit-normal distribution. To perform this test, first find the larger of the two SIRs. In Example 5 it is SIRY = 3.08. Then the p-value of the binomial test comparing SIRX against SIRY is given by:

    p-value = Pr(U ≥ OY), where U ~ Binomial(OX + OY, p) and p = EY / (EX + EY).

This means that the reference distribution is a binomial distribution with "sample size" equal to OX + OY and "event probability" equal to p = EY / (EX + EY). The p-value is the probability of OY or more "events" occurring.

In the EPITABLE program of Epi Info, follow the path Probability ---> Binomial and enter OY for the "numerator," OX + OY for the "total observations," and p*100 for the expected percentage. The p-value is then displayed as the probability that the "number of cases" (i.e., "events") is greater than or equal to (≥) OY.


In Example 5, OY=8, OY+OX=16, and p*100=2.60/(2.60+8.44)*100=23.55. The binomial test yields a p-value of 0.019, in close agreement with the Z-test. The Epi Info output is shown below:
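The same exact binomial p-value can be reproduced outside Epi Info. The sketch below is not from the handout; it uses scipy's binomial survival function with the Example 5 quantities.

    # A minimal sketch of the exact binomial test comparing the two SIRs of Example 5.
    from scipy.stats import binom

    OX, OY = 8, 8
    EX, EY = 8.44, 2.60
    p = EY / (EX + EY)                       # about 0.2355, i.e., 23.55%
    p_value = binom.sf(OY - 1, OX + OY, p)   # Pr(U >= OY)
    print(round(p_value, 3))                 # about 0.019, as reported above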

Summary

In this handout we have explored how to compare risk-stratified SSI rates using a hand-calculation formula of the Z-test, and using Fisher's Exact Test and the chi-square test in the Epi Info software. We have also learned about the SIR and how to compare it both to its nominal value of 1.0 and to another SIR using hand-calculation formulas of the Z-test. Finally, we learned how to make these comparisons using the Epi Info software: the Poisson Test for comparing an SIR to 1.0 and the Binomial Test for comparing two SIRs to each other.

Limited information is included here on the underlying assumptions of the statistical tests and their p-values. It is recommended that you consult a statistician or a statistics text for further details.
