Combining screening tests in series or parallel

Parallel testing

Two screening tests, whether identical or different, are said to be applied in parallel if a positive result on either test is sufficient to prompt a diagnostic work-up (i.e., the combined result is called "positive").

Series testing

Two screening tests are said to be applied in series if both tests must be positive in order to prompt action (the combined result is called "positive").

For example, breast cancer screening frequently employs a combination of mammography and breast physical exam applied in parallel: if either test is positive, then further investigation is indicated. In contrast, HIV screening generally employs a combination of ELISA and Western blot tests applied in series. If the ELISA test is repeatedly positive (two ELISA tests applied in series), then a Western blot test is given before making a determination that HIV antibody is present (i.e., series testing of ELISA and Western blot). Similarly, syphilis testing employs two tests in series: specimens that test positive with an RPR (rapid plasma reagin) or VDRL (Venereal Disease Research Laboratory) test are evaluated with a confirmatory FTA-ABS or MHA-TP.

The overall sensitivity (sometimes called "net sensitivity") and overall specificity (sometimes called "net specificity") for the two tests in combination can be obtained using probability concepts. If the two tests are labeled A and B, there are four possible results when both are given: both may give the correct result, both may give an incorrect result, or one test may give the correct result and the other an incorrect result. These four possibilities are shown in the diagram below.
[Diagram: a unit square divided into four boxes. The horizontal dimension represents the sensitivity or specificity of test A; the vertical dimension represents the sensitivity or specificity of test B. Box 1: A correct, B incorrect. Box 2: both incorrect. Box 3: both A and B correct. Box 4: A incorrect, B correct.]

Correct classification of cases - combining sensitivities

Sensitivity evaluates the ability to identify cases. If the diagram shows test results for cases, then the probability of a correct test is the sensitivity of the test. If A and B are applied in series, then only the cases that are correctly classified by both tests (represented by box 3) will be termed "positive" in the combined classification. In fact, if one test is negative, the second test may not even be done; we count that here as "incorrect," since the diagram represents cases. So the overall sensitivity of applying tests A and B in series is represented by the area of box 3. Algebraically:

Combined sensitivity for A and B in series = Sensitivity of A x Sensitivity of B

where, since we are focusing on cases, each sensitivity is the probability of a correct test result. (The joint probability of two independent events is the product of the probabilities of each event.)

If, instead, tests A and B are applied in parallel, so that a positive result on either test causes the overall result to be classified as positive, then we have two chances to identify each case. So the sensitivity of the combination is represented by the total area of boxes 1, 3, and 4. This area can be obtained algebraically as:

A correct (1+3) + B correct (3+4) - Both A and B correct (3)

When we add the area where A is correct (boxes 1 and 3) to the area where B is correct (boxes 3 and 4), we count box 3 twice, so we subtract it once to avoid double-counting. We can therefore write:

Combined sensitivity for A and B in parallel = Sensitivity of A + Sensitivity of B - (Sensitivity of A x Sensitivity of B)

where each sensitivity is the probability of a correct test (since these are cases).
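As a quick numerical check of these two formulas, here is a minimal sketch assuming the two tests are independent, as the derivation does (the function names are illustrative, not from the source):

```python
def series_sensitivity(sens_a, sens_b):
    """Series: both tests must be positive, so only cases caught
    by both A and B (box 3) count as combined positives."""
    return sens_a * sens_b

def parallel_sensitivity(sens_a, sens_b):
    """Parallel: either positive suffices; add both sensitivities
    and subtract the overlap (box 3), which was counted twice."""
    return sens_a + sens_b - sens_a * sens_b

# Sensitivities of 90% (test A) and 80% (test B):
print(round(series_sensitivity(0.90, 0.80), 2))    # 0.72 -- series lowers sensitivity
print(round(parallel_sensitivity(0.90, 0.80), 2))  # 0.98 -- parallel raises sensitivity
```

The same algebra applies to specificities, with the roles of series and parallel reversed, as discussed below.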
So series testing decreases sensitivity, and parallel testing increases sensitivity. Here is a calculator to see these relations with numbers. Change the values of the sensitivities in the shaded cells to see the sensitivity of the two tests in combination.

  Sensitivity of test A    Sensitivity of test B    A & B combined in series    A & B combined in parallel
  90% (A)                  80% (B)                  72% (A x B)                 98% (A + B - A x B)

Correct classification of non-cases - combining specificities

Specificity evaluates the ability to identify non-cases. If the diagram shows non-cases, then the probability of a correct test is the specificity of the test. If A and B are applied in parallel, then only the non-cases that are correctly classified by both tests (represented by box 3) will be termed "negative" in the combined classification. If the first test is positive (i.e., incorrect), the second test may not even be done, since a single positive test is sufficient to call the overall classification positive in parallel testing. Since we are discussing non-cases, the positive classification is incorrect. In order to have a correct classification of non-cases with two tests read in parallel, both tests must be negative. So the overall probability of correct classification of non-cases (the overall specificity) from applying tests A and B in parallel is represented by the area of box 3. Algebraically:

Combined specificity for A and B in parallel = Specificity of A x Specificity of B

where, since we are focusing on non-cases, each specificity is the probability of a correct test result. (The joint probability of two independent events is the product of the probabilities of each event.)

If instead tests A and B are applied in series, so that a negative (correct) result on either test causes the overall result to be classified as negative (correct), then we have two chances to identify each non-case. Thus, the specificity of the combination is represented by the total area of boxes 1, 3, and 4.
This area can be obtained algebraically as:

A correct (1+3) + B correct (3+4) - Both A and B correct (3)

When we add the area where A is correct (boxes 1 and 3) to the area where B is correct (boxes 3 and 4), we count box 3 twice, so we subtract it once to avoid double-counting. So:

Combined specificity for A and B in series = Specificity of A + Specificity of B - (Specificity of A x Specificity of B)

where each specificity is the probability of a correct test.

So series testing increases specificity, and parallel testing decreases specificity. Change the values of the specificities in the shaded cells to see the specificity of the two tests in combination.

  Specificity of test A    Specificity of test B    A & B combined in series    A & B combined in parallel
  80% (A)                  90% (B)                  98% (A + B - A x B)         72% (A x B)

Summary

Parallel testing with two tests gives us two chances to identify each case, so parallel testing has higher sensitivity. Series testing with two tests gives us two chances to identify each non-case, so series testing has higher specificity.

              Sensitivity                       Specificity
  Series      SensA x SensB                     SpecA + SpecB - SpecA x SpecB
  Parallel    SensA + SensB - SensA x SensB     SpecA x SpecB

www.epidemiolog.net V. Schoenbach, 9/21/2005

Predictive value

Prevalence and specificity are the main determinants of positive predictive value. An easy way to see this algebraically is the following.
                                       Cases who test positive (true positives)
  Positive predictive value (PPV) = ---------------------------------------------
                                                  All positive tests

All positive tests = Cases who test positive (true positives) + Non-cases who test positive (false positives)

Cases who test positive = Sensitivity x prevalence
Non-cases who test positive = (1 - specificity) x (1 - prevalence)

So:

                            Sensitivity x prevalence
  PPV = ------------------------------------------------------------------
         Sensitivity x prevalence + (1 - specificity) x (1 - prevalence)

In the usual screening situation, the disease is rare, say less than 1%. In that case, (1 - prevalence) is close to 1, and Sensitivity x prevalence will be less than the prevalence (or equal to the prevalence, if sensitivity = 100%). So positive predictive value will be approximately:

              A small # less than the prevalence
  PPV = ------------------------------------------------------
         A small # less than the prevalence + (1 - specificity)

1 - specificity is the false positive rate, i.e., the proportion of non-cases who test positive, so positive predictive value is approximately:

              A small # less than the prevalence
  PPV = --------------------------------------------------------
         A small # less than the prevalence + false positive rate

So if the false positive rate is larger than the prevalence (not unusual for a rare disease), the positive predictive value will necessarily be less than 50%, even with perfect sensitivity.

Try out this predictive value calculator. (Change these numbers and see how the predictive values below change.)

  Population size    Disease prevalence    Sensitivity    Specificity    False positive rate
  10,000             0.200                 90.0%          90.0%          10.0%
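The PPV formula can be evaluated directly from the calculator inputs above. This sketch (the function name is illustrative) also shows the rare-disease point: when the false positive rate exceeds the prevalence, PPV stays below 50% even with perfect sensitivity.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV = true positives / all positives, per the formula above."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Calculator example: prevalence 0.200, sensitivity 90%, specificity 90%
print(round(positive_predictive_value(0.90, 0.90, 0.200), 3))  # 0.692

# Rare disease: perfect sensitivity, specificity 98%, prevalence 0.1%;
# the false positive rate (2%) far exceeds the prevalence.
print(round(positive_predictive_value(1.00, 0.98, 0.001), 3))  # 0.048
```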
                       True status
  Test result      Cases     Non-cases      Total      Predictive value
  Positive         1,800       800          2,600      69.2%
  Negative           200     7,200          7,400      97.3%
  Total            2,000     8,000         10,000

  Observed prevalence = 0.260 (all positive tests / population)
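The table above can be reproduced from the calculator inputs. This sketch (function name illustrative) computes exact expected counts rather than simulating a sampled population:

```python
def screening_table(population, prevalence, sensitivity, specificity):
    """Expected 2x2 screening-table counts and predictive values."""
    cases = population * prevalence
    noncases = population - cases
    true_pos = cases * sensitivity        # cases who test positive
    false_neg = cases - true_pos          # cases the test misses
    true_neg = noncases * specificity     # non-cases who test negative
    false_pos = noncases - true_neg       # non-cases who test positive
    return {
        "true_pos": true_pos, "false_pos": false_pos,
        "false_neg": false_neg, "true_neg": true_neg,
        "ppv": true_pos / (true_pos + false_pos),
        "npv": true_neg / (true_neg + false_neg),
        "observed_prevalence": (true_pos + false_pos) / population,
    }

t = screening_table(10_000, 0.200, 0.90, 0.90)
print(round(t["true_pos"]), round(t["false_pos"]))   # 1800 800
print(round(t["ppv"], 3), round(t["npv"], 3))        # 0.692 0.973
print(round(t["observed_prevalence"], 3))            # 0.26
```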
