Statistics for the Social Sciences
Chapter 16: Four Non-parametric Statistical Tests

 WHERE ARE WE? In applying inferential statistics so far this semester, we have assumed random sampling from
normally distributed populations. Sometimes, however, the sample is not normally distributed: it may be skewed, or measured
at only the nominal or ordinal level. In those cases, special nonparametric tests must be applied.
 KEY TERMINOLOGY: Parametric & nonparametric; Friedman ANOVA by ranks; Kruskal-Wallis H test; Mann-Whitney U
test; Wilcoxon T test.
 LEARNING OBJECTIVES: (1) What is the difference between a parametric and a nonparametric (a.k.a. distribution-free)
test? (2) What criteria should you use in deciding whether to use a parametric or a nonparametric test? (3) When do you
apply the chi-square test of independence, the Mann-Whitney U test, the Wilcoxon T test, the Kruskal-Wallis H test, and the
Friedman ANOVA by ranks?
I.      What is the difference between parametric and nonparametric tests?
| Design | Nominal level | Ordinal level | Interval/Ratio level |
|---|---|---|---|
| 1. One sample | Chi-square goodness-of-fit test | *not covered this semester | One-sample t or z test |
| 2. 2 independent samples | Chi-square test of independence | Mann-Whitney U test | Two-independent-samples t test |
| 3. 2 dependent samples | McNemar test for significance of change *not applied | Wilcoxon matched-pairs signed-rank test | Two-dependent-samples t test |
| 4. k independent samples | Chi-square test of independence | Kruskal-Wallis H test | One-way independent-groups ANOVA |
| 5. k dependent samples | McNemar test | Friedman ANOVA by ranks | One-way dependent-groups ANOVA |
| 6. Correlation | Coefficient of contingency | Spearman rho | Pearson r |
II.     Tests for data measured at the nominal scale of measurement:
 One sample: The chi-square goodness-of-fit test
 Two independent samples: The chi-square test of independence
 Two dependent samples: The McNemar test for significance of change
III.    Tests for data measured at the ordinal level:
 Two independent samples: The Mann-Whitney U test
 Two dependent samples: The Wilcoxon Matched-Pairs Signed-Rank test
 K independent samples: The Kruskal-Wallis H test
 K dependent samples: The Friedman ANOVA by Ranks
V.      Mann-Whitney U test:
a. computational guide and interpretation
b. SPSS data construction
c. SPSS analysis and interpretation
VI.     Wilcoxon matched-pairs signed-ranks test:
a. computational guide and interpretation
b. SPSS data construction
c. SPSS analysis and interpretation
VII.    Kruskal-Wallis H test:
a. computational guide and interpretation
b. SPSS data construction
c. SPSS analysis and interpretation
VIII.   Friedman ANOVA by ranks test:
a. computational guide and interpretation
b. SPSS data construction
c. SPSS analysis and interpretation
IX. Summary and review of nonparametric procedures.
   Statistics for the social sciences - PY375 - Non-parametric tests of significance - Chapter 16 - Page #1 of 9
Mann-Whitney U test: (Two independent samples)
     DESCRIPTION: Mann-Whitney U test
A nonparametric equivalent to the independent-groups t test (Chapter 10). Tests whether two independent samples are from the same population. It is more powerful
than the median test because it uses the ranks of the cases. Requires an ordinal level of measurement. "U" is the number of times a value in the first group precedes a
value in the second group when all values are sorted in ascending order.
The Mann-Whitney U test tests whether two sampled populations are equivalent in location. The observations from both groups are combined and ranked, with the
average rank assigned in the case of ties. The number of ties should be small relative to the total number of observations. If the populations are identical in location,
the ranks should be randomly mixed between the two samples. The number of times a score from group 1 precedes a score from group 2, and the number of times a
score from group 2 precedes a score from group 1, are calculated. The Mann-Whitney U statistic is the smaller of these two numbers.

EXAMPLE: A researcher was interested in establishing whether attendance in a preschool program affects the social maturity level of children. A random sample of 19 kindergarten
children was selected and watched closely by trained observers for one full week. The children were then rank-ordered on the basis of perceived social maturity, and then
divided into two groups based on whether or not they had previously attended a day-care center. In this process, the ranks are assigned to both sample distributions COMBINED rather than
ranking each distribution separately, as is done for the Spearman rs. We rank the combined distributions in order to find out whether one set of ranks is significantly lower than the other
set of ranks.
The maturity scores and the resulting ranks are as follows…

| Day care, X1 | R1 | No day care, X2 | R2 |
|---|---|---|---|
| 9 | 7.5 | 10 | 9.5 |
| 12 | 12 | 12 | 12 |
| 28 | 18 | 2 | 1.5 |
| 6 | 5 | 10 | 9.5 |
| 9 | 7.5 | 20 | 16 |
| 4 | 3.5 | 22 | 17 |
| 2 | 1.5 | 18 | 15 |
| 4 | 3.5 | 14 | 14 |
| 8 | 6 | 12 | 12 |

ΣR1 = 64.5; n1 = 9, n2 = 9
To calculate the Mann-Whitney U, the only data needed are R1, n1, and n2. We carry out the following steps.
Add the ranks for the first distribution (ΣR1 = 64.5). We use this value and the two sample sizes, n1 and n2, to solve for U.

U = n1n2 + [n1(n1 + 1)]/2 − ΣR1

U = (9)(9) + [9(9 + 1)]/2 − 64.5

U = 81 + 90/2 − 64.5

U = 81 + 45 − 64.5 = 61.5
Using the value of U and the sample sizes again, solve for zu:

zu = (U − n1n2/2) / √[n1n2(n1 + n2 + 1)/12]

zu = (61.5 − (9)(9)/2) / √[(9)(9)(9 + 9 + 1)/12]

zu = (61.5 − 40.5) / √[(81)(19)/12]

zu = 21 / √(1539/12) = 21 / √128.25 = 21 / 11.325 = 1.854
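The hand calculation above can be reproduced in a few lines of Python. This is a sketch, not part of the original handout; scipy's `rankdata` performs the combined ranking with average ranks for ties:

```python
import math
from scipy.stats import rankdata

x1 = [9, 12, 28, 6, 9, 4, 2, 4, 8]        # day care group
x2 = [10, 12, 2, 10, 20, 22, 18, 14, 12]  # no day care group
n1, n2 = len(x1), len(x2)

# Rank the COMBINED distribution (ties get the average rank),
# then sum the ranks that belong to the first group.
ranks = rankdata(x1 + x2)
sum_r1 = ranks[:n1].sum()                 # ΣR1 = 64.5

# U and its normal approximation, exactly as in the steps above.
U = n1 * n2 + n1 * (n1 + 1) / 2 - sum_r1
z_u = (U - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
print(sum_r1, U, round(z_u, 3))           # 64.5 61.5 1.854
```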
 STATISTICAL INTERPRETATION:
Compare the obtained value of zu with the z-score value that excludes the extreme 1% of the distribution, z(.01) = ±2.58. (For an
alpha level of .05, compare our value with z(.05) = ±1.96.) If zu is equal to or greater than this value, reject the null
hypothesis.
z(.01) = ±2.58
zu = 1.854: Accept Ho; p = N.S.
 PRACTICAL INTERPRETATION: Our conclusion, therefore, is to accept the null hypothesis (Ho: R1 = R2) that the two sets
of ranks represent a single population. The data from our samples show no significant difference in maturity rank between
daycare and non-daycare children. Thus, the two sets of ranked scores do not represent different maturity populations.

*Note there are no degrees of freedom involved because we are applying the z table and using 1.96 or 2.58.

 SPSS DATA CONSTRUCTION: (Create two columns of data: a first column to code the levels of daycare, and a
second column for maturity level.)
To Define Groups for Two-Independent-Samples Tests
Daycare                Maturity level
1= Daycare "yes"       9
2= Daycare "no"        10
SPSS ANALYSIS:
Analyze → Nonparametric Tests → 2 Independent Samples...
    Select a grouping variable and then click "Define Groups."
    Enter the values for Group 1 and Group 2 to define the groups.

Descriptive Statistics

| | N | Mean | Std. Deviation | Minimum | Maximum |
|---|---|---|---|---|---|
| Maturity level | 18 | 11.2222 | 7.0923 | 2.00 | 28.00 |
| Attendance at day care center? | 18 | 1.5000 | .5145 | 1.00 | 2.00 |

Ranks

| Maturity level | N | Mean Rank | Sum of Ranks |
|---|---|---|---|
| DAY CARE - YES | 9 | 7.17 | 64.50 |
| DAY CARE - NO | 9 | 11.83 | 106.50 |
| Total | 18 | | |
Results
There was not a significant difference between the ranks of maturity level. Subjects in daycare were
not significantly higher in maturity level (mean rank = 7.17) than those not in daycare (mean rank = 11.83),
Zu = -1.862, p = .063, N.S.

Test Statistics(b)

| | Maturity level |
|---|---|
| Mann-Whitney U | 19.500 |
| Wilcoxon W | 64.500 |
| Z | -1.862 |
| Asymp. Sig. (2-tailed) | .063 |
| Exact Sig. [2*(1-tailed Sig.)] | .063(a) |

a. Not corrected for ties.
b. Grouping Variable: Attendance at day care center?
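As a cross-check (not part of the handout), `scipy.stats.mannwhitneyu` reports U for the first sample, which here equals the SPSS U of 19.5 (the smaller of the two U values, 81 − 61.5). Its p-value depends on the tie and continuity corrections applied, so it lands near, but not necessarily exactly at, the SPSS .063:

```python
from scipy.stats import mannwhitneyu

daycare = [9, 12, 28, 6, 9, 4, 2, 4, 8]
no_daycare = [10, 12, 2, 10, 20, 22, 18, 14, 12]

# SciPy's statistic is the U of the first sample:
# sum of its combined ranks minus n1(n1+1)/2 = 64.5 - 45 = 19.5.
res = mannwhitneyu(daycare, no_daycare)
print(res.statistic)   # 19.5, matching the SPSS "Mann-Whitney U" row
print(res.pvalue)      # close to the SPSS two-tailed .063
```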

The Wilcoxon Matched Pairs Signed-rank test: Two-Dependent Samples
      DESCRIPTION: Wilcoxon Signed-Rank Test: A nonparametric procedure used with two related variables to test the hypothesis
that the two variables have the same distribution. It makes no assumptions about the shapes of the distributions of the two variables. This test
takes into account information about the magnitude of differences within pairs and gives more weight to pairs that show large differences than
to pairs that show small differences. The test statistic is based on the ranks of the absolute values of the differences between the two variables.
Example: A psychologist was interested in discovering whether differences in the size of college attended are related to the size of salary
earned after graduation. Two groups of graduates were selected: one from a small college (under 1,000 students) and one from
a large college (over 10,000 students). The subjects were matched according to major and GPA, and their salaries after one year
were recorded. When examining the two sets of salary scores, it was discovered that the distributions are badly SKEWED, since each group contained a
few members with extremely high scores. So despite the fact that the data are originally in interval form (salary scores), because
of the skew, the ordinal Wilcoxon T test is chosen for the analysis.
• The data for 10 matched pairs of subjects are as follows…
| Pair | Small college group, X1 | Large college group, X2 | Difference X1-X2 | Rank of difference | Signed rank | Ranks with less frequent sign |
|---|---|---|---|---|---|---|
| 1 | 111 | 102 | +9 | 8 | +8 | |
| 2 | 58 | 55 | +3 | (3+4+5+6)/4 = 4.5 | +4.5 | |
| 3 | 35 | 25 | +10 | 9 | +9 | |
| 4 | 30 | 30 | 0 | (dropped) | | |
| 5 | 37 | 35 | +2 | (1+2)/2 = 1.5 | +1.5 | |
| 6 | 22 | 24 | -2 | (1+2)/2 = 1.5 | -1.5 | -1.5 |
| 7 | 35 | 30 | +5 | 7 | +7 | |
| 8 | 15 | 18 | -3 | (3+4+5+6)/4 = 4.5 | -4.5 | -4.5 |
| 9 | 22 | 19 | +3 | (3+4+5+6)/4 = 4.5 | +4.5 | |
| 10 | 35 | 32 | +3 | (3+4+5+6)/4 = 4.5 | +4.5 | |

T = |-1.5| + |-4.5| = 6
 STEP ONE: Obtain the differences. We set up the difference column, X1-X2, being careful to retain the correct sign.
 STEP TWO: Rank the differences. We rank-order the absolute values of the differences; in this step, the sign of the differences is
irrelevant. Note that two of the differences (|2|) are tied for first and second place. As for all conversions to ordinal ranks, we add the tied
ranks (1 + 2), divide by the number of ranks tied in that position [(1 + 2)/2 = 1.5], and assign each the resulting average rank. Likewise, four
of the differences (|3|) share the third through sixth positions, so each receives (3 + 4 + 5 + 6)/4 = 4.5. Finally, whenever
there is a zero difference between a pair of scores, as for Pair 4 (where each subject scored 30), the scores for that pair are
dropped from the analysis.
      STEP THREE: Sign the ranks. In this step, we simply affix the sign of the difference to the rank for that difference. Thus, the ranked
differences appear in a separate column but carry whichever sign appears in the preceding difference column. For example, the difference of +9
(for Pair 1) is ranked eighth, and that rank gets a positive sign because the difference is positive. Similarly, the largest
difference, +10 for Pair 3, is ranked ninth and likewise receives a positive sign.
 STEP FOUR: Add the less frequent signed rank. Finally, we determine which sign, plus or minus, occurs less frequently
among the ranks. The negative sign occurs less often (only twice, compared to seven plus signs). Then, to obtain the value of
T, we merely add the ranks having the less frequent sign; T =6.
 STEP FIVE: Check for significance. We compare the calculated value of T with the critical table value of T for N = 9.
*See appendix Table J, Critical Values of Wilcoxon's T (page 556).
Tcv .05(9) = 6
Tobs = 6: Accept Ho; not significant

*(Note that the null hypothesis would be accepted even if this had been a one-tailed, directional test of significance. N stands for the
number of signed ranks; although we started with 10 pairs of scores, we lost 1 pair because of its zero difference.)
*Note: unlike any other test in this course, with the Wilcoxon T test the null hypothesis is rejected only when the calculated
value of T is equal to or less than the table value. For T, smaller means more significant.
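The five steps can be sketched directly in Python (our own variable names; a sketch, with `rankdata` handling the tied ranks):

```python
from scipy.stats import rankdata

small = [111, 58, 35, 30, 37, 22, 35, 15, 22, 35]  # small-college salaries
large = [102, 55, 25, 30, 35, 24, 30, 18, 19, 32]  # matched large-college salaries

# STEP ONE: differences, keeping the sign; zero differences are dropped.
d = [a - b for a, b in zip(small, large) if a != b]

# STEP TWO: rank the absolute differences (ties share the average rank).
ranks = rankdata([abs(x) for x in d])

# STEPS THREE AND FOUR: attach the signs, then sum the ranks carrying
# the less frequent sign to obtain T.
neg = [r for r, x in zip(ranks, d) if x < 0]
pos = [r for r, x in zip(ranks, d) if x > 0]
T = sum(neg) if len(neg) < len(pos) else sum(pos)
print(len(d), T)   # 9 signed ranks, T = 6.0
```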

CONCLUSION: We thus conclude that there is no significant difference between our two college-size groups. The independent variable (college
size) had no effect on earned salary. The two groups, originally selected from a single population, still represent that same population.
 SPSS Wilcoxon T analysis:
 SPSS DATA CONSTRUCTION: (Create 3 columns: one to keep track of the pairs, a 2nd for the small college data,
and a 3rd for the large college data.)

| Pair | Small college | Large college |
|---|---|---|
| 1 | 111 | 102 |
| 2 | 58 | 55 |
| 3 | 35 | 25 |
| 4 | 30 | 30 |
The Two-Related-Samples Tests procedure compares the distributions of two variables.
 SPSS STATISTICAL ANALYSIS:
To Obtain Two-Related-Samples Tests
Analyze → Nonparametric Tests → 2 Related Samples...
Select one or more pairs of variables, as follows:
Click each of two variables. The first variable appears in the Current Selections group as VARIABLE 1, and the
second appears as VARIABLE 2
After you have selected a pair of variables, click the arrow button to move the pair into the Test Pair(s) list. You may
select more pairs of variables. To remove a pair of variables from the analysis, select a pair in the Test Pair(s) list and click the
arrow button.

Descriptive Statistics

| | N | Mean | Std. Deviation | Minimum | Maximum |
|---|---|---|---|---|---|
| LARGE | 10 | 37.0000 | 25.1087 | 18.00 | 102.00 |
| SMALL | 10 | 40.0000 | 27.5318 | 15.00 | 111.00 |

Ranks

| SMALL - LARGE | N | Mean Rank | Sum of Ranks |
|---|---|---|---|
| Negative Ranks | 2(a) | 3.00 | 6.00 |
| Positive Ranks | 7(b) | 5.57 | 39.00 |
| Ties | 1(c) | | |
| Total | 10 | | |

a. SMALL < LARGE
b. SMALL > LARGE
c. LARGE = SMALL

Test Statistics(b)

| | SMALL - LARGE |
|---|---|
| Z | -1.974(a) |
| Asymp. Sig. (2-tailed) | .048 |

a. Based on negative ranks.
b. Wilcoxon Signed Ranks Test

Results
We thus conclude that there is no significant difference between our two college-size groups. The independent
variable (college size) had no effect on earned salary. The two groups, originally selected from a single
population, still represent that same population. Therefore, with T(.05)(9) = 6 and T = 6, accept Ho; not significant.
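As a cross-check (not part of the handout), `scipy.stats.wilcoxon` reports the same T (the smaller of the two signed-rank sums) and an asymptotic p-value near the SPSS .048; the exact figure depends on how zeros and ties are handled:

```python
from scipy.stats import wilcoxon

small = [111, 58, 35, 30, 37, 22, 35, 15, 22, 35]
large = [102, 55, 25, 30, 35, 24, 30, 18, 19, 32]

# By default SciPy drops zero differences (like the hand calculation)
# and, for a two-sided test, reports the smaller signed-rank sum.
res = wilcoxon(small, large)
print(res.statistic)   # 6.0, the T from the hand calculation
print(res.pvalue)      # near the SPSS asymptotic .048
```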

The Kruskal-Wallis H test (K Independent samples)
  Description: The Kruskal-Wallis H Test: This test is the non-parametric equivalent of the one-way ANOVA. Tests
whether several independent samples are from the same population. Assumes that the underlying variable has a continuous
distribution, and requires an ordinal level of measurement.
Example: A researcher wished to test the hypothesis that male business majors earn more in later life than do either male liberal
arts or education majors. A random sample of alumni was selected from the university files for each of the three subject-major
categories. To control for length of experience on the job, all subjects were selected from the same graduating class:
the class that graduated ten years ago. All the selected alumni were contacted and asked to indicate their yearly incomes.
The men were promised that the information would be held in strict confidence and would not be given to the chairman of the
upcoming alumni fund drive. Because a few of the subjects reported enormously high incomes, the resulting distribution was
so skewed that it was decided to rank-order the incomes.
From the Kruskal-Wallis one-way analysis of variance, you might learn that the three majors do differ in their salary.

| Group 1 | Group 2 | Group 3 | R1 | R2 | R3 |
|---|---|---|---|---|---|
| 300K | 49K | 40K | 1.5 | 7 | 13 |
| 47K | 300K | 38K | 8 | 1.5 | 14 |
| 58K | 52K | 41K | 3 | 6 | 12 |
| 44K | 56K | 33K | 10 | 4 | 16 |
| 55K | 28K | 30K | 5 | 18 | 17 |
| 46K | 35K | 42K | 9 | 15 | 11 |

ΣR1 = 36.5; ΣR2 = 51.5; ΣR3 = 83
n1 = 6; n2 = 6; n3 = 6
(The 18 incomes are ranked together, from highest = 1 to lowest = 18.)
*The only data values needed for this analysis are the sum of the ranks for each group and the group, or sample, sizes. We
perform the H test in the following steps:
 STEP ONE: Add the ranks in each column (group) to obtain ΣR1 =36.5, ΣR2 = 51.5, and ΣR3= 83.
    STEP TWO: Substitute the values of ΣR, N, and n into the H equation and solve…
H = [12 / (N(N + 1))] [ (ΣR1)²/n1 + (ΣR2)²/n2 + (ΣR3)²/n3 ] − 3(N + 1)

H = [12 / (18(18 + 1))] [ (36.5)²/6 + (51.5)²/6 + (83)²/6 ] − 3(18 + 1)

H = [12 / (18)(19)] [ (1332.25)/6 + (2652.25)/6 + (6889)/6 ] − 3(19)

H = (12/342)(222.04 + 442.04 + 1148.17) − 57

H = (12/342)(1812.25) − 57

H = 63.59 − 57

H = 6.59

 STEP THREE: STATISTICAL INTERPRETATION: Compare the calculated value of H with the critical value in the chi-
square table (Table I, p. 556). The degrees of freedom equal k, the number of columns (groups), minus one:
df = k − 1 = 3 − 1 = 2. Applying Table I:
χ².05(2) = 5.99
H = 6.59: Reject Ho; significant at p < .05.
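The H computation can be verified in a few lines (a sketch, not part of the handout; carrying full precision gives H ≈ 6.59, and SPSS's tie-corrected value is 6.595):

```python
from scipy.stats import rankdata

business = [300, 47, 58, 44, 55, 46]      # incomes in $K
liberal_arts = [49, 300, 52, 56, 28, 35]
education = [40, 38, 41, 33, 30, 42]

N = 18
# Rank all incomes together; the table above ranks the highest income 1,
# so we rank the negated values (H is unchanged by the ranking direction).
ranks = rankdata([-x for x in business + liberal_arts + education])
R1, R2, R3 = ranks[:6].sum(), ranks[6:12].sum(), ranks[12:].sum()

H = 12 / (N * (N + 1)) * (R1**2 / 6 + R2**2 / 6 + R3**2 / 6) - 3 * (N + 1)
print(R1, R2, R3, round(H, 2))   # 36.5 51.5 83.0 6.59
```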
 PRACTICAL INTERPRETATION: We thus reject the null hypothesis (Ho: R1= R2= R3) and conclude that the three sample
groups do indeed represent different population levels of income. It appears that some majors (notably Business) do earn more

than other majors, but the key factor may not be major itself. We must be careful in our interpretation of these
results; the Kruskal-Wallis test applies only to an independent-groups design.

 SPSS ANALYSIS For the Kruskal Wallis H test:

 SPSS DATA CONSTRUCTION: (Create 2 columns: one for the rank, and one coding the 3 levels of "major".)

| Rank | Major |
|---|---|
| 7 | 2 = Liberal Arts |
| 13 | 3 = Education |

 SPSS DATA ANALYSIS: To Obtain Tests for Several Independent Samples
Analyze → Nonparametric Tests → K Independent Samples...
Select one or more numeric variables.
Select a grouping variable and click Define Range to specify minimum and maximum integer values for the grouping variable.

 Kruskal-Wallis H test SPSS output:

Ranks

| Major field of study | N | Mean Rank |
|---|---|---|
| RANK  Business majors | 6 | 6.08 |
| Liberal arts majors | 6 | 8.58 |
| Education majors | 6 | 13.83 |
| Total | 18 | |

Test Statistics(a,b)

| | RANK |
|---|---|
| Chi-Square | 6.595 |
| df | 2 |
| Asymp. Sig. | .037 |

a. Kruskal Wallis Test
b. Grouping Variable: Major field of study
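As a cross-check (not part of the handout), `scipy.stats.kruskal` applies the tie correction and lands on the SPSS chi-square of about 6.595:

```python
from scipy.stats import kruskal

business = [300, 47, 58, 44, 55, 46]
liberal_arts = [49, 300, 52, 56, 28, 35]
education = [40, 38, 41, 33, 30, 42]

# kruskal ranks the pooled incomes and corrects for the tied 300K values.
H, p = kruskal(business, liberal_arts, education)
print(round(H, 3), round(p, 3))   # approximately 6.595 and 0.037
```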

THE FRIEDMAN ANOVA BY RANKS (K DEPENDENT SAMPLES)
  DESCRIPTION: The Friedman ANOVA by Ranks test is the non-parametric equivalent of the repeated-measures ANOVA from
Chapter 15. The Friedman test tests the null hypothesis that k related variables come from the same population. For each case, the k variables
are ranked from 1 to k. The test statistic is based on these ranks.
SIMILAR TESTS: The Friedman test is also equivalent to a two-way analysis of variance with one observation per cell.
Kendall's W is a normalization of the Friedman statistic. Kendall's W is interpretable as the coefficient of concordance, which is a measure of
agreement among raters. Each case is a judge or rater and each variable is an item or person being judged. For each variable, the sum of ranks
is computed. Kendall's W ranges between 0 (no agreement) and 1 (complete agreement). Cochran's Q is identical to the Friedman test but is
applicable when all responses are binary; it is an extension of the McNemar test to the k-sample situation. Cochran's Q tests the hypothesis
that several related dichotomous variables have the same mean. The variables are measured on the same individual or on matched individuals.
Data: Use numeric variables that can be ordered; preferably ordinal-level data.
Assumptions: Nonparametric tests do not require assumptions about the shape of the underlying distribution. Use dependent,
random samples.
Related procedures: If the variances of all of your variables are equal and their covariances are 0, and you have at least interval-
or ratio-level data, use the Repeated Measures ANOVA procedure, available in the ABSTAT program.
*Note: To use the chi-square table to check significance, you must have at least 10 scores per column when there are three
columns of ranked scores, and at least 5 scores per column when there are four columns of ranked scores.
Example: Does the public associate different amounts of prestige with a doctor, a lawyer, a police officer, and a teacher? Five
people are asked to rank these four occupations in order of prestige. Friedman’s test indicates that the public does in fact
associate different amounts of prestige with these four professions.
    STEP 1: In each row, rank-order the scores from high = 1 to low = 4.
    STEP 2: Sum the columns of ranked scores.

| Subject | Doctor | Lawyer | Police officer | Teacher |
|---|---|---|---|---|
| 1 | 14 | 1 | 6 | 16 |
| 2 | 12 | 2 | 7 | 17 |
| 3 | 18 | 8 | 13 | 3 |
| 4 | 19 | 9 | 4 | 11 |
| 5 | 15 | 5 | 10 | 20 |

*The row-by-row ranks for each group are shown below:

| Subject | Doctor | Lawyer | Police officer | Teacher |
|---|---|---|---|---|
| 1 | 2 | 4 | 3 | 1 |
| 2 | 2 | 4 | 3 | 1 |
| 3 | 1 | 3 | 2 | 4 |
| 4 | 1 | 3 | 4 | 2 |
| 5 | 2 | 4 | 3 | 1 |
| SUM | ΣR1 = 8 | ΣR2 = 18 | ΣR3 = 15 | ΣR4 = 9 |

    STEP 3: Insert the values into the equation below.

χ²r = [12 / (NK(K + 1))] [ (ΣR1)² + (ΣR2)² + (ΣR3)² + (ΣR4)² ] − 3N(K + 1)

χ²r = [12 / ((5)(4)(4 + 1))] [ (8)² + (18)² + (15)² + (9)² ] − (3)(5)(4 + 1)

χ²r = [12 / ((20)(5))] [ 64 + 324 + 225 + 81 ] − (15)(5)

χ²r = (12/100)(694) − 75

χ²r = (0.12)(694) − 75

χ²r = 83.28 − 75 = 8.28
Reject Ho; p<.05
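The rank-sum arithmetic above can be checked in a couple of lines (a sketch using our own variable names):

```python
# Hand computation of the Friedman chi-square from the rank sums above.
rank_sums = [8, 18, 15, 9]   # doctor, lawyer, police officer, teacher
n, k = 5, 4                  # 5 subjects (rows), 4 occupations (columns)

chi2_r = 12 / (n * k * (k + 1)) * sum(r**2 for r in rank_sums) - 3 * n * (k + 1)
print(round(chi2_r, 2))      # 8.28
```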

   STEP 4: Check the calculated value against the tabled value (df = k − 1 = 3). If the obtained value is equal to or greater
than the critical value, we reject the null hypothesis (chi-square Table I).
χ².05(3) = 7.82
χ²r = 8.28

STEP FIVE: Statistical Interpretation: The calculated value of 8.28 exceeds the critical value of 7.82; therefore, we
reject the null hypothesis, accept the alternative hypothesis (R1 ≠ R2 ≠ R3 ≠ R4), and conclude that there is a
significant difference between the prestige rankings of the four occupations.
 SPSS data construction:
Subject       Doctor Lawyer Police             Teacher
1             2         4          3           1
2             2         4          3           1
 SPSS DATA ANALYSIS:
To Obtain Tests for Several Related Samples:
Analyze → Nonparametric Tests → K Related Samples...
Select two or more numeric test variables.

Test Statistics(a)

| N | 5 |
|---|---|
| Chi-Square | 8.280 |
| df | 3 |
| Asymp. Sig. | .041 |

a. Friedman Test

Ranks

| | Mean Rank |
|---|---|
| DOCTOR | 3.40 |
| LAWYER | 1.40 |
| Police Officer | 2.00 |
| TEACHER | 3.20 |

Results
The Friedman ANOVA by ranks procedure revealed a significant difference among the
prestige ratings of four occupations: doctor (mean rank = 3.40), lawyer (mean rank = 1.40),
police officer (mean rank = 2.00), and teacher (mean rank = 3.20), χ²r (N = 5, df = 3) = 8.28, p = .041.
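As a final cross-check (not part of the handout), `scipy.stats.friedmanchisquare` on the raw prestige scores reproduces the SPSS output. SciPy ranks low-to-high within each subject, the reverse of the hand table, but the chi-square statistic is the same:

```python
from scipy.stats import friedmanchisquare

# Raw prestige scores; each list is one occupation across the 5 subjects.
doctor = [14, 12, 18, 19, 15]
lawyer = [1, 2, 8, 9, 5]
police = [6, 7, 13, 4, 10]
teacher = [16, 17, 3, 11, 20]

stat, p = friedmanchisquare(doctor, lawyer, police, teacher)
print(round(stat, 2), round(p, 3))   # approximately 8.28 and 0.041
```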

