# EDRS 811


Barbara Gruber

EDRS 811
Fall 2008
Homework Assignment 4
This assignment is due at the start of class on December 1, 2008. Late work will receive a 5%
deduction unless other arrangements have been made in advance. The assignment should be
completed neatly by hand or with a word processor. Be sure to show all work and provide an
explanation of your work.
******************************************************************************

Part A: Chi-Square (17 points Total)

NOTE: Data for Part A questions are contained in the SPSS spreadsheet “HW 4 Part A 1 and
2”.

Professor Plum is conducting a study with three groups each receiving a different treatment. At
the start of his study he has 270 participants: 80, 90, and 100 participants for treatments A, B,
and C, respectively. During the treatment phase there is some experimental mortality; that is,
some participants drop out of the study. Specifically, 30 participants drop out – 3 from A, 15
from B, and 12 from C. Professor Plum wishes to use a chi-square test to see if participants’
experimental mortality might be related to the different treatments. (Note: This is a goodness-of-
fit test because he wants to see how well the numbers dropping out of each group fit numbers we
would expect to drop out.)

1. a. Usually when participants drop out of different treatment groups, we hope that they do
so in proportion to the original sample sizes. To the extent that they don’t, we become
concerned that something about one or more of our treatments is “causing” them to drop
out disproportionately. Based on the information above, what proportion of participants
would we expect to drop out if they dropped out in proportion to the original sample
sizes? (2pts)
30/270 = .111, or about 11.1%

b. Based on the proportions calculated in 1a., how many of the 30 participants would we
expect to drop out of each treatment, given that they do so in proportion to the original
sample sizes? In other words, find the expected numbers of dropouts for each group.
(They may not be nice round numbers. That’s OK—Round to one decimal place.) (1 pt)

Group          Expected                      Observed

Group A        30 × (80/270) = 8.9               3

Group B        30 × (90/270) = 10.0             15

Group C        30 × (100/270) = 11.1            12
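These expected counts can be double-checked with a short Python script (an illustrative sketch; the assignment itself uses SPSS). Dropouts are allocated in proportion to the original group sizes:

```python
# Expected dropouts per group, proportional to the original group sizes.
group_sizes = {"A": 80, "B": 90, "C": 100}
total_dropouts = 30
n_total = sum(group_sizes.values())  # 270

expected = {g: round(total_dropouts * n / n_total, 1)
            for g, n in group_sizes.items()}
print(expected)  # {'A': 8.9, 'B': 10.0, 'C': 11.1}
```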

c. If you were to compute the chi-square test by hand, how many degrees of freedom does
the chi-square statistic have? (1 pt)
df = k -1 = 3-1 = 2

d. If you were to compute the chi-square by hand, from your chi-square table, what is the
appropriate critical value for a test at the .05 level? (Remember, this is a one-tailed test.)
(1 pt)
critical value X2 = 5.99
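The tabled critical value can also be obtained numerically (a quick check with scipy rather than the printed chi-square table):

```python
from scipy.stats import chi2

# Upper-tail critical value for a chi-square test at alpha = .05 with df = 2.
critical = chi2.ppf(1 - 0.05, df=2)
print(round(critical, 2))  # 5.99
```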

e. Using the expected values you identified in 1b., conduct the chi-square goodness of fit
test using SPSS. (You will want to use the Q1_Dropout variable in the “HW 4 Part A 1
and 2” data spreadsheet.) Verify that your degrees of freedom and expected values from
1b. and 1c. match the SPSS output. (2 pts)

Q1_Dropouts

            Observed N    Expected N    Residual

Group A          3             9.0         -6.0

Group B         15            10.0          5.0

Group C         12            11.0          1.0

Total           30

Test Statistics (Q1_Dropouts)

Chi-Square     6.591a

df                 2

Asymp. Sig.     .037

a. 0 cells (.0%) have expected frequencies less than 5.
The minimum expected cell frequency is 9.0.
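The goodness-of-fit statistic can be reproduced with scipy (a sketch; the expected counts of 9.0, 10.0, and 11.0 shown in the SPSS output above, i.e. the hand-rounded values that were entered, are used so the printed statistic matches exactly):

```python
from scipy.stats import chisquare

observed = [3, 15, 12]          # dropouts from treatments A, B, C
expected = [9.0, 10.0, 11.0]    # expected counts as shown in the SPSS output

stat, p = chisquare(observed, f_exp=expected)
print(round(stat, 3), round(p, 3))  # 6.591 0.037
```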
f. Are the drop-out numbers significantly different from those expected (at the .05 level)?

The calculated chi-square value is 6.591, which exceeds the critical value of 5.99. Therefore
we reject the null hypothesis: the drop-out numbers are significantly different from those
expected, and the result is statistically significant (p = .037). Groups A and B contribute most
to the X2 value, with residuals greater than 2 in absolute value (Group A = -6.0, Group B = 5.0).

******************************************************************************

Dr. Howser wants to examine the relationship between gender and brand preference of a product
to see whether or not they appear to be independent. The following contingency table contains
the observed frequencies of brand preferences by gender. Conduct a chi-square test at the .01
level to see if brand preference and gender are associated, by the following steps below.

            Brand A    Brand B    Brand C    Total

Male           11         25         35        71

Female         15          7          7        29

Total          26         32         42       100

2. a. Determine the expected cell sizes based on the marginals (row totals and column
totals). (They may not be nice round numbers. That’s OK—round them to one decimal
place) (2 pts)

                   Brand A                  Brand B                  Brand C              Total

Male                                                                                       71
  Observed            11                       25                       35
  Expected   (26)(71)/100 = 18.5      (32)(71)/100 = 22.7      (42)(71)/100 = 29.8

Female                                                                                     29
  Observed            15                        7                        7
  Expected   (26)(29)/100 = 7.5       (32)(29)/100 = 9.3       (42)(29)/100 = 12.2

Total                 26                       32                       42                100
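The expected cell counts follow directly from the marginals as (row total × column total)/N; a numpy sketch of the same arithmetic:

```python
import numpy as np

observed = np.array([[11, 25, 35],    # Male
                     [15,  7,  7]])   # Female
row_totals = observed.sum(axis=1)     # [71, 29]
col_totals = observed.sum(axis=0)     # [26, 32, 42]
n = observed.sum()                    # 100

# Expected count for each cell = row total * column total / grand total.
expected = np.outer(row_totals, col_totals) / n
print(np.round(expected, 1))
# [[18.5 22.7 29.8]
#  [ 7.5  9.3 12.2]]
```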

b. If you were to conduct this analysis by hand, how many degrees of freedom does the
chi-square statistic have? (1 pt)
df = (r-1)(c-1) = (2-1)(3-1) = 2

c. If you were to conduct this analysis by hand, from your chi-square table, what is the
appropriate critical value for a test at the .01 level? (Remember, this is a one-tailed test.)
(1 pt)
critical value X2 = 9.21

d. Use SPSS to conduct a chi-square test of association. (You will want to use the
Q2_Gender and Q2_Brand variables in the “HW 4 Part A 1 and 2” data spreadsheet.)

Verify that your degrees of freedom and expected values from 2a. and 2b. match the
SPSS output. (2 pts)

Case Processing Summary

                            Valid              Missing              Total
                         N    Percent        N    Percent        N    Percent

Q2_Gender * Q2_Brand    100   100.0%         0      .0%         100   100.0%

Q2_Gender * Q2_Brand Crosstabulation

                                    Brand A    Brand B    Brand C    Total

Q2_Gender   F    Count                 15          7          7        29
                 Expected Count        7.5        9.3       12.2      29.0
                 Std. Residual         2.7        -.7       -1.5

            M    Count                 11         25         35        71
                 Expected Count       18.5       22.7       29.8      71.0
                 Std. Residual        -1.7         .5         .9

Total            Count                 26         32         42       100
                 Expected Count       26.0       32.0       42.0     100.0

Chi-Square Tests

                        Value      df    Asymp. Sig. (2-sided)

Pearson Chi-Square     14.287a      2           .001

Likelihood Ratio       13.537       2           .001

N of Valid Cases          100

a. 0 cells (.0%) have expected count less than 5. The minimum
expected count is 7.54.
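The Pearson chi-square test of independence can be reproduced with scipy (a verification sketch; `chi2_contingency` computes the expected counts from the marginals and applies no continuity correction for tables larger than 2×2):

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[11, 25, 35],   # Male
                     [15,  7,  7]])  # Female

chi2_stat, p, dof, expected = chi2_contingency(observed)
print(round(chi2_stat, 3), dof, round(p, 3))  # 14.287 2 0.001
```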

e. Based on these data, does it appear that gender and brand preference are independent?

The calculated X2 = 14.287 exceeds the critical value of 9.21, so we reject the null
hypothesis of no association (independence). The result is statistically significant
(p = .001). There is strong statistical evidence of an association between gender and brand
preference; the two variables are dependent, with the largest contribution to the X2 value
coming from females' preference for Brand A (standardized residual = 2.7).

*****************************************************************************
Part B: ANOVA (20 points) and ANCOVA (TBD)
An experiment is conducted comparing four instructional methods to teach children to perform a
particular task. The experimenter divides participants into four groups of 10. Each group of
children is assigned a method of instruction. For each child an IQ score is measured prior to
group assignment and prior to treatment; after treatment a score on the desired task is observed.
(NOTE: All data are contained in “HW 4 Part B 3 and 4”).

Treatment A          Treatment B          Treatment C          Treatment D
IQ       Score       IQ       Score       IQ       Score       IQ       Score
94.00    14.00       80.00    38.00       92.00    55.00       94.00    37.00
96.00    19.00       84.00    34.00       96.00    53.00       94.00    24.00
98.00    17.00       90.00    43.00       99.00    55.00       98.00    22.00
100.00   38.00       97.00    43.00      101.00    52.00      100.00    43.00
102.00   40.00       97.00    61.00      102.00    35.00      103.00    49.00
105.00   26.00      112.00    63.00      104.00    46.00      104.00    41.00
109.00   41.00      115.00    93.00      107.00    57.00      108.00    26.00
110.00   28.00      118.00    74.00      110.00    55.00      113.00    70.00
111.00   36.00      120.00    76.00      111.00    42.00      115.00    63.00
130.00   66.00      120.00    79.00      118.00    81.00      104.00    24.00

3. a. Write the null hypothesis to test for mean differences among the groups using the appropriate symbols.
Explain, in words, what this means about the population levels of performance on the task. (2 pts)
Ho: 1= 2 = 3 = 4
There is no difference between the population means across the 4 treatment groups (i.e. the means for
each of the treatments is the same)

b. Determine by hand how many degrees of freedom are associated with the omnibus F-test. (1 pt)

dfB = K-1 = 4-1 = 3

dfW =n1 + n2 + n3 + n4 – K = 10+10+10+10-4 = 36           or K(n-1) =4(10-1) = 4(9) = 36

c. From the appropriate table, determine the critical F for a one-tailed test at the .05 level with the
appropriate degrees of freedom. (1 pt)

F(3,36) = 2.87
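The tabled critical F can be checked numerically (a scipy sketch in place of the printed F table):

```python
from scipy.stats import f

# Upper-tail critical F at alpha = .05 with 3 and 36 degrees of freedom.
critical = f.ppf(1 - 0.05, dfn=3, dfd=36)
print(round(critical, 2))  # 2.87
```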
Use SPSS to conduct an Analysis of Variance (ANOVA). Request information on the homogeneity of variance,
effect size, and post hoc comparisons using Tukey's and Dunnett's tests (for Dunnett, use Treatment A as the
comparison group in a two-tailed test). (See pages 230 and 226 for specific steps.)

d. What can you conclude about the homogeneity of variance assumption? Paste relevant output here. (2pts).

The calculated F(3,36) = 1.679 does not exceed the critical value F(3,36) = 2.87, therefore we retain the null
hypothesis that there is no difference between the variances across groups (i.e., homogeneity of variance).
The p-value of .189 also supports this, since it is not statistically significant (p > .05).
Levene's Test of Equality of Error Variancesa

Dependent Variable: Score

   F        df1       df2       Sig.

 1.679       3         36       .189

Tests the null hypothesis that the error variance
of the dependent variable is equal across groups.

a. Design: Intercept + Treatment

e. Based on the omnibus F-test, evaluate your null hypothesis and indicate the values used to make your
decision. Verify that the degrees of freedom you calculated in question 3b. match your output. Paste
ANOVA Summary Table here. (2 pts)
The calculated F(3,36) = 5.931 exceeds the critical value F(3,36) = 2.87, therefore we reject the null
hypothesis that there is no difference in means across the groups. These results tell us that at least two
group means are not equal, and the result is statistically significant (p = .002).
Tests of Between-Subjects Effects

Dependent Variable: Score

Source             Type III Sum of Squares    df    Mean Square       F        Sig.    Partial Eta Squared

Corrected Model           4763.275a            3      1587.758       5.931     .002          .331

Intercept                86397.025             1     86397.025     322.755     .000          .900

Treatment                 4763.275             3      1587.758       5.931     .002          .331

Error                     9636.700            36       267.686

Total                   100797.000            40

Corrected Total          14399.975            39

a. R Squared = .331 (Adjusted R Squared = .275)
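The omnibus F can be reproduced outside SPSS (a sketch using scipy's one-way ANOVA on the Score columns from the data table above):

```python
from scipy.stats import f_oneway

# Task scores for the four treatment groups, from the Part B data table.
score_a = [14, 19, 17, 38, 40, 26, 41, 28, 36, 66]
score_b = [38, 34, 43, 43, 61, 63, 93, 74, 76, 79]
score_c = [55, 53, 55, 52, 35, 46, 57, 55, 42, 81]
score_d = [37, 24, 22, 43, 49, 41, 26, 70, 63, 24]

F, p = f_oneway(score_a, score_b, score_c, score_d)
print(round(F, 3), round(p, 3))  # 5.931 0.002
```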

f. How strong is the relationship between treatment and score? To address this question, compute the
proportion of variance in individuals’ scores accounted for by Treatment. That is, calculate eta-squared and
omega squared by hand. Verify that eta-squared matches your SPSS output. (3 pts)

η2 = SSB/SST = 4763.275/14399.975 = .331

ω2 = [SSB − (K − 1)MSW] / (SST + MSW)
   = [4763.275 − (4 − 1)(267.686)] / (14399.975 + 267.686)
   = (4763.275 − 803.058) / 14667.661
   = 3960.217 / 14667.661 = .270
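The same arithmetic as a quick script, using the values from the ANOVA summary table:

```python
# Effect-size measures computed from the ANOVA summary table values.
ss_between, ss_total, ms_within, k = 4763.275, 14399.975, 267.686, 4

eta_sq = ss_between / ss_total
omega_sq = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)
print(round(eta_sq, 3), round(omega_sq, 3))  # 0.331 0.27
```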

g. Using Tukey's test for multiple comparisons, which treatment groups are statistically significantly
different? Report the specific p-values for each comparison. Paste relevant SPSS output here. (2 pts)

Groups A and B are statistically significantly different p = .003

Groups A and C are statistically significantly different p = .038

Groups B and D are statistically significantly different p = .039

Multiple Comparisons

Dependent Variable: Score

                      (I)   (J)   Mean Difference                          95% Confidence Interval
                      Trt   Trt        (I-J)        Std. Error    Sig.     Lower Bound    Upper Bound

Tukey HSD              A     B       -27.9000*       7.31691      .003       -47.6061        -8.1939
                             C       -20.6000*       7.31691      .038       -40.3061         -.8939
                             D        -7.4000        7.31691      .744       -27.1061        12.3061
                       B     A        27.9000*       7.31691      .003         8.1939        47.6061
                             C         7.3000        7.31691      .752       -12.4061        27.0061
                             D        20.5000*       7.31691      .039          .7939        40.2061
                       C     A        20.6000*       7.31691      .038          .8939        40.3061
                             B        -7.3000        7.31691      .752       -27.0061        12.4061
                             D        13.2000        7.31691      .288        -6.5061        32.9061
                       D     A         7.4000        7.31691      .744       -12.3061        27.1061
                             B       -20.5000*       7.31691      .039       -40.2061         -.7939
                             C       -13.2000        7.31691      .288       -32.9061         6.5061

Dunnett t (2-sided)a   B     A        27.9000*       7.31691      .001         9.9579        45.8421
                       C     A        20.6000*       7.31691      .021         2.6579        38.5421
                       D     A         7.4000        7.31691      .621       -10.5421        25.3421

Based on observed means.
The error term is Mean Square(Error) = 267.686.

*. The mean difference is significant at the .05 level.

a. Dunnett t-tests treat one group as a control, and compare all other groups against it.
h. Conduct all possible independent t-tests for the four treatment groups and use a Bonferroni adjustment to
determine group differences. Which treatment groups are statistically significantly different? Report the
specific p-values for each comparison. Paste relevant SPSS output here. (3 pts)
c = [k(k-1)]/2 = [4(4-1)]/2 = 12/2 = 6
Unadjusted familywise error rate: α* = 1 - (1 - α)^c = 1 - (.95)^6 = 1 - .735 = .265
Bonferroni-adjusted per-comparison alpha: α = .05/c = .05/6 ≈ .008
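The familywise error rate for six unadjusted tests, and the standard Bonferroni per-comparison alpha (the familywise .05 divided by the number of comparisons), can be checked numerically:

```python
# Familywise error rate if c = 6 tests are each run at alpha = .05,
# and the Bonferroni-adjusted per-comparison alpha.
alpha, k = 0.05, 4
c = k * (k - 1) // 2               # 6 pairwise comparisons
familywise = 1 - (1 - alpha) ** c  # error rate with no adjustment
alpha_adj = alpha / c              # Bonferroni per-comparison alpha
print(c, round(familywise, 3), round(alpha_adj, 4))  # 6 0.265 0.0083
```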

Groups A and B are statistically significantly different: p = .003 < .008
Independent Samples Test (A_and_B_scores)

Levene's Test for Equality of Variances: F = 1.414, Sig. = .250

                                                                                          95% CI of the Difference
                                 t        df      Sig. (2-tailed)   Mean Diff.   Std. Error      Lower         Upper

Equal variances assumed       -3.485      18           .003         -27.90000     8.00618      -44.72036    -11.07964

Equal variances not assumed   -3.485    16.820         .003         -27.90000     8.00618      -44.80533    -10.99467
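Each pairwise comparison can also be run with scipy (a sketch for the A vs. B comparison; `equal_var=True` matches the "equal variances assumed" row of the SPSS output):

```python
from scipy.stats import ttest_ind

# Task scores for Treatments A and B, from the Part B data table.
score_a = [14, 19, 17, 38, 40, 26, 41, 28, 36, 66]
score_b = [38, 34, 43, 43, 61, 63, 93, 74, 76, 79]

# Independent-samples t-test, pooled (equal) variances.
t, p = ttest_ind(score_a, score_b, equal_var=True)
print(round(t, 3), round(p, 3))  # -3.485 0.003
```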

Groups A and C are statistically significantly different: p = .004 < .008

Independent Samples Test (A_and_C_scores)

Levene's Test for Equality of Variances: F = 1.061, Sig. = .317

                                                                                          95% CI of the Difference
                                 t        df      Sig. (2-tailed)   Mean Diff.   Std. Error      Lower         Upper

Equal variances assumed       -3.338      18           .004         -20.60000     6.17108      -33.56496     -7.63504

Equal variances not assumed   -3.338    17.040         .004         -20.60000     6.17108      -33.61752     -7.58248

Groups A and D are NOT statistically significantly different: p = .318 > .008

Independent Samples Test (A_and_D_scores)

Levene's Test for Equality of Variances: F = .149, Sig. = .704

                                                                                          95% CI of the Difference
                                 t        df      Sig. (2-tailed)   Mean Diff.   Std. Error      Lower         Upper

Equal variances assumed       -1.026      18           .318          -7.40000     7.21218      -22.55223      7.75223

Equal variances not assumed   -1.026    17.842         .319          -7.40000     7.21218      -22.56185      7.76185

Groups B and C are NOT statistically significantly different: p = .338 > .008

Independent Samples Test (B_and_C_scores)

Levene's Test for Equality of Variances: F = 4.790, Sig. = .042

                                                                                          95% CI of the Difference
                                 t        df      Sig. (2-tailed)   Mean Diff.   Std. Error      Lower         Upper

Equal variances assumed        .984       18           .338           7.30000     7.42017       -8.28919     22.88919

Equal variances not assumed    .984     14.715         .341           7.30000     7.42017       -8.54248     23.14248

Groups B and D are NOT statistically significantly different after the Bonferroni adjustment: p = .024 > .008

Independent Samples Test (B_and_D_scores)

Levene's Test for Equality of Variances: F = .640, Sig. = .434

                                                                                          95% CI of the Difference
                                 t        df      Sig. (2-tailed)   Mean Diff.   Std. Error      Lower         Upper

Equal variances assumed       2.468       18           .024          20.50000     8.30616        3.04941     37.95059

Equal variances not assumed   2.468     17.464         .024          20.50000     8.30616        3.01097     37.98903
Groups C and D are NOT statistically significantly different: p = .059 > .008

Independent Samples Test (C_and_D_scores)

Levene's Test for Equality of Variances: F = 1.971, Sig. = .177

                                                                                          95% CI of the Difference
                                 t        df      Sig. (2-tailed)   Mean Diff.   Std. Error      Lower         Upper

Equal variances assumed       2.014       18           .059          13.20000     6.55557        -.57275     26.97275

Equal variances not assumed   2.014     16.288         .061          13.20000     6.55557        -.67726     27.07726

i. Based on the results of Dunnett's Method for planned comparisons, which treatment groups are
statistically significantly different from Treatment A? Report specific p-values for each comparison.
(Note: Relevant SPSS output for this question should have been included in 3g.) (2 pts)
Groups B and A are statistically significantly different p = .001
Groups C and A are statistically significantly different p = .021

Multiple Comparisons

Score
Dunnett t (2-sided)

(I)          (J)          Mean Difference                              95% Confidence Interval
Treatment    Treatment         (I-J)        Std. Error     Sig.       Lower Bound     Upper Bound

B            A               27.9000*        7.31691       .001          9.9579          45.8421

C            A               20.6000*        7.31691       .021          2.6579          38.5421

D            A                7.4000         7.31691       .621        -10.5421          25.3421

Based on observed means.
The error term is Mean Square(Error) = 267.686.

*. The mean difference is significant at the 0.05 level.
j. Compute the effect sizes of all pairwise comparisons. What can you conclude about the differences among
the treatment groups? (2 pts)

d_ij = |Ȳi − Ȳj| / √MSW

dAB = |-27.900| / √267.686 = 27.900/16.361 = 1.71

dAC = |-20.600| / √267.686 = 20.600/16.361 = 1.26

dAD = |-7.400| / √267.686 = 7.400/16.361 = .45

dBC = |7.300| / √267.686 = 7.300/16.361 = .45

dBD = |20.500| / √267.686 = 20.500/16.361 = 1.25

dCD = |13.200| / √267.686 = 13.200/16.361 = .81

Effect size expresses the standardized difference between group means. There is a small-to-medium
effect (d ≈ .45) for the A&D and B&C comparisons and a large effect (d ≥ .80) for the A&B, A&C,
B&D, and C&D comparisons, which translates into large differences among most of the treatment groups.

4. a. Write the null hypothesis to test for mean differences among the groups using IQ as a covariate (i.e.,
controlling for IQ) using the appropriate symbols. Explain, in words, what this means about the population
levels of performance on the task. (2 pts)

Ho: µ1*= µ2*= µ3*= µ4*

There are no differences among the treatment group population means on the task score after
controlling for IQ.

b. Determine by hand how many degrees of freedom are associated with the omnibus F-test controlling for
IQ. (1 pt)
dfB = K – 1 = 4-1 =3
dfW *= (N – K) – 1 = (40-4) – 1 = 36-1 = 35

c. From the appropriate table, determine the critical F for a one-tailed test at the .05 level, controlling for IQ,
with the appropriate degrees of freedom. (1 pt)

F(3,35) = 2.87

Use SPSS to conduct an Analysis of Covariance (ANCOVA) with IQ as a covariate to determine if there are
group differences at the .05 level (i.e., α = .05) by completing the following:

d. Indicate why IQ is an appropriate covariate to use to examine group differences in Score based on the
Instructional Method received. Consider the relation between the covariate and the dependent variable as
well as the connection (or lack thereof) between the grouping variable and the covariate. (2 pts)
Barbara Gruber
When selecting a covariate (here, IQ), it should have a linear relationship with the dependent
variable (task score). We can assume that students with higher IQs will generally score higher
regardless of any intervention, so IQ is meaningfully related to the scores. Another requirement is
that the covariate is not used to determine the groups. Since students were placed in groups randomly,
there is no connection between the grouping variable (instructional method) and the covariate (IQ).

A plot of IQ against score shows a positive linear relationship, and calculation of a
correlation coefficient also shows a positive relationship between the two variables.

Correlations

                                 IQ        Score

IQ       Pearson Correlation    1.000       .596**
         Sig. (2-tailed)                    .000
         N                         40         40

Score    Pearson Correlation     .596**    1.000
         Sig. (2-tailed)         .000
         N                         40         40

**. Correlation is significant at the 0.01 level (2-tailed).
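The correlation can be checked outside SPSS (a scipy sketch using the IQ and Score columns from the Part B data table, listed in treatment order A, B, C, D):

```python
from scipy.stats import pearsonr

iq = [94, 96, 98, 100, 102, 105, 109, 110, 111, 130,    # Treatment A
      80, 84, 90, 97, 97, 112, 115, 118, 120, 120,      # Treatment B
      92, 96, 99, 101, 102, 104, 107, 110, 111, 118,    # Treatment C
      94, 94, 98, 100, 103, 104, 108, 113, 115, 104]    # Treatment D
score = [14, 19, 17, 38, 40, 26, 41, 28, 36, 66,
         38, 34, 43, 43, 61, 63, 93, 74, 76, 79,
         55, 53, 55, 52, 35, 46, 57, 55, 42, 81,
         37, 24, 22, 43, 49, 41, 26, 70, 63, 24]

r, p = pearsonr(iq, score)
print(round(r, 3))  # 0.596 (p well below .001)
```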

e. Check the homogeneity of slopes assumption using SPSS (see page 270 for specific steps). Paste the
relevant output here. Based on your output, what does this tell us about our assumption of parallel slopes?
Explain and indicate values used to make your decision in the appropriate format. (3 pts)

F(3,32) = .716, p = .550

The test F value (.716) does not exceed the critical value F(3,32) ≈ 2.90, and the p-value is not
statistically significant (p > .05). Therefore we retain the null hypothesis, assume homogeneity of
regression slopes, and conclude that the treatment and the covariate (IQ) do not interact.

Tests of Between-Subjects Effects

Dependent Variable: Score

Source             Type III Sum of Squares    df    Mean Square       F       Sig.    Partial Eta Squared

Corrected Model          10763.841a            7      1537.692      13.533    .000          .747

Intercept                 1523.771             1      1523.771      13.410    .001          .295

Treatment                  389.734             3       129.911       1.143    .347          .097

IQ                        4003.781             1      4003.781      35.236    .000          .524

Treatment * IQ             243.982             3        81.327        .716    .550          .063

Error                     3636.134            32       113.629

Total                   100797.000            40

Corrected Total          14399.975            39

a. R Squared = .747 (Adjusted R Squared = .692)

f. Use SPSS to conduct the ANCOVA analysis and request information on the homogeneity of variance,
effect size, power, and adjusted group means (see page 271 for specific steps). Based on the omnibus F-test,
evaluate your null hypothesis and indicate the values used to make your decision. Verify that the degrees of
freedom you calculated in question 4b. match your output. Paste ANCOVA Summary Table here. (2 pts)

The test F value (16.243) exceeds the critical value (2.87) and the p-value (p = .000) is
statistically significant (p < .05). Therefore we reject the null hypothesis and conclude that there are
statistically significant differences among the treatment groups on the task score while controlling for IQ.

Tests of Between-Subjects Effects

Dependent Variable: Score

Source             Type III Sum of Squares    df    Mean Square       F       Sig.    Partial Eta Squared

Corrected Model          10519.860a            4      2629.965      23.723    .000          .731

Intercept                 2198.261             1      2198.261      19.829    .000          .362

IQ                        5756.585             1      5756.585      51.926    .000          .597

Treatment                 5402.045             3      1800.682      16.243    .000          .582

Error                     3880.115            35       110.860

Total                   100797.000            40

Corrected Total          14399.975            39

a. R Squared = .731 (Adjusted R Squared = .700)
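Both F tests (the 4e slopes check and the 4f omnibus) amount to comparisons of nested regression models, and can be reproduced outside SPSS. A sketch with numpy least squares on the Part B data (the divisor 3 in each F ratio is the number of dummy or interaction columns being tested):

```python
import numpy as np

# IQ and Score for all 40 children, in treatment order A, B, C, D (10 per group).
iq = np.array([94, 96, 98, 100, 102, 105, 109, 110, 111, 130,
               80, 84, 90, 97, 97, 112, 115, 118, 120, 120,
               92, 96, 99, 101, 102, 104, 107, 110, 111, 118,
               94, 94, 98, 100, 103, 104, 108, 113, 115, 104], float)
score = np.array([14, 19, 17, 38, 40, 26, 41, 28, 36, 66,
                  38, 34, 43, 43, 61, 63, 93, 74, 76, 79,
                  55, 53, 55, 52, 35, 46, 57, 55, 42, 81,
                  37, 24, 22, 43, 49, 41, 26, 70, 63, 24], float)
group = np.repeat([0, 1, 2, 3], 10)

def sse(X, y):
    """Residual sum of squares from a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

ones = np.ones_like(iq)
dummies = np.column_stack([(group == g).astype(float) for g in (1, 2, 3)])

X_ancova = np.column_stack([ones, dummies, iq])                # treatment + IQ
X_cov = np.column_stack([ones, iq])                            # IQ only
X_slopes = np.column_stack([X_ancova, dummies * iq[:, None]])  # + interaction

# 4e: homogeneity of slopes -- test of the Treatment x IQ interaction, df (3, 32).
sse_ancova, sse_slopes = sse(X_ancova, score), sse(X_slopes, score)
F_int = ((sse_ancova - sse_slopes) / 3) / (sse_slopes / 32)

# 4f: ANCOVA omnibus test of Treatment controlling for IQ, df (3, 35).
F_trt = ((sse(X_cov, score) - sse_ancova) / 3) / (sse_ancova / 35)

print(round(F_int, 3), round(F_trt, 3))  # matches .716 and 16.243 above
```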

g. Based on the SPSS output, how strong is the relationship between treatment and score (i.e., what
proportion of variance in individuals' scores is accounted for by Treatment)? (1 pt)

Partial η2 = .582, which means that 58.2% of the variance in individual scores (after adjusting for IQ)
can be accounted for by the treatment group.
h. Use the appropriate syntax in SPSS (see page 273 for specific steps) to conduct post hoc comparisons on
the adjusted group means. Indicate which treatment groups are statistically significantly different based on
the presented results. Report the specific p-values for each comparison. Paste syntax and relevant SPSS
output here. (2 pts)

UNIANOVA
Score BY Treatment WITH IQ
/METHOD = SSTYPE(3)
/lmatrix 'A vs B'
Treatment 1 -1 0 0
/lmatrix 'A vs C'
Treatment 1 0 -1 0
/lmatrix 'A vs D'
Treatment 1 0 0 -1
/lmatrix 'B vs C'
Treatment 0 1 -1 0
/lmatrix 'B vs D'
Treatment 0 1 0 -1
/lmatrix 'C vs D'
Treatment 0 0 1 -1.

Contrast 1 (A vs B) is Statistically Significantly Different F(1,35) = 41.692 p=.000

Test Results

Dependent Variable:Score

Source     Sum of Squares    df        Mean Square     F        Sig.

Contrast          4622.048        1         4622.048   41.692        .000

Error             3880.115        35         110.860

Contrast 2 (A vs C) is Statistically Significantly Different F(1,35) = 22.504 p=.000

Test Results

Dependent Variable:Score

Source     Sum of Squares    df        Mean Square     F        Sig.

Contrast          2494.815        1         2494.815   22.504        .000

Error             3880.115        35         110.860

Contrast 3 (A vs D) is Statistically Significantly Different F(1,35) = 4.477 p=.042

Test Results

Dependent Variable:Score

Source     Sum of Squares    df        Mean Square     F        Sig.
Contrast           496.376         1          496.376    4.477       .042

Error             3880.115         35         110.860

Contrast 4 (B vs C) is NOT Statistically Significantly Different F(1,35) = 2.976 p=.093

Test Results

Dependent Variable:Score

Source     Sum of Squares     df        Mean Square     F         Sig.

Contrast           329.880         1          329.880    2.976       .093

Error             3880.115         35         110.860

Contrast 5 (B vs D) is Statistically Significantly Different F(1,35) = 18.954 p=.000

Test Results

Dependent Variable:Score

Source     Sum of Squares     df        Mean Square     F         Sig.

Contrast          2101.250         1         2101.250   18.954       .000

Error             3880.115         35         110.860

Contrast 6 (C vs D) is Statistically Significantly Different F(1,35) = 6.903 p=.013

Test Results

Dependent Variable:Score

Source     Sum of Squares     df        Mean Square     F         Sig.

Contrast           765.255         1          765.255    6.903       .013

Error             3880.115         35         110.860

i. Use a Bonferroni adjustment to evaluate the post hoc comparisons on the adjusted group means and
control the family-wise error rate to .05. Which groups are statistically significantly different? Indicate
adjusted alpha level and the statistically significant comparisons. (2 pts)
c = [k(k-1)]/2 = [4(4-1)]/2 = 12/2 = 6
Unadjusted familywise error rate: α* = 1 - (1 - .05)^6 = 1 - .735 = .265
Bonferroni-adjusted per-comparison alpha: α = .05/c = .05/6 ≈ .008

Contrast 1 (A vs B) is statistically significantly different: F(1,35) = 41.692, p = .000 < .008
Contrast 2 (A vs C) is statistically significantly different: F(1,35) = 22.504, p = .000 < .008
Contrast 3 (A vs D) is NOT statistically significant after adjustment: F(1,35) = 4.477, p = .042 > .008
Contrast 4 (B vs C) is NOT statistically significantly different: F(1,35) = 2.976, p = .093 > .008
Contrast 5 (B vs D) is statistically significantly different: F(1,35) = 18.954, p = .000 < .008
Contrast 6 (C vs D) is NOT statistically significant after adjustment: F(1,35) = 6.903, p = .013 > .008
j. Compute the effect sizes of all pairwise comparisons. (2 pts)

d_ij = |Ȳi − Ȳj| / √MSW   (using adjusted means and the ANCOVA MS error)

dAB = |-30.493| / √110.860 = 30.493/10.529 = 2.90

dAC = |-22.368| / √110.860 = 22.368/10.529 = 2.12

dAD = |-9.993| / √110.860 = 9.993/10.529 = .95

dBC = |8.125| / √110.860 = 8.125/10.529 = .77

dBD = |20.500| / √110.860 = 20.500/10.529 = 1.95

dCD = |12.375| / √110.860 = 12.375/10.529 = 1.18
