Carl Rogers was fond of discussing the discrepancy between the “actual self” (your current view of yourself) and the
“ideal self” (your self as you would truly like to be). He held that the goal of therapy was to reduce the discrepancy
between the two selves. A number of methods have been used over the years to measure such discrepancies within
individuals’ self-concepts, and efforts have been made to correlate self-discrepancies with measures of self-esteem.
Suppose a psychologist measures self-discrepancy on a scale that ranges from 0 (no discrepancy) to 24 and self-
esteem on a scale that ranges from 0 (low self-esteem) to 50, and she obtains the following data:
a. Compute descriptive statistics, Pearson’s r, and the covariance in SPSS. Is the correlation significant?
b. What is the proportion of overlapping variance between discrepancies and self-esteem?
c. Create a scatterplot of the data.
d. Treat “Esteem” as the dependent variable (outcome variable) and “Discrepancy” as the independent variable
(predictor variable) in a bivariate regression. Based on the results from part a, compute the regression coefficient
and y-intercept (show your simple computations here). Check your results in SPSS by running a bivariate regression.
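The hand computation in part d follows from the descriptive statistics alone: the slope is b = r(s_y/s_x) and the intercept is a = ȳ − b·x̄. A minimal sketch of that check is below; the scores are illustrative stand-ins, not the assignment's data, which you would substitute in.

```python
import numpy as np

# Illustrative scores only -- substitute the Discrepancy/Esteem data from the problem.
x = np.array([10.0, 4.0, 18.0, 7.0, 12.0, 3.0])    # Discrepancy (predictor)
y = np.array([28.0, 40.0, 15.0, 35.0, 24.0, 42.0])  # Esteem (outcome)

r = np.corrcoef(x, y)[0, 1]             # Pearson's r
b = r * y.std(ddof=1) / x.std(ddof=1)   # slope: b = r * (s_y / s_x)
a = y.mean() - b * x.mean()             # intercept: a = ybar - b * xbar

# Check the hand computation against a direct least-squares fit
b_ls, a_ls = np.polyfit(x, y, 1)
print(np.isclose(b, b_ls), np.isclose(a, a_ls))
```

The two routes agree because the descriptive-statistics formula is algebraically identical to the ordinary least-squares solution in the bivariate case.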
e. What is the multiple R² and how does it compare to the proportion of overlapping variance you computed for part b
above? Is the regression weight statistically significant? What is the observed p-value for the regression weight,
and how does it compare to the p-value for the correlation above?
f. Plug each discrepancy score above into the regression equation and compute each person’s predicted self-esteem
score (if you want to save time use an SPSS compute statement). Enter these scores into SPSS as a new variable
and compute Pearson’s r between the predicted self-esteem scores and the observed self-esteem scores above.
What is the value? Explain why you obtained this result.
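The result asked about in part f can be previewed numerically: because the predicted scores are a linear function of the predictor, their correlation with the observed scores equals |r| (which is also the multiple R in a bivariate regression). A sketch with illustrative stand-in scores:

```python
import numpy as np

# Illustrative scores only -- substitute the data from the problem.
x = np.array([10.0, 4.0, 18.0, 7.0, 12.0, 3.0])
y = np.array([28.0, 40.0, 15.0, 35.0, 24.0, 42.0])

b, a = np.polyfit(x, y, 1)
y_hat = a + b * x                       # predicted self-esteem scores

r_xy = np.corrcoef(x, y)[0, 1]          # r between predictor and outcome
r_pred = np.corrcoef(y_hat, y)[0, 1]    # r between predicted and observed

# Linear transformation preserves correlation magnitude, so r(y_hat, y) = |r(x, y)|
print(np.isclose(r_pred, abs(r_xy)))
```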
g. Now compute the following values for each person (again, this is easiest in SPSS using compute statements):
1. (y − ȳ)²
2. (ŷ − ȳ)²
3. (y − ŷ)²
Sum the results separately for each set of values and note how they compare to SStotal, SSregression, and SSresidual on
your regression output.
Compute multiple R² from the SS values.
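The decomposition in part g can be sketched directly: the three sums of squares satisfy SStotal = SSregression + SSresidual, and R² = SSregression/SStotal. The scores below are illustrative placeholders for the assignment's data.

```python
import numpy as np

# Illustrative scores only -- substitute the data from the problem.
x = np.array([10.0, 4.0, 18.0, 7.0, 12.0, 3.0])
y = np.array([28.0, 40.0, 15.0, 35.0, 24.0, 42.0])

b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

ss_total = np.sum((y - y.mean()) ** 2)      # sum of (y - ybar)^2
ss_reg   = np.sum((y_hat - y.mean()) ** 2)  # sum of (yhat - ybar)^2
ss_resid = np.sum((y - y_hat) ** 2)         # sum of (y - yhat)^2

print(np.isclose(ss_total, ss_reg + ss_resid))  # SStotal = SSreg + SSres
r2 = ss_reg / ss_total
print(np.isclose(r2, np.corrcoef(x, y)[0, 1] ** 2))  # equals r^2 in the bivariate case
```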
2. Download and open the SPSS Berkeley Guidance study data from the course website.
A. Using SOMA as the Dependent Variable, build a regression model with the following predictors: wt2, ht2,
wt9, ht9, lg9, st9. Is the overall model statistically significant? How much variance in the DV was
explained by the linear combination of the IVs? Which predictors, if any, are statistically significant?
B. Using SOMA as the DV, let SPSS build a regression model from the six predictors using the Forward,
Backward, and Stepwise methods. Record the final model from each approach and compare. Did the
methods yield the same final model?
C. Transform the SOMA variable and the six predictors to z-scores in SPSS. Run the regression analysis (z-
SOMA as DV, six z-scores as predictors) using these transformed scores. Compare the unstandardized
and standardized regression weights. Are they similar or different? How do they compare to the original
regression above? What about their significance levels? How do the regression weights compare to the
bivariate correlations between SOMA and each of the IVs?
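The relationship probed in part C can be checked outside SPSS as well: regressing z-scores on z-scores yields slopes equal to the standardized weights, i.e., each raw slope rescaled by s_x/s_y. A minimal sketch, using randomly generated stand-in data rather than the Berkeley Guidance variables:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random illustrative stand-in for the Berkeley data (two predictors shown).
X = rng.normal(size=(50, 2))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=50)

def ols(X, y):
    """Return [intercept, slopes...] from a least-squares fit."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def z(v):
    """Convert columns (or a vector) to z-scores."""
    return (v - v.mean(0)) / v.std(0, ddof=1)

b_raw = ols(X, y)          # unstandardized weights
b_std = ols(z(X), z(y))    # weights from the z-score regression

# Rescaling each raw slope by (s_x / s_y) reproduces the standardized weights
beta = b_raw[1:] * X.std(0, ddof=1) / y.std(ddof=1)
print(np.allclose(beta, b_std[1:]))
```

This is why SPSS can report standardized betas without refitting: they are a deterministic rescaling of the unstandardized weights, and their significance tests are unchanged.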
D. Test the following regression model:
SOMA = a + b1(wt2) + b2(ht2) + ε
Record the beta weights and multiple R² value. Now switch the order of the predictors when entering them
into the model:
SOMA = a + b1(ht2) + b2(wt2) + ε
Record the beta weights and the multiple R² value. Did the order of the IVs in the model impact the beta
weights or the R² value?
Next, test the following regression model:
SOMA = a + b1(wt2) + b2(ht2) + b3(st9) + ε
Did R² increase or decrease compared to the models above? What did you expect to happen with regard to
R² in this model compared to those above?
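The expectation in part D rests on a general fact: when a predictor is added to a model fit on the same sample, R² can never decrease, because the smaller model is a special case of the larger one. A sketch with random stand-in data in place of wt2, ht2, st9, and SOMA:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random illustrative stand-in for wt2, ht2, st9 (columns) and SOMA (y).
X = rng.normal(size=(60, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=60)

def r_squared(X, y):
    """R^2 from a least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_two   = r_squared(X[:, :2], y)   # analogue of SOMA ~ wt2 + ht2
r2_three = r_squared(X, y)          # analogue of SOMA ~ wt2 + ht2 + st9

# The nested model can never out-fit the larger one on the same sample
print(r2_three >= r2_two)
```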