CHAPTER 11 – Simple Linear Regression Analysis

11.1   When there appears to be a linear relationship between y and x

11.2   y = μ_y|x + ε = the observed value of the dependent variable
μ_y|x = β0 + β1x = the mean value of y when the value of the independent variable is x
ε = error term

11.3   β1: the change in the mean value of the dependent variable that is associated with a one-unit
increase in the value of the independent variable
β0: the mean value of the dependent variable when the value of the independent variable is
zero

11.4   When data are observed in time sequence, they are called time series data. Cross-sectional
data are observed at a single point in time.

11.5   The straight line appearance of this data plot suggests that the simple linear regression model
with a positive slope might be appropriate.

11.6   a.   μ_y|x=4.00 = β0 + β1(4.00) is the mean of the starting salaries of all marketing graduates
having a 4.00 GPA.

b.   μ_y|x=2.50 = β0 + β1(2.50) is the mean of the starting salaries of all marketing graduates
having a 2.50 GPA.

c.   1  the change in mean starting salary associated with a one point increase in the grade
point average.

d.    0  the mean starting salary for marketing graduates with a grade point average of
0.00.
The interpretation of  0 fails to make practical sense because it requires that a marketing
graduate have a grade point average of 0.00 in which case, of course, the marketing
student would not have graduated.

e.   All factors other than the grade point average. For example, extra-curricular activities
and the type of minor (if any) the graduate has.

11.7   The straight line appearance of this data plot suggests that the simple linear regression model
with a positive slope might be appropriate.

11.8   a.   It is the mean of the service times required when the number of copiers is 4.

b.   It is the mean of the service times required when the number of copiers is 6.

c.   The slope parameter equals the change in the mean service time that is associated with
each additional copier serviced.

d.   The intercept is the mean service time when there are no copiers. It fails to make
practical sense because it requires service time when no copiers exist.

e.   All factors other than the number of copiers serviced.


11.9   The plot looks reasonably linear.

11.10   a.    Mean demand when price difference is .10.

b.    Mean demand when price difference is –.05.

c.    Change in mean demand per dollar increase in price difference

d.    Mean demand when price difference = 0; yes

e.    Factors other than price difference; answers will vary.

11.11   a.   [Scatter plot of y versus x: y ranges from 0 to 1000, x from 0 to 100.]

b.    Yes, the plot looks linear with a positive slope.

11.12   a.    Mean labor cost when batch size = 60

b.    Mean labor cost when batch size = 30

c.    Change in mean labor cost per unit increase in batch size

d.    Mean labor cost when batch size = 0; questionable

e.    Factors other than batch size; answers will vary.

11.13   a.   [Scatter plot of Price versus size: Price ranges from 100 to 200, size from 10 to 25.]


b.   Yes, the relationship looks to be linear with a positive slope.

11.14   a.   Mean sales price when home size = 20 (2000 square feet)

b.   Mean sales price when home size = 18 (1800 square feet)

c.   Change in mean sales price per one unit (100 square foot) increase in home size

d.   Mean sales price when home size = 0; no

e.   Factors other than home size; answers will vary.

11.15   The quality or “goodness” of the fit of the least squares line to the observed data.

11.16   The “best” line that can be fitted to the observed data. The slope and the intercept of the least
squares line.

11.17   Evaluate ŷ = b0 + b1x for the given value of x.

11.18   Because we do not know how y and x are related outside the experimental region.

11.19   a.   b0 = 14.8156 b1 = 5.70657

No. The interpretation of b0 does not make practical sense since it indicates that someone
with a GPA = 0 would have a starting salary of \$14,816, when in fact they would not
have graduated with a GPA = 0.

b.    ŷ = 14.8156 + 5.70657(3.25) = 33.362

That is, \$33,362
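As a sketch, the point prediction in 11.19b can be reproduced in Python from the estimates reported above; this is purely an arithmetic check, not a regression fit:

```python
# Point prediction from the least squares line (Exercise 11.19b):
# plug the given x into y-hat = b0 + b1 * x.
b0, b1 = 14.8156, 5.70657   # estimates reported in the solution above
gpa = 3.25
y_hat = b0 + b1 * gpa
print(round(y_hat, 3))      # 33.362 (thousands of dollars)
```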

11.20   a.   b0 = 11.4641 b1 = 24.6022

No. The interpretation of b0 does not make practical sense since it indicates that 11.46
minutes of service would be required for a customer with no copiers.

b.    ŷ = 11.4641 + 24.6022(4) = 109.873, or 109.9 minutes

11.21   a.   b0 = 7.81409 b1 = 2.66522

Yes. The interpretation of b0 does make practical sense since it indicates that 781,409
bottles of detergent would be demanded when the price difference with other products
is zero.

b.   ŷ = 7.814088 + 2.665214(.10) = 8.081

c.   ŷ = b0 + b1x

8.5 = 7.81409 + 2.6652x

x = .68591/2.6652 = .257, or about 26 cents

11.22   a.


xi                      yi                        xi²                xiyi

5                      71                       25                  355
62                     663                      3844                41106
35                     381                      1225                13335
12                     138                       144                1656
83                     861                      6889                71463
14                     145                       196                2030
46                     493                      2116                22678
52                     548                      2704                28496
23                     251                       529                5773
100                    1024                     10000               102400
41                     435                      1681                17835
75                     772                      5625                57900

Σxi = 548              Σyi = 5782              Σxi² = 34978        Σxiyi = 365027

SSxy = Σxiyi − (Σxi)(Σyi)/n = 365,027 − (548)(5,782)/12 = 100,982.32

SSxx = Σxi² − (Σxi)²/n = 34,978 − (548)²/12 = 9,952.667

b1 = SSxy/SSxx = 100,982.32/9,952.667 = 10.1463

b0 = ȳ − b1x̄ = (5782/12) − 10.1463(548/12) = 18.4880

b.   b1 is the estimated increase in mean labor cost (10.1463) for every 1 unit increase in the
batch size.
b0 is the estimated mean labor cost (18.4880) when batch size = 0; no.

c.   ŷ = 18.4880 + 10.1463x

d.   ŷ = 18.4880 + 10.1463(60) = 627.266
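The hand computation in 11.22 can be reproduced directly from the data table; the sketch below implements the summary-sum formulas rather than calling a statistics library:

```python
# Least squares slope and intercept from the sums in Exercise 11.22a,
# using SSxy = sum(xy) - (sum x)(sum y)/n and SSxx = sum(x^2) - (sum x)^2/n.
x = [5, 62, 35, 12, 83, 14, 46, 52, 23, 100, 41, 75]
y = [71, 663, 381, 138, 861, 145, 493, 548, 251, 1024, 435, 772]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_x2 = sum(xi * xi for xi in x)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))

ss_xy = sum_xy - sum_x * sum_y / n      # approx 100,982.33
ss_xx = sum_x2 - sum_x ** 2 / n         # approx 9,952.667
b1 = ss_xy / ss_xx                      # approx 10.1463
b0 = sum_y / n - b1 * sum_x / n         # approx 18.49
print(b1, b0)
```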


11.23   a.     MINITAB output

Regression Analysis: Sale Price versus Size
The regression equation is
Sale Price = 48.0 + 5.70 Size
Predictor         Coef           SE Coef             T           P
Constant         48.02             14.41          3.33       0.010
Size            5.7003            0.7457          7.64       0.000
S = 10.59               R-Sq = 88.0%            R-Sq(adj) = 86.5%

b.     b1 is the estimated increase in mean sales price (5.7003) for every hundred square foot
increase in home size.

b0 is the estimated mean sales price when square footage = 0. No, the interpretation of
b0 makes no practical sense.

c.     ŷ = 48.02 + 5.7003x

d.     ŷ = 48.02 + 5.7003(20) = 162.026

That is, \$162,026.

11.24   (1) Mean of error terms = 0
(2) Constant variance
(3) Normality
(4) Independence
See page 466 in the text.

11.25   σ² and σ; that is, the constant variance and standard deviation of the error term populations.
11.26   s² = SSE/(n − 2) = 1.438/(7 − 2) = .2876

s = √s² = √.2876 = .5363

11.27   s² = SSE/(n − 2) = 191.70166/(11 − 2) = 21.30018

s = √s² = √21.30018 = 4.61521
11.28   s² = 2.8059/28 = .1002, s = √s² = .3166

11.29   s² = 747/10 = 74.7, s = √74.7 = 8.643

11.30   s² = SSE/(n − 2) = 896.8/(10 − 2) = 112.1

s = √s² = √112.1 = 10.588


11.31   s² = SSE/(n − 2) = 222.8242/(10 − 2) = 27.8530

s = √s² = √27.8530 = 5.2776

11.32   Strong (α = .05) or very strong (α = .01) evidence that the regression relationship is
significant.

11.33   Explanations will vary.

11.34   a.     b0 = 14.816            b1 = 5.7066

b.     SSE = 1.4381 s2 = .2876 s = .5363

c.     sb1 = .3953            t = 14.44

t = b1 / sb1 = 5.7066 /.3953 = 14.44

d.     df = 5 t.025 = 2.571            Reject Ho, Strong evidence of a significant relationship between
x and y.

e.     t.005 = 4.032          Reject Ho, Very strong evidence of a significant relationship between x
and y.

f.     p-value = .000. Reject at all α; extremely strong evidence of a significant
relationship between x and y.

g.     95% CI: [b1 ± t.025 sb1] = 5.7066 ± (2.571)(.3953) = [4.690, 6.723]

We are 95% confident that the mean starting salary increases by between \$4690 and
\$6723 for each 1.0 increase in GPA.

h.     99% CI: [b1 ± t.005 sb1] = 5.7066 ± (4.032)(.3953) = [4.113, 7.300]

We are 99% confident that the mean starting salary increases by between \$4113 and
\$7300 for each 1.0 increase in GPA.

i.     sb0 = 1.235            t = 12.00

t = b0 / sb0 = 14.816 / 1.235 = 12.00

j.     p-value = .000. Reject at all α; extremely strong evidence that the y-intercept is
significant.

k.     sb1 = s/√SSxx = .5363/√1.8407 = .3953

sb0 = s√(1/n + x̄²/SSxx) = .5363√(1/7 + (3.0814)²/1.8407) = 1.235
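A sketch of the slope inference in 11.34c–g, using the reported s, SSxx, and the t-table value t.025 = 2.571 for 5 degrees of freedom:

```python
# t statistic and 95% confidence interval for the slope (Exercise 11.34),
# computed from the quantities reported in the solution above.
import math

s, ss_xx = 0.5363, 1.8407          # standard error of estimate, SSxx
b1 = 5.7066
sb1 = s / math.sqrt(ss_xx)         # approx .3953
t = b1 / sb1                       # approx 14.44

t025 = 2.571                       # t.025 with n - 2 = 5 df (from the t table)
ci = (b1 - t025 * sb1, b1 + t025 * sb1)   # approx [4.690, 6.723]
print(round(sb1, 4), round(t, 2), ci)
```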


11.35   a.   b0 = 11.4641      b1 = 24.6022

b.   SSE = 191.7017            s2 = 21.3002      s = 4.615

c.   sb1 = .8045       t = 30.580

t = b1 / sb1 = 24.602 /.8045 = 30.580

d.   df = 9 t.025 = 2.262      Reject Ho, strong evidence of a significant relationship between
x and y.

e.   t.005 = 3.250     Reject Ho, very strong evidence of a significant relationship between x
and y.

f.   p-value = .000. Reject at all α; extremely strong evidence of a significant relationship
between x and y.

g.   [24.6022 ± 2.262(.8045)] = [22.782, 26.422]

h.   [24.6022 ± 3.250(.8045)] = [21.987, 27.217]

i.   sb0 = 3.4390      t = 3.334

t = b0 / sb0 = 11.464 / 3.439 = 3.334

j.   p-value = .0087. Reject at all α except .001.

k.   sb1 = s/√SSxx = 4.61521/√32.909 = .8045

sb0 = s√(1/n + x̄²/SSxx) = 4.61521√(1/11 + (3.909)²/32.909) = 3.439

11.36   See the solutions to 11.34 for guidance.

a.   b0 = 7.814, b1 = 2.665

b.   SSE = 2.806, s² = .100, s = .3166

c.   sb1 = .2585, t = 10.31

d.   Reject H0.

e.   Reject H0.

f.   p-value = less than .001; reject H0 at each value of α

g.   [2.665 ± 2.048(.2585)] = [2.136, 3.194]

h.   [2.665 ± 2.763(.2585)] = [1.951, 3.379]


i.   sb0 = .0799, t = 97.82

j.   p-value = less than .001; reject H0.

k.   sb1 = s/√SSxx = .31656/√1.49967 = .2585

sb0 = s√(1/n + x̄²/SSxx) = .31656√(1/30 + (.2133)²/1.49967) = .079883

11.37   See the solutions to 11.34 for guidance.

a.   b0 = 18.488, b1 = 10.1463

b.   SSE = 747, s² = 75, s = 8.642

c.   sb1 = .0866, t = 117.13

d.   Reject H0.

e.   Reject H0.

f.   p-value = .000; reject H0 at each value of α

g.   [10.1463 ± 2.228(.0866)] = [9.953, 10.339]

h.   [10.1463 ± 3.169(.0866)] = [9.872, 10.421]

i.   sb0 = 4.677, t = 3.95

j.   p-value = .003; fail to reject H0 at α = .001. Reject H0 at all other values of α.

k.   sb1 = s/√SSxx = 8.64154/√9952.667 = .086621

sb0 = s√(1/n + x̄²/SSxx) = 8.64154√(1/12 + (45.667)²/9952.667) = 4.67658

11.38   See the solutions to 11.34 for guidance.

a.   b0 = 48.02            b1 = 5.7003

b.   SSE = 896.8           s2 = 112.1       s = 10.588

c.   sb1 = .7457           t = 7.64

t = b1 / sb1 = 5.7003 /.7457 = 7.64

d.   df = 8 t.025 = 2.306             Reject Ho


e.   t.005 = 3.355          Reject Ho

f.   p-value = .000. Reject at all α.

g.   [3.9807,7.4199]

h.   [3.198,8.202]

i.   sb0 = 14.41            t = 3.33

t = b0 / sb0 = 48.02 / 14.41 = 3.33

j.   p-value = .010. Reject at all α except .01 and .001.

k.   sb1 = s/√SSxx = 10.588/√201.6 = .7457

sb0 = s√(1/n + x̄²/SSxx) = 10.588√(1/10 + (18.8)²/201.6) = 14.41

11.39   Find sb1 from Minitab
The regression equation is
sales = 66.2 + 4.43 ad exp

Predictor                Coef           SE Coef                 T       P
Constant               66.212             5.767             11.48   0.000
Ad exp                 4.4303            0.5810              7.62   0.000

95% C.I. for β1: [4.4303 ± 2.306(.5810)] = [3.091, 5.770]

11.40   a.   b0 = -.1602            b1 = 1.2731

b.   SSE = .1343

c.   s2= .0336 s = .1833

d.   sb1 = .1025

e.   t = 12.4209

f.   p-value = .00024. Reject at all α.

g.   [.9885, 1.5577], 95% confident the mean preference would increase.

11.41   The distance between xo and x , the average of the previously observed values of x.

11.42   A confidence interval is for the mean value of y. A prediction interval is for an individual
value of y.

11.43   The smaller the distance value, the shorter the lengths of the intervals.

11.44   a.   33.362, [32.813, 33.911]


b.    33.362, [31.878, 34.846]

c.    Distance value = 1/n + (x0 − x̄)²/SSxx = 1/7 + (3.25 − 3.0814)²/1.8407 = .1583

[33.362 ± 2.571(.5363)√.1583] = [32.813, 33.911]

[33.362 ± 2.571(.5363)√(1 + .1583)] = [31.878, 34.846]
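The distance value and the two intervals in 11.44 can be scripted; the sketch below assumes the quantities reported in the solution (n, x̄, SSxx, s, and t.025):

```python
# Distance value, confidence interval, and prediction interval at x0 = 3.25
# (Exercise 11.44c). The PI differs from the CI only by the extra 1 under
# the square root.
import math

n, x_bar, ss_xx = 7, 3.0814, 1.8407
s, t025 = 0.5363, 2.571
x0, y_hat = 3.25, 33.362

dist = 1 / n + (x0 - x_bar) ** 2 / ss_xx        # approx .1583
ci_half = t025 * s * math.sqrt(dist)            # CI half-width
pi_half = t025 * s * math.sqrt(1 + dist)        # PI half-width
print((y_hat - ci_half, y_hat + ci_half))       # approx [32.813, 33.911]
print((y_hat - pi_half, y_hat + pi_half))       # approx [31.878, 34.846]
```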

11.45   a.    109.873, [106.721, 113.025]

b.    109.873, [98.967, 120.779]

c.    113 minutes

11.46   a.    8.0806; [7.948, 8.213]

b.    8.0806; [7.419, 8.743]

c.    See graph in text, page 483
d.    s√dist = .065, s = .3166, dist = (.065/.3166)² = .04215

99% C.I.: [8.0806 ± 2.763(.065)] = [7.9010, 8.2602]

99% P.I.: [8.0806 ± 2.763(.3166)√1.04215] = [7.1877, 8.9735]

e.    (1) 8.4804; [8.360, 8.600]

(2) 8.4804; [7.821, 9.140]

(3) s√dist = .059, s = .3166, dist = (.059/.3166)² = .03473

99% C.I.: [8.4804 ± 2.763(.059)] = [8.3174, 8.6434]

99% P.I.: [8.4804 ± 2.763(.3166)√1.03473] = [7.5907, 9.3701]

11.47   a.    627.26, [621.05, 633.47]

b.    627.26, [607.03, 647.49]
c.    s√dist = 2.79, s = 8.642, dist = (2.79/8.642)² = .104227

99% C.I.: [627.26 ± 3.169(2.79)] = [618.42, 636.10]

99% P.I.: [627.26 ± 3.169(8.642)√1.104227] = [598.48, 656.04]


11.48   a.   162.03, [154.04, 170.02]

b.   162.03, [136.33, 187.73]

11.49   2.3429, [1.7367, 2.9491]

11.50   Use a computer program to find the prediction equation ŷ = 109 − 1.075x.
Point predictions and prediction intervals are:

a.   87.5, [57.28, 117.72]

b.   76.75, [48.04, 105.46]

c.   66.00, [37.82, 94.18]

d.   55.25, [26.55, 83.96]

e.   44.5, [14.28, 74.72]

11.51 From Minitab
The regression equation is
Market Rate = 0.85 + 0.610 Accounting Rate

Predictor            Coef         SE Coef              T         P
Constant            0.847           1.975           0.43     0.670
Accounti           0.6105          0.1431           4.27     0.000

Predicted Values for New Observations

New Obs       Fit         SE Fit             95.0% CI                   95.0% PI
1          10.004          0.753     (     8.494, 11.514)      (     -0.310, 20.318)

Values of Predictors for New Observations

New Obs    Accounti
1              15.0
a.    ŷ = b0 + b1(15.00) = .847 + .6105(15.00) = 10.0045

95% C.I.: [8.494, 11.514]

b.   10.0045
95% P.I.: [–.310, 20.318]

11.52   Total variation: measures the total amount of variation exhibited by the observed values of y.
Unexplained variation: measures the amount of variation in the values of y that is not
explained by the model (predictor variable).
Explained variation: measures the amount of variation in the values of y that is explained by
the predictor variable.

11.53   Proportion of the total variation in the n observed values of y that is explained by the simple
linear regression model.
11.54   a.   Total variation = 61.380; unexplained variation = 1.438; explained variation = 59.942;
r² = .977; r = .988. 97.7% of the variation in the observations has been
explained by the regression model.

b.   t = 14.44, p-value = .000; Reject H0 at both α.
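The r² arithmetic can be checked directly from the variation decomposition (explained variation divided by total variation):

```python
# r-squared and r from the variation decomposition in Exercise 11.54a.
import math

total, unexplained = 61.380, 1.438
explained = total - unexplained        # 59.942
r_sq = explained / total               # approx .977
r = math.sqrt(r_sq)                    # approx .988
print(round(r_sq, 3), round(r, 3))
```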
11.55   a.   Total variation = 20110.54; unexplained variation = 191.702; explained variation =
19918.84; r² = .990; r = .995. 99.0% of the variation in the observations has
been explained by the regression model.


b.   t = 30.58, p-value = .000; Reject H0 at both α.
11.56   a.   Total variation = 13.459; unexplained variation = 2.806; explained variation = 10.653;
r² = .792; r = .890. 79.2% of the variation in the observations
has been explained by the regression model.

b.   t = 10.31, p-value = .000; Reject H0.

11.57   a.   Total variation = 9952.667; unexplained variation = 7.2486; explained variation =
9945.418; r² = .999; r = .999. 99.9% of the variation in the
observations has been explained by the regression model.

b.   t = 117.134, p-value = .000; Reject H0.

11.58   a.   Total variation = 7447.5; unexplained variation = 896.8; explained variation = 6550.7;
r² = .880; r = .9381. 88.0% of the variation in the observations has been
explained by the regression model.

b.   t = 7.64, p-value = .000; Reject H0 at both α.

11.59   a.   Total variation = 5.316; unexplained variation = .1343; explained variation = 5.1817;
r² = .975; r = .987. 97.5% of the variation in the observations has been
explained by the regression model.

b.   t = 12.421, p-value = .000; Reject H0 at both α.

11.60   From Minitab
The regression equation is
sales = 66.2 + 4.43 ad exp

Predictor             Coef       SE Coef               T         P
Constant            66.212         5.767           11.48     0.000
Ad exp             4.4303        0.5810            7.62     0.000

S = 5.278            R-Sq = 87.9%          R-Sq(adj) = 86.4%

Analysis of Variance

Source                 DF            SS                MS         F          P
Regression              1        1619.3            1619.3     58.14      0.000
Residual Error          8         222.8              27.9
Total                   9        1842.1

a.   Total variation = 1842.1; unexplained variation = 222.8; explained variation = 1619.3;
r² = .879; r = .9375. 87.9% of the variation in the observations
has been explained by the regression model.

b.   t = 7.62, p-value = .000, Reject Ho.

11.61   From Minitab
The regression equation is
Market Rate = 0.85 + 0.610 Accounting Rate

Predictor             Coef       SE Coef                T        P
Constant             0.847         1.975             0.43    0.670
Accounti            0.6105        0.1431             4.27    0.000

S = 5.085            R-Sq = 25.9%          R-Sq(adj) = 24.5%

Analysis of Variance

Source                 DF             SS               MS          F         P


Regression               1       470.74           470.74    18.21       0.000
Residual Error          52      1344.33            25.85
Total                   53      1815.07

a.   Total variation = 1,815.07; unexplained variation = 1,344.33; explained variation = 470.74

r² = .259

r = .509

b.   t = 4.27, p-value = .000, Reject H 0 .
11.62   From Minitab
The regression equation is
HeatLoss = 109 - 1.08 Temperature

Predictor              Coef      SE Coef               T       P
Constant            109.000        9.969           10.93   0.000
Temperat            -1.0750       0.2307           -4.66   0.002

S = 11.30             R-Sq = 75.6%         R-Sq(adj) = 72.1%

Analysis of Variance

Source                  DF           SS               MS        F           P
Regression               1       2773.5           2773.5    21.70       0.002
Residual Error           7        894.5            127.8
Total                    8       3668.0

a.   Total variation = 3,668; unexplained variation = 894.5; explained variation = 2773.5
r² = .756
r = −.870
b.   t = −4.66, p-value = .002, Reject H0.
11.63   Mildly similar views.
11.64   H0: β1 = 0 versus Ha: β1 ≠ 0.
11.65   The t test on β1.

11.66   a.   F = 59.942 / (1.438 / 5) = 208.39
b.   F.05 = 6.61    df1 = 1, df2 = 5
Since 208.39 > 6.61, reject H0 with strong evidence of a significant relationship between
x and y.
c.   F.01 = 16.26    df1 = 1, df2 = 5
Since 208.39 > 16.26, reject H0 with very strong evidence of a significant relationship
between x and y.
d.   p-value = .000; Reject H0 at all levels of α, extremely strong evidence of a significant
relationship between x and y.
e.   t² = (14.44)² = 208.51 (approximately equals F = 208.39)


(t.025)² = (2.571)² = 6.61 = F.05
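The equivalence of the F test and the t test in simple linear regression (F has one numerator degree of freedom, so F = t²) can be verified numerically from the 11.66 quantities:

```python
# F statistic for Exercise 11.66, F = explained / (SSE / (n - 2)),
# and the check that F matches t-squared up to rounding.
explained, sse, n = 59.942, 1.438, 7
f = explained / (sse / (n - 2))        # approx 208.4
t = 14.44                              # t statistic reported for the slope
print(round(f, 1), round(t ** 2, 1))   # t^2 agrees with F up to rounding
```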

11.67   a.   F = 19918.844 / (21.30018 / 9) = 935.149

b.   F.05 = 5.12    df1 = 1, df2 = 9

Since 935.149 > 5.12, reject H0 with strong evidence of a significant relationship
between x and y.
c.   F.01 = 10.56     df1 = 1, df2 = 9

Since 935.149 > 10.56, reject H0 with very strong evidence of a significant relationship
between x and y.
d.   p-value = less than .001; Reject H0 at all levels of α, extremely strong evidence of a
significant relationship between x and y.

e.   t² = (30.58)² = 935.14 (approximately equals F = 935.149)

(t.025)² = (2.262)² = 5.12 = F.05

11.68   a.   F = 106.303

b.   F.05 = 4.20, reject H0 (df1 = 1, df2 = 28). Strong evidence of a significant relationship
between x and y.
c.   F.01 = 7.64, reject H0 (df1 = 1, df2 = 28). Very strong evidence of a significant
relationship between x and y.
d.   p-value = less than .001, reject H0. Extremely strong evidence of a significant
relationship between x and y.

e.   (10.310)² ≈ 106.303 (within rounding error)

(t.025)² = 4.19 = F.05

11.69   a.   F = 13,720.47

b.   Reject H0.

c.   Reject H0.

d.   p-value = .000; reject H0.

e.   (117.13)² ≈ 13,720.47 (within rounding error)


11.70   a.    F = 6550.7 / (896.8 / 8) = 58.43

b.    F.05 = 5.32    df1 = 1, df2 = 8

Since 58.43 > 5.32, reject H0.

c.    F.01 = 11.26    df1 = 1, df2 = 8

Since 58.43 > 11.26, reject H0.

d.    p-value = .000; Reject H0 at all levels of α

e.    t² = (7.64)² = 58.37 (approximately equals F = 58.43)

(t.025)² = (2.306)² = 5.32 = F.05

11.71   a.    F = 5.1817 / (.13435 / 4) = 154.279

b.    F.05 = 7.71    df1 = 1, df2 = 4

Since 154.279 > 7.71, reject H0.

c.    F.01 = 21.2    df1 = 1, df2 = 4

Since 154.279 > 21.2, reject H0.

d.    p-value = .0002; Reject H0 at all levels of α

e.    t² = (12.4209)² = 154.279 (approximately equals F = 154.279)

(t.025)² = (2.776)² = 7.71 = F.05

11.72   They should be plotted against each independent variable and against ŷ. Funneling or curved
patterns indicate violations of the regression assumptions.

11.73   Create a histogram, stem-and-leaf, and normal plot.

11.74   Transforming the dependent variable.

11.75   Approximate horizontal band appearance. No violations indicated

11.76   Possible violations of the normality and constant variance assumptions.

11.77   No.

11.78   a.
For i = 4: (3i − 1)/(3n + 1) = (3(4) − 1)/(3(11) + 1) = 11/34 = .3235
.5000 − .3235 = .1765, so z = −.46

For i = 10: (3(10) − 1)/(3(11) + 1) = 29/34 = .8529
.8529 − .5000 = .3529, so z = 1.05


b.   No
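The normal-plot ordinates in 11.78a can be computed with the standard normal inverse CDF; `statistics.NormalDist` from the Python standard library is used here as a convenient choice:

```python
# Normal-plot points for Exercise 11.78a: area (3i - 1)/(3n + 1), then the
# standard normal point z with that cumulative probability.
from statistics import NormalDist

n = 11
nd = NormalDist()
for i in (4, 10):
    area = (3 * i - 1) / (3 * n + 1)
    z = nd.inv_cdf(area)
    print(i, round(area, 4), round(z, 2))   # i=4: z about -0.46; i=10: z about 1.05
```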

11.79   The residual plot has somewhat of a cyclical appearance. Since d = .473 is less than dL,.05 = 1.27,
we conclude there is positive autocorrelation; and since 4 − d = 4 − .473 = 3.527 is greater than
dU,.05 = 1.45, we conclude that there is not negative autocorrelation.
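The Durbin–Watson statistic used in 11.79 is d = Σ(e_t − e_{t−1})² / Σe_t². A minimal sketch follows; the residuals below are made up purely to illustrate the formula (the exercise's actual residuals are not reproduced here):

```python
# Durbin-Watson statistic: d = sum of squared successive residual
# differences divided by the sum of squared residuals. Values well below 2
# suggest positive autocorrelation; values well above 2 suggest negative.
def durbin_watson(resid):
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e ** 2 for e in resid)
    return num / den

e = [1.2, 0.8, 0.5, -0.3, -0.9, -0.4, 0.2, 1.0]   # hypothetical residuals
print(round(durbin_watson(e), 3))
```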

11.80   The data plot in Figure 11.40b indicates that as x increases, y increases and becomes more
variable. The residual plot in Figure 11.40c fans out as x increases, again indicating that y
becomes more variable as x increases.

11.81   a.   (i) ŷ* = 5.0206

95% P.I. for y* = [4.3402, 5.7010]

(ii) ŷ = e^5.0206 = 151.5022

95% P.I. for y = [e^4.3402, e^5.7010] = [76.7229, 299.1664]

c.   The residual plot is curved, indicating that the straight-line model does not fit the data
appropriately.
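Back-transforming the logged prediction in 11.81a(ii) is just exponentiation of the point prediction and of both prediction-interval endpoints:

```python
# Back-transform a prediction made on the log scale (Exercise 11.81):
# exponentiate the point prediction and the P.I. endpoints.
import math

y_star = 5.0206                    # prediction of y* = ln(y)
lo, hi = 4.3402, 5.7010            # 95% P.I. for y*
print(round(math.exp(y_star), 2))                       # approx 151.50
print(round(math.exp(lo), 2), round(math.exp(hi), 2))   # approx 76.72, 299.17
```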

11.82   a.   Yes

b.   ŷ = 7(25.0069) = 175.048

95% C.I.: [7(21.4335), 7(28.5803)] = [150.0345, 200.0621]
95% P.I.: [7(13.3044), 7(36.7094)] = [93.1308, 256.9658]

Allow 200 minutes.

11.83   a.   Yes; see the plot in part c.

b.   b0 = 306.619, b1 = −27.714

c.   ŷ = 306.619 − 27.714x

[Fitted-line plot: y ranges from 220 to 260, x from 2.0 to 3.2.]


d.   p-value = .000, reject H0, significant

e.   x0 = $2.10; ŷ = 248.420; [244.511, 252.327]
x0 = $2.75; ŷ = 230.405; [226.697, 234.112]
x0 = $3.10; ŷ = 220.705; [216.415, 224.994]
ˆ

11.84   a.   b1 = −6.4424. For every unit increase in width difference, the mean number of accidents
is reduced by 6.4 per 100 million vehicles.

b.   p-value = .000. Reject H0 at all levels of α.

c.   r² = .984. 98.4% of the variation in accidents is explained by the width difference.

11.85   a.   No

b.   Possibly not; Don’t take up smoking

11.86   a.   Using Figure 11.48, there does seem to be a negative relationship between temperature
and o-ring failure.

b.   The temperature of 31 was outside the experimental region.

11.87   Explanations will vary

11.88   a.   Argentina, Turkey, Brazil, and Taiwan

b.   Pakistan and Jordan

11.89   a.   There is a relationship since F = 21.13 with a p-value of .0002.

b.   b1 = 35.2877, [19.2202,51.3553]

11.90   For aggressive stocks a 95% confidence interval for β1* is

[.0163 ± t.025(.003724)] = [.0163 ± 2.365(.003724)] = [.00749, .02512], where t.025 is based
on 7 degrees of freedom.
We are 95% confident that the effect of a one-month increase in the return length time for an
aggressive stock is to increase the mean value of the average estimate of β1 by between
.00749 and .02512.
For defensive stocks a 95% confidence interval for β1* is
[−0.00462 ± 2.365(.00084164)] = [−.00661, −.00263].
For neutral stocks a 95% confidence interval for β1* is
[.0087255 ± 2.365(.001538)] = [.005088, .01236].

11.91   ŷ = 2.0572 + 6.3545(1/5) = 3.3281


Internet Exercise

11.92
[Scatter plot: GMAT vs GPA. GMAT ranges from 600 to 700, GPA from 3.1 to 3.6.]

The regression equation is
GMAT = 184 + 141 GPA

Predictor     Coef SE Coef              T    P
Constant     184.27  84.63            2.18 0.034
GPA          141.08  25.36            5.56 0.000

S = 21.50    R-Sq = 39.2%       R-Sq(adj) = 37.9%

Analysis of Variance

Source             DF          SS       MS         F          P
Regression          1        14316      14316      30.96     0.000
Residual Error     48        22197       462
Total              49        36513

Predicted Values for New Observations

New Obs Fit         SE Fit   95.0% CI         95.0% PI
1   678.06        5.16 ( 667.68, 688.45) ( 633.60, 722.53)

Values of Predictors for New Observations

New Obs      GPA
1           3.50
