
									This file contains worksheets for each of the major problems we will discuss in ISDS 2710


Discrete       (Counting… How many?)
               Binomial                    (Fill in the basic idea here.)
               Hypergeometric              (Fill in the basic idea here.)
               Poisson                     (Fill in the basic idea here.)

Continuous     (Measuring… How much?)
               Uniform                (Fill in the basic idea here.)
               Exponential            (Fill in the basic idea here.)
               Normal                 (Fill in the basic idea here.)

Sampling
               sample means                (Fill in the basic idea here.)
               sample proportions          (Fill in the basic idea here.)
               confidence intervals        (Fill in the basic idea here.)
                              means
                                         σ known                   (Fill in the basic idea here.)
                                         σ unknown                 (Fill in the basic idea here.)
                              proportions
               Hypothesis testing        (Fill in the basic idea here.)
                                                                                                 BACK TO MAIN
BINOMIAL WORKSHEET

Problem description:     (Put a brief summary of your problem here.)

BASIC IDEA:

Parameters:               (I've just put in some sample #s.)
              n           =#trials                                                  4
              p           = probability of success on each trial                  0.2
              x           = #successes of interest                                  1

Computation
              f(x) = probability of exactly x successes
                               0.4096
              F(x) = probability of no more than x successes
                               0.8192
              E(x) = expected #successes
                          =n*p
                                   0.8                       This is the average #successes I expect over the n trials.
              variance
                          = n*p*(1-p)
                                  0.64
              stdev
                                   0.8
              CV
                                100%                         NOTE: My standard deviation is very large!
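
              For anyone who wants to check these cells outside the spreadsheet, here is a minimal Python sketch of the
              same binomial quantities (variable names follow the worksheet; the script is illustrative, not part of the course file):

                  from math import comb

                  # Worksheet's sample #s: n trials, probability of success p, #successes of interest x
                  n, p, x = 4, 0.2, 1

                  def binom_pmf(k, n, p):
                      # f(k): probability of exactly k successes in n trials
                      return comb(n, k) * p**k * (1 - p)**(n - k)

                  f_x = binom_pmf(x, n, p)                              # f(x)      -> 0.4096
                  F_x = sum(binom_pmf(k, n, p) for k in range(x + 1))   # F(x)      -> 0.8192
                  mean = n * p                                          # E(x)      -> 0.8
                  var = n * p * (1 - p)                                 # variance  -> 0.64
                  sd = var ** 0.5                                       # stdev     -> 0.8
                  print(f_x, F_x, mean, var, sd, sd / mean)             # CV = sd/mean -> 1.0 (100%)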

Discussion
              Once you have calculated your results, I want two statements
              1. A summary of the results
              2. A discussion of how you will use this for the problem at hand

Example#1

BINOMIAL WORKSHEET

Problem description:     Suppose that I've been collecting data at Chick-fil-A
                         I notice that 15% of all customers buy french fries
                         suppose that 8 customers will come in today. (we'll keep it small!)
                         How many fries will we sell?
BASIC IDEA:
              Success= buying french fries; trial = one customer
Parameters:
              n           =#trials                                                  8
              p           = probability of success on each trial                 0.15
              x           = #successes of interest                                  1
Computation
              f(x) = probability of exactly x successes
                           0.384693
              F(x) = probability of no more than x successes
                           0.657183
              E(x) = expected #successes
                          =n*p
                                   1.2
              variance
                          = n*p*(1-p)
                                  1.02
              stdev
                             1.00995
              CV
                           0.841625                     (about 84%)

Let's calculate the probabilities for each possibility
#fries sold Probability
            0       0.272 Make sure you understand the formulas.
            1       0.385 Create a column graph of this data.
            2       0.238
            3       0.084
            4       0.018
            5       0.003
            6       0.000
            7       0.000
            8       0.000

let's also calculate cumulative probabilities:
#fries sold
             0                 0.272 Make sure you understand the formulas.
             1 or less         0.657 Create a column graph of this data.
             2 or less         0.895
             3 or less         0.979
             4 or less         0.997
             5 or less         1.000
             6 or less         1.000
             7 or less         1.000
             8 or less         1.000

Questions:
              1. Why is it generally more useful to know the cumulative probability, rather than the
              probability for a single value?
              2. Why is this even more evident as the number of trials increases?
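
              If you would rather build the two tables above, and the column graphs they ask for, in code instead of
              spreadsheet formulas, a short sketch like the following would do it (matplotlib is assumed to be available;
              the labels are just placeholders):

                  from math import comb
                  import matplotlib.pyplot as plt

                  n, p = 8, 0.15   # Chick-fil-A example: 8 customers, 15% buy fries

                  pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
                  cdf = [sum(pmf[:k + 1]) for k in range(n + 1)]

                  for k in range(n + 1):
                      print(f"{k} fries sold: f = {pmf[k]:.3f}   F = {cdf[k]:.3f}")

                  # Column graph of the individual probabilities
                  plt.bar(range(n + 1), pmf)
                  plt.xlabel("# fries sold")
                  plt.ylabel("probability")
                  plt.show()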

GOING ONLINE
Find an article based on the theme, which gives some suggestion of trials and successes.
Using data from the article, if available, (and possibly scaled down), set up your own Binomial.
Be prepared to share in the discussion.
 BACK TO MAIN




                                                                                                     BACK TO MAIN
HYPERGEOMETRIC WORKSHEET

Problem description: (Put a brief summary of your problem here.)

BASIC IDEA:

Parameters:
          BUCKET_INFO                                                                     Pull
          N         = #items in total                                           8         n           = #items pulled               4
          n1        = #type 1 items                                             5         x1          = #type 1 pulled              3
          n2        = #type 2 items                                             3         x2          = #type 2 pulled              1

Computation
          f(x1,x2) = probability of pulling out x1 type 1 and x2 type 2                   NOTE: It is very easy to have more than 2 types.
                        0.428571                                                          Each type adds an additional combination term
                                                                                          in the numerator.
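
          As a quick check on the cell above, here is a small Python sketch of the same hypergeometric probability,
          written directly from the idea the worksheet describes: a combination term for each type in the numerator,
          and the combinations for the whole pull in the denominator.

              from math import comb

              # Bucket: N items in total, n1 of type 1, n2 of type 2; pull n = x1 + x2 items
              N, n1, n2 = 8, 5, 3
              x1, x2 = 3, 1
              n = x1 + x2

              f = comb(n1, x1) * comb(n2, x2) / comb(N, n)   # -> 0.428571...
              print(f)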
Discussion
             Once you have calculated your results, I want two statements
             1. A summary of the results
             2. A discussion of how you will use this for the problem at hand

Example#1
HYPERGEOMETRIC WORKSHEET

Suppose we want to look at the possible outcomes when we have 12 people in the class
suppose that 8 have previously taken an online class, and 4 have not.
If we pick a random group of 4 people, what is the likelihood that 3 members of the group have been online before?
BASIC IDEA:

Parameters:
          BUCKET_INFO                                                                     Pull
          N        #students in the class                                   12            n          #in the group                 4
          n1       #who have been online previously                          8            x1         #online in grp.               3
          n2       #who have no prior online experience                      4            x2         #not_on in grp.               1

Computation
          f(3,1) = probability of pulling out 3 online students, and 1 with no online experience
                       0.452525

Discussion

             1. A summary of the results
                         The chance of such a group is 45%
             2. A discussion of how you will use this for the problem at hand
                         Having a group made up of a good # of online people is likely.
                         (To think about: How is this relevant for this class?)
GOING ONLINE
         Find an article based on the theme, which gives some suggestion of different types of items
         Using data from the article, if available, (and possibly scaled down), set up your own Hypergeometric
         Be prepared to share in the discussion.
 BACK TO MAIN




POISSON WORKSHEET

Problem description:     Suppose I'd like to know the probability of 24 arrivals in 3 days, when I expect 12/day.

BASIC IDEA:

Parameters:               (I've put in sample values)
          Expected # arrivals in a period of length one day                   12
          µ = scaled expectation, if necessary, for the time period of interest                       36        (= 12/day * 3 days)
          x = #arrivals of interest                                           24


Computation
          f(x)= likelihood of x arrivals in the time period
                        f(24)=       0.008394
          F(x) = likelihood of x or fewer arrivals in the time period              (Again, consider why it is usually more
                        F(24)=       0.022446                                      helpful to know F(x) than f(x).)
          Expected # arrivals    (here shown for a one-day period)
                                             12
          variance
                                             12
          standard deviation
                                     3.464102
          CV
                                           29%                                     NOTE: For the Poisson, we will have a large standard deviation.
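
          A small Python sketch of the Poisson cells above, for anyone checking the numbers outside Excel (µ here is the
          scaled three-day expectation of 36, as in the worksheet; the helper function is illustrative only):

              from math import exp, factorial

              rate_per_day, days = 12, 3
              mu = rate_per_day * days   # scaled expectation for the period of interest (= 36)
              x = 24                     # #arrivals of interest

              def poisson_pmf(k, mu):
                  # f(k): likelihood of exactly k arrivals when mu are expected
                  return exp(-mu) * mu**k / factorial(k)

              f_x = poisson_pmf(x, mu)                              # f(24) -> ~0.0084
              F_x = sum(poisson_pmf(k, mu) for k in range(x + 1))   # F(24) -> ~0.0224
              print(f_x, F_x)   # for any Poisson, E = variance = mu, so stdev = sqrt(mu)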


Discussion
             Once you have calculated your results, I want two statements
             1. A summary of the results
             2. A discussion of how you will use this for the problem at hand

Example#1
POISSON WORKSHEET

Suppose that the University has collected information, and has noticed that on average, 160 new students sign up for online programs
           each semester.
What is the likelihood that no more than 175 new students join in the fall?

BASIC IDEA:

Parameters:
          Expected # new students each semester                              160
          µ = scaled expectation, if necessary, for the time period of interest                      160
          x=#arrivals of interest                                            175
Computation
           f(175)= likelihood of exactly 175
                         f(175)=     0.015238
           F(175) = likelihood of 175 or less
                         F(175)=     0.888642
           NOTE: This also tells me the likelihood of more than 175 new students:
                          = 1-F(175) 0.111358
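
           Checking this large-µ example takes one extra precaution: with µ = 160 the naive formula exp(-µ)·µ^x/x!
           overflows a float, so a sketch like this works in log-space instead (lgamma(k+1) is log(k!)):

               from math import exp, log, lgamma

               mu, x = 160.0, 175   # expected new sign-ups per semester, and the count of interest

               def poisson_pmf(k, mu):
                   # computed in log-space so large mu and k do not overflow
                   return exp(k * log(mu) - mu - lgamma(k + 1))

               F_x = sum(poisson_pmf(k, mu) for k in range(x + 1))
               print(poisson_pmf(x, mu), F_x, 1 - F_x)   # f(175) ~0.015, F(175) ~0.889, P(>175) ~0.111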
Discussion
           Once you have calculated your results, I want two statements
           1. There is an 89% chance that enrollment will be 175 or less
           2. This will help me plan for very large, or very small, classes;
           the chance of a large class, about 11%, is too large to be ignored.
GOING ONLINE
           Find an article based on the theme, which gives some suggestion of "arrivals."
           Using data from the article, if available, (and possibly scaled down), set up your own Poisson
           Be prepared to share in the discussion.
            BACK TO MAIN


                        MORE DETAIL ===>
                                    Let's look at the probabilities for a few values of x:

                                         x             f(x)
                                         0 to 70       0            (in steps of 10; values shown as 0 are below the displayed precision)
                                        80              9.72E-13
                                        90              5.15E-10
                                       100              9.01E-08
                                       110              5.82E-06
                                       120              0.000152
                                       130              0.001729
                                       140              0.009132
                                       150              0.023658
                                       160              0.031523
                                       170              0.022516
                                       180              0.008944
                                       190              0.002041
                                       200              0.000275
                                       210              2.26E-05
                                       220              1.15E-06
                                       230              3.72E-08
                                       240              7.8E-10
                                       250              1.08E-11
                                       260              1E-13
                                       270 to 420       0            (again, below the displayed precision)

1. Create a column graph of the data.
2. What can you say about the shape?
3. How does the mean relate to the graph?
UNIFORM WORKSHEET

Description of problem:
           Suppose we know that the typical homeowner uses at least 100 gallons of water each day, but no more than 200
           What is the likelihood of using between 125 and 175 gallons?
BASIC IDEA:

Parameters:
          a = minimum                     100
          b = maximum                     200
          lowerlimit                      125
          upperlimit                      175

Computation
           f(lowerlimit,upperlimit) = likelihood of an outcome between the lowerlimit and upperlimit
                       f(125,175)=          0.5
           E(outcome)
                        = (a+b)/2          150
           variance
                        = (b-a)^2/12        833.3333
           std. dev
                                    28.86751
           CV
                                          19%              (since the CV is above 10%, our stdev is large)
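
           The uniform cells reduce to a few one-liners; a minimal Python sketch with the worksheet's numbers
           (the print-out is only for checking the cells):

               from math import sqrt

               a, b = 100, 200      # minimum and maximum daily usage (gallons)
               lo, hi = 125, 175    # limits of interest

               prob = (hi - lo) / (b - a)   # f(lower, upper) -> 0.5
               mean = (a + b) / 2           # E(outcome)      -> 150
               var = (b - a) ** 2 / 12      # variance        -> 833.33
               sd = sqrt(var)               # std. dev        -> 28.87
               print(prob, mean, var, sd, sd / mean)   # CV -> ~0.19 (19%)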
Discussion
           Once you have calculated your results, I want two statements
           1. A summary of the results
           2. A discussion of how you will use this for the problem at hand

Example#1
UNIFORM WORKSHEET

Description of problem:
           Suppose that I believe that an online student will spend between 1 and 8 hours per week on class material.
           What is the likelihood of a student spending less than 3 hours on class?
BASIC IDEA:

Parameters:
          a = minimum                        1
          b = maximum                        8
          lowerlimit                         1
          upperlimit                         3

Computation
          f(lowerlimit,upperlimit) = likelihood of an outcome between the lowerlimit and upperlimit
                       f(1,3) =     0.285714
             E(outcome)
                        = (a+b)/2         4.5
             variance
                        = (b-a)^2/12        4.083333
             std. dev
                                    2.020726
             CV
                                         45%             (since the CV is well above 10%, our stdev is large)
Discussion

         1. 28% of students will spend no more than 3 hours on the class, if my estimates are correct.
         2. I might need to encourage more online effort. I should also note that the standard deviation is very large; a lot
            of variability among student participation should be expected.
GOING ONLINE
         Find an article based on the theme, which gives some suggestion of a continuous problem.
         Using data from the article, if available, (and possibly scaled down), set up your own uniform
         Be prepared to share in the discussion.
            BACK TO MAIN
                      MORE DETAIL
                                    We might use the uniform, based on a sample. Suppose that we had surveyed 36 customers.
                                    Their daily usages (gallons) are shown below:
                                                       149       191        107      189       154       176
                                                       167       164        113      164       120       112
                                                       151       126        127      159       144       150
                                                       134       134        160      190       161       139
                                                       135       197        145      162       177       151
                                                       163       187        171      117       149       125

                                    Suppose we create a histogram of the data, using bins at 100, 120, 140, 160, 180, and 200:

                                                  Bin         Frequency
                                                  100                 0
                                                  120                10
                                                  140                 3
                                                  160                 9
                                                  180                 6
                                                  200                 8
                                                  More                0

                                    (Histogram chart: "water usage/day" -- frequency by bin, quantity in gallons.)

                                    Since there doesn't appear to be any real pattern, other than all between 100 and 200, we assume equally
                                    likely outcomes. (Of course, larger samples might tell us more!)
EXPONENTIAL WORKSHEET

Description of problem:
           Suppose that we know that there is an average of 1.5 minutes between arrivals.
           What is the likelihood the next arrival is less than 30 seconds (half a minute) later?
BASIC IDEA:

Parameters:
          µ                                1.5
          x                                0.5

Computation
           F(x) = probability that the next arrival is within x time periods
                       F(.5)         0.283469
           E(outcome)
                        =µ                 1.5
           variance
                         = µ²              2.25
           std. dev
                                           1.5
           CV
                                          100%                (since the CV is 100%, our stdev is large)
                                                              (NOTE: for the exponential, the standard deviation will always be large.)
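
           The exponential cells are just as short in code; a minimal Python sketch using the worksheet's
           parameterization (µ = average time between arrivals):

               from math import exp

               mu = 1.5   # average minutes between arrivals
               x = 0.5    # time window of interest (half a minute)

               F_x = 1 - exp(-x / mu)   # P(next arrival within x) -> ~0.2835
               mean = mu                # E(outcome) -> 1.5
               var = mu ** 2            # variance   -> 2.25
               sd = mu                  # stdev equals the mean, which is why the CV is always 100%
               print(F_x, mean, var, sd)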
Discussion
           Once you have calculated your results, I want two statements
           1. A summary of the results
           2. A discussion of how you will use this for the problem at hand

Example#1
EXPONENTIAL WORKSHEET

Description of problem:
           Suppose that I believe that an online student will send me an email question every 3 hours, on average.
           what is the likelihood of there being less than 1 hour between successive emails?
BASIC IDEA:
           an arrival is an email message.
Parameters:
           µ                                3
           x                                1

Computation
          F(x) = probability that the next arrival is within x time periods
                      F(1)          0.283469
          E(outcome)
                      =µ                    3
             variance
                          = µ²                 9
             std. dev
                                               3
             CV
                                          100%               Note: for the Exponential, the CV is
                                                             always 100%!
Discussion

             1. There is a 28% chance there will be less than an hour between successive emails.
             2. It is likely that many emails will come in frequently, and only rarely will there be a long gap.

GOING ONLINE
         Find an article based on the theme, which gives some suggestion of "arrivals."
         Using data from the article, if available, (and possibly scaled down), set up your own Exponential
         Be prepared to share in the discussion.
            BACK TO MAIN
            MORE DETAIL
                                          Do you remember that example we saw in the demo_numerical file, about time between arrivals?
                                          The average time was 1.5 minutes.

                                          Suppose that we had a larger sample, say 100 values:
                                               1.09   0.59   0.20   4.04   2.13   0.18   0.67   4.72   2.60   2.74
                                               1.97   1.81   0.93   1.94   0.92   1.26   1.12   1.26   0.94   0.62
                                               0.06   0.79   0.54   2.03   2.65   2.18   2.58   1.91   0.17   0.33
                                               0.48   2.51   0.59   0.62   0.67   0.87   3.40   0.46   5.04   1.61
                                               0.66   1.14   1.74   2.61   2.08   3.54   0.26   5.40   0.23   1.27
                                               1.03   2.85   0.43   0.65   2.33   0.21   0.36   0.05   0.56   1.13
                                               0.15   0.82   0.28   0.07   0.02   0.29   0.34   1.51   1.71   1.23
                                               0.17   2.72   3.14   3.45   2.92   2.03   1.82   0.07   1.76   1.23
                                               0.77   2.63   0.13   2.03   2.63   1.97   0.36   0.85   1.43   1.51
                                               8.48   0.06   5.41   0.66   4.03   7.50   0.18   0.29   0.32   0.91

                                          We could calculate the mean:                     1.58
                                          And the standard deviation:                  1.561161

                                          1. Create a histogram of the sample data, using 25-30 bins.

                                          (Histogram chart of the sample: frequency by bin.)
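
                                          If you want to build that histogram programmatically rather than in Excel, a sketch like
                                          the one below would work. To keep it short it draws its own exponential sample (mean 1.5)
                                          with random.expovariate instead of retyping the 100 values from the table, so the exact
                                          bar heights will differ; matplotlib is assumed to be available.

                                              import random
                                              import matplotlib.pyplot as plt

                                              random.seed(1)   # repeatable illustration
                                              sample = [random.expovariate(1 / 1.5) for _ in range(100)]   # mean ~1.5 minutes

                                              print(sum(sample) / len(sample))   # sample mean, should be near 1.5

                                              plt.hist(sample, bins=25)          # 25-30 bins, as suggested above
                                              plt.xlabel("minutes between arrivals")
                                              plt.ylabel("frequency")
                                              plt.show()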
NORMAL WORKSHEET

Description of problem:
           Suppose we know that online students tend to surf the web an average of 9 hours/week,
           with a standard deviation of 5 hours.
           What is the likelihood of spending between 9 and 12 hours on the web?
BASIC IDEA:

Parameters:
            µ                                 9
            σ                                 5
            x                                12
Computation steps
1. graph the points, and shade the area of interest
(shaded in light blue)
2. calculate z
             = (x-µ)/σ =
                      0.6
3. Look up F(z)
            F(.6) =         0.2257

4. Shade in F(z)
(shaded in red)
5. determine the probability of interest
Since the blue shaded area and red shaded areas                                        µ=9, σ=5
match up, the probability of interest is:
                 0.2257

6.Discussion
           Once you have calculated your results, I want two statements
           1. A summary of the results
           2. A discussion of how you will use this for the problem at hand
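
If you want to check the table lookup in code, here is a minimal Python sketch; it uses the error function for the
standard normal CDF, then subtracts 0.5 so the result matches the worksheet's table, which gives the area between 0 and z:

    from math import erf, sqrt

    mu, sigma, x = 9, 5, 12

    z = (x - mu) / sigma                  # -> 0.6
    Phi = 0.5 * (1 + erf(z / sqrt(2)))    # full CDF, P(Z <= z)
    area_0_to_z = Phi - 0.5               # the worksheet's F(z) -> ~0.2257
    print(z, area_0_to_z)                 # P(9 <= X <= 12)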

Example#1
Description of problem:
           Suppose we know that online students tend to surf the web an average of 9 hours/week,
           with a standard deviation of 5 hours.
           Now, suppose we want to know the likelihood that a student surfs more than 12 hours/week.
BASIC IDEA:

Parameters:
            µ                                 9
            σ                                 5
            x                                12
Computation steps
1. graph the points, and shade the area of interest
(shaded in light blue)
(Notice how this area differs from the previous
example.)
2. calculate z
             = (x-µ)/σ =
                      0.6
3. Look up F(z)
            F(.6) =          0.2257

4. Shade in F(z)
(shaded in red)
                                                                                        µ=9, σ=5
5. determine the probability of interest
Since the blue shaded area and red shaded areas
complement, we subtract
             = .5-.2257=    0.2743

6.Discussion
           Once you have calculated your results, I want two statements
           1. A summary of the results
           2. A discussion of how you will use this for the problem at hand
Example#2
Description of problem:
           Suppose we know that online students tend to surf the web an average of 9 hours/week,
           with a standard deviation of 5 hours.
           Now, suppose we want to know the likelihood that a student surfs less than 6 hours/week.
BASIC IDEA:

Parameters:
            µ                                  9
            σ                                  5
            x                                  6
Computation steps
1. graph the points, and shade the area of interest
(shaded in light blue)
(Notice how this area differs from the previous
example.)
2. calculate z
             = (x-µ)/σ =
                      -0.6           (Notice z is negative, but remember, the normal is symmetric around µ.)
3. Look up F(z)
            F(.6) =          0.2257

4. Shade in F(z)
(shaded in red)                                                                x=6       µ=9, σ=5
5. determine the probability of interest
Since the blue shaded area and red shaded areas
complement, we subtract
             = .5-.2257=    0.2743
6.Discussion
           Once you have calculated your results, I want two statements
           1. A summary of the results
           2. A discussion of how you will use this for the problem at hand
Example#3
Description of problem:
           Suppose we know that online students tend to surf the web an average of 9 hours/week,
           with a standard deviation of 5 hours.
           Now, suppose we want to know the likelihood that a student surfs between 6 and 10 hours/week.
                         NOTE: We have two x-values now, so we're going to determine 2 z-values.
BASIC IDEA:

Parameters:
            µ                                 9
            σ                                 5
            x1                                6
            x2                               10
Computation steps
1. graph the points, and shade the area of interest
(shaded in light blue)


2. calculate z1 :
             = (x1-µ)/σ =
                     -0.6
and z2:
             = (x2-µ)/σ =
                      0.2
                                                                              x=6        µ=9,    x=10
                                                                                         σ=5
3. Look up F(z1)
            F(.6) =         0.2257
and F(z2)
            F(.2) =         0.0793

4. Shade in F(z1)
(shaded in red)
and F(z2)
(shaded in green)

5. determine the probability of interest
since the blue-shaded area = (red-shaded area) + (green-shaded area), we add the probabilities
             = .2257 + .0793 =          0.305


6.Discussion
           Once you have calculated your results, I want two statements
          1. A summary of the results
          2. A discussion of how you will use this for the problem at hand
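
A quick Python check of this two-sided case, using the same erf-based idea and reporting areas between 0 and z to
match the worksheet's table:

    from math import erf, sqrt

    def area_0_to_z(z):
        # area between the mean and z standard deviations (what the table reports)
        return abs(0.5 * (1 + erf(z / sqrt(2))) - 0.5)

    mu, sigma = 9, 5
    x1, x2 = 6, 10

    z1 = (x1 - mu) / sigma   # -> -0.6
    z2 = (x2 - mu) / sigma   # ->  0.2
    print(z1, z2, area_0_to_z(z1) + area_0_to_z(z2))   # -> ~0.305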

Example 4. - YOUR TURN
SOMETHING TO THINK ABOUT
           Notice that the Standard normal table only includes z values up to 3.09
           Notice the pattern of the probabilities. Why is it satisfactory to conclude that if z>3.09,
           then F(z)=.5
Description of problem:
           Suppose we know that online students tend to surf the web an average of 9 hours/week,
           with a standard deviation of 5 hours.
           Now, suppose we want to know the likelihood that a student surfs between 11 and 14 hours/week.

BASIC IDEA:

Parameters:
          µ                                 9
          σ                                 5
          x1
          x2
Computation steps




6.Discussion
           Once you have calculated your results, I want two statements
           1. A summary of the results
           2. A discussion of how you will use this for the problem at hand

GOING ONLINE
         Find an article based on the theme, which gives some suggestion of a symmetric continuous variable.
         Using data from the article, if available, (and possibly scaled down), set up your own Normal
         Be prepared to share in the discussion.
          BACK TO MAIN
          STANDARD NORMAL TABLE            (table entries give the area between 0 and z)

     z      0.00     0.01     0.02     0.03     0.04     0.05     0.06     0.07     0.08     0.09
    0.0   0.0000   0.0040   0.0080   0.0120   0.0160   0.0199   0.0239   0.0279   0.0319   0.0359
    0.1   0.0398   0.0438   0.0478   0.0517   0.0557   0.0596   0.0636   0.0675   0.0714   0.0753
    0.2   0.0793   0.0832   0.0871   0.0910   0.0948   0.0987   0.1026   0.1064   0.1103   0.1141
    0.3   0.1179   0.1217   0.1255   0.1293   0.1331   0.1368   0.1406   0.1443   0.1480   0.1517
    0.4   0.1554   0.1591   0.1628   0.1664   0.1700   0.1736   0.1772   0.1808   0.1844   0.1879
    0.5   0.1915   0.1950   0.1985   0.2019   0.2054   0.2088   0.2123   0.2157   0.2190   0.2224
    0.6   0.2257   0.2291   0.2324   0.2357   0.2389   0.2422   0.2454   0.2486   0.2517   0.2549
    0.7   0.2580   0.2611   0.2642   0.2673   0.2704   0.2734   0.2764   0.2794   0.2823   0.2852
    0.8   0.2881   0.2910   0.2939   0.2967   0.2995   0.3023   0.3051   0.3078   0.3106   0.3133
    0.9   0.3159   0.3186   0.3212   0.3238   0.3264   0.3289   0.3315   0.3340   0.3365   0.3389
    1.0   0.3413   0.3438   0.3461   0.3485   0.3508   0.3531   0.3554   0.3577   0.3599   0.3621
    1.1   0.3643   0.3665   0.3686   0.3708   0.3729   0.3749   0.3770   0.3790   0.3810   0.3830
    1.2   0.3849   0.3869   0.3888   0.3907   0.3925   0.3944   0.3962   0.3980   0.3997   0.4015
    1.3   0.4032   0.4049   0.4066   0.4082   0.4099   0.4115   0.4131   0.4147   0.4162   0.4177
    1.4   0.4192   0.4207   0.4222   0.4236   0.4251   0.4265   0.4279   0.4292   0.4306   0.4319
    1.5   0.4332   0.4345   0.4357   0.4370   0.4382   0.4394   0.4406   0.4418   0.4429   0.4441
    1.6   0.4452   0.4463   0.4474   0.4484   0.4495   0.4505   0.4515   0.4525   0.4535   0.4545
    1.7   0.4554   0.4564   0.4573   0.4582   0.4591   0.4599   0.4608   0.4616   0.4625   0.4633
    1.8   0.4641   0.4649   0.4656   0.4664   0.4671   0.4678   0.4686   0.4693   0.4699   0.4706
    1.9   0.4713   0.4719   0.4726   0.4732   0.4738   0.4744   0.4750   0.4756   0.4761   0.4767
    2.0   0.4772   0.4778   0.4783   0.4788   0.4793   0.4798   0.4803   0.4808   0.4812   0.4817
    2.1   0.4821   0.4826   0.4830   0.4834   0.4838   0.4842   0.4846   0.4850   0.4854   0.4857
    2.2   0.4861   0.4864   0.4868   0.4871   0.4875   0.4878   0.4881   0.4884   0.4887   0.4890
    2.3   0.4893   0.4896   0.4898   0.4901   0.4904   0.4906   0.4909   0.4911   0.4913   0.4916
    2.4   0.4918   0.4920   0.4922   0.4925   0.4927   0.4929   0.4931   0.4932   0.4934   0.4936
    2.5   0.4938   0.4940   0.4941   0.4943   0.4945   0.4946   0.4948   0.4949   0.4951   0.4952
    2.6   0.4953   0.4955   0.4956   0.4957   0.4959   0.4960   0.4961   0.4962   0.4963   0.4964
    2.7   0.4965   0.4966   0.4967   0.4968   0.4969   0.4970   0.4971   0.4972   0.4973   0.4974
    2.8   0.4974   0.4975   0.4976   0.4977   0.4977   0.4978   0.4979   0.4979   0.4980   0.4981
    2.9   0.4981   0.4982   0.4982   0.4983   0.4984   0.4984   0.4985   0.4985   0.4986   0.4986
    3.0   0.4987   0.4987   0.4987   0.4988   0.4988   0.4989   0.4989   0.4989   0.4990   0.4990

          BACK TO NORMAL
          BACK TO SAMPLE MEANS
          BACK TO SAMPLE PROPORTIONS
          BACK TO CI_MEANS_σKNOWN
          BACK TO CI_MEANS_σUNKNOWN
          BACK TO CI_PROPORTIONS
          BACK TO HT
SAMPLE MEANS WORKSHEET

Description of problem:
           Suppose we know that a typical student spends an average of 20 hours/week on school work, σ=8 hours.
           Suppose that a survey of 10 online students showed an average of 23 hours/week.
           Is this a significant difference?
BASIC IDEA:

Parameters:
          µ             population mean                   20         n           sample size                10
          σ             population std. dev                8
          xbar          sample mean                       23

Computation
1. Since this is a sampling problem, we first calculate
standard deviation of sample distribution
             σn = σ/sqrt(n)
             2.529822

2. Calculate z:
             = (xbar-µ)/σn
             1.185854

3. Look up F(z) in standard normal table
                  0.383

4. Discuss, using the 40% rule
            1. Since F(z) < .40, we conclude that the
            difference is not significant
            2. Online student work habits are not very different from regular students.
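
           Here is a minimal Python sketch of the same sampling calculation; note that the "40% rule" comparison at
           the end mirrors the worksheet's informal cutoff, not a formal hypothesis test:

               from math import erf, sqrt

               mu, sigma = 20, 8    # population mean and std. dev (hours/week)
               n, xbar = 10, 23     # sample size and sample mean

               sigma_n = sigma / sqrt(n)                            # -> ~2.53
               z = (xbar - mu) / sigma_n                            # -> ~1.19
               area_0_to_z = 0.5 * (1 + erf(z / sqrt(2))) - 0.5     # table value -> ~0.38
               print(sigma_n, z, area_0_to_z,
                     "significant" if area_0_to_z >= 0.40 else "not significant")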

GOING ONLINE
         Find an article based on the theme, which gives some suggestion of a sample
         Using data from the article, if available, (and possibly scaled down), set up your own sample means problem.
         Be prepared to share in the discussion.
           BACK TO MAIN
           GOTO STD NORMAL TABLE


SAMPLE PROPORTIONS WORKSHEET

Description of problem:
           Suppose we know that a typical student is responsible for 25% of his/her tuition, on average.
           Suppose that a survey of 10 online students showed that, on average, they are responsible for 36% of tuition.
           Is this a significant difference?
BASIC IDEA:

Parameters:
          p             population proportion                   0.25
          pbar          sample proportion                       0.36
          n             sample size                               10

Computation
1. Since this is a sampling problem, we first calculate
standard deviation of sample distribution
             σn = sqrt(p*(1-p)/n)
             0.136931

2. Calculate z:
             = (pbar-p)/σn
             0.803326

3. Look up F(z) in standard normal table
                0.2881

4. Discuss, using the 40% rule
            1. Since F(z) < .40, we conclude that the
            difference is not significant
            2. Online students' contributions to tuition payments are not very different from regular students'.
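
         The proportions version follows the same pattern; a minimal sketch with the worksheet's numbers:

             from math import erf, sqrt

             p, pbar, n = 0.25, 0.36, 10   # population proportion, sample proportion, sample size

             sigma_n = sqrt(p * (1 - p) / n)                      # -> ~0.137
             z = (pbar - p) / sigma_n                             # -> ~0.80
             area_0_to_z = 0.5 * (1 + erf(z / sqrt(2))) - 0.5     # table value -> ~0.29
             print(sigma_n, z, area_0_to_z)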

GOING ONLINE
         Find an article based on the theme, which gives some suggestion of a sample
         Using data from the article, if available, (and possibly scaled down), set up your own sample proportions problem
         Be prepared to share in the discussion.
            BACK TO MAIN
            GOTO STD NORMAL TABLE


CONFIDENCE INTERVAL - MEANS σ KNOWN WORKSHEET

Description of problem:
           Suppose that we have surveyed 10 students, and found that the average courseload is 14 hours.
           Suppose we know that σ=4 hours.
           what is the 90% (2-sided) confidence interval?
           (Why 2-sided? In this case, we are concerned on both sides:
                        the true average courseload could be either above or below our sample estimate.)

BASIC IDEA:

Parameters:
             xbar        sample mean                             14
             σ           population std. dev                      4
             n           sample size                             10
             CL          confidence level                       90%
Computation
1. To start, we need to determine a z-value, based on the
confidence level:
             for a 90% 2-sided interval, z = 1.64        (see TYPICAL CONFIDENCE LEVELS below)

2. Calculate the standard deviation of the sampling distribution:
             σn = σ/sqrt(n)
             1.264911

3. Calculate the margin of error:
             = z*σn
             2.074454

4. The confidence interval is xbar ± margin of error:
             14 ± 2.07, i.e. from about 11.93 to 16.07

5. Discuss
            Once you have calculated your results, I want two statements
            1. A summary of the results
            2. A discussion of how you will use this for the problem at hand
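
         A minimal Python sketch of this σ-known interval (the 1.64 comes from the worksheet's table of typical
         confidence levels; a more precise value would be about 1.645):

             from math import sqrt

             xbar, sigma, n = 14, 4, 10
             z = 1.64   # 2-sided z-value for a 90% confidence level (worksheet table)

             margin = z * sigma / sqrt(n)            # -> ~2.07
             print(xbar - margin, xbar + margin)     # -> roughly (11.93, 16.07)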

GOING ONLINE
         Find an article based on the theme, which gives some suggestion of a sample
         Using data from the article, if available, (and possibly scaled down), set up your own confidence interval problem.
         Be prepared to share in the discussion.
            BACK TO MAIN
            GOTO STD NORMAL TABLE

            TYPICAL CONFIDENCE LEVELS
                                            Confidence level     z-value (2-sided)     z-value (1-sided)
                                                  80%                  1.28                  0.84
                                                  90%                  1.64                  1.28
                                                  95%                  1.96                  1.64
                                                  99%                  2.58                  2.33
When σ is unknown, there are a couple of slight changes that must be made to the worksheet.

Copy over the worksheet for CI when σ is known, and make the appropriate changes.
For a CI-proportions problem, there are a couple of slight changes that must be made to the worksheet.

Copy over the worksheet for CI when σ is known, and make the appropriate changes.
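
As a sketch of what those "slight changes" usually amount to in practice (these are standard CI formulas, not taken
from the worksheet): when σ is unknown you use the sample standard deviation s and a t-value with n-1 degrees of
freedom instead of z, and for a proportion you use pbar and sqrt(pbar*(1-pbar)/n). A short Python illustration,
assuming scipy is available for the t and z quantiles and using made-up numbers:

    from math import sqrt
    from scipy import stats   # assumed available; used only for the t and z quantiles

    cl = 0.90   # confidence level

    # CI for a mean, sigma unknown: s and a t-value with n-1 degrees of freedom
    xbar, s, n = 14, 4.2, 10                      # illustrative numbers only
    t = stats.t.ppf(1 - (1 - cl) / 2, n - 1)
    margin = t * s / sqrt(n)
    print(xbar - margin, xbar + margin)

    # CI for a proportion: pbar and sqrt(pbar*(1-pbar)/n)
    pbar, n2 = 0.36, 50                           # illustrative numbers only
    z = stats.norm.ppf(1 - (1 - cl) / 2)
    print(pbar - z * sqrt(pbar * (1 - pbar) / n2),
          pbar + z * sqrt(pbar * (1 - pbar) / n2))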

								