          MATH 2560 C F03
          Elementary Statistics I
     LECTURE 13: Design of Experiments.
1    Outline

    ⇒ basic vocabulary of experiments;

    ⇒ comparative experiments;

    ⇒ randomization;

    ⇒ randomized comparative experiments;

    ⇒ how to randomize (Table B or software (Excel));

    ⇒ cautions about experimentation;

    ⇒ matched pairs designs;

    ⇒ block designs;
2    Vocabulary of Experiments
=⇒ The purpose of an experiment: to reveal the response of one variable
(response) to changes in other variables (explanatory).
    A study is an experiment when we actually do something to people,
animals, or objects in order to observe the response.
    Table below gives the basic vocabulary of experiments.

Experimental Units, Subjects, Treatment

1. Experimental Units: the individuals on which the experiment is done;

2. Subjects: the units are human beings;

3. Treatment: a specific experimental condition applied to the units.

   =⇒ Factors: the explanatory variables in an experiment;
   =⇒ Levels: each treatment is formed by combining a specific value of
each of the factors.
   We explain these notions with the following example (see Figure 3.1).




Example 3.3. Does regularly taking aspirin help protect people against heart
attacks? (see Figure 3.1) There are two drugs: aspirin and beta carotene.
The subjects were 21,996 male physicians. There were two factors, (aspirin
and beta carotene) each having two levels: aspirin (yes or no) and beta
carotene (yes or no). Combinations of the levels of these factors form the
four treatments shown in Figure 3.1. On odd-numbered days, subjects took
a white tablet containing either aspirin or a placebo. On even-numbered days,
they took a red capsule containing either beta carotene or a placebo. There
were several response variables: the study looked for heart attacks, several
kinds of cancer, and other medical outcomes.
    Result: after several years, 239 of the placebo group but only 139 of the
aspirin group had suffered heart attacks.
    Evidence: taking aspirin does reduce heart attacks. It did not appear,
however, that beta carotene had any effect.
    Example 3.4. Does studying a foreign language in high school increase
students’ verbal ability in English? Consider the lists of all seniors in a high
school who did and did not study a foreign language. A comparison of their
scores on a standard test of English reading and grammar given to all seniors
shows that the average score of the students who studied a foreign language
is much higher than the average score of those who did not. This observational study
gives no evidence that studying another language builds skill in English.
    ⇒ Corollary: Examples 3.3 and 3.4 illustrate the big advantage of
experiments over observational studies. Experiments can give good evidence for
causation.
    Another advantage of the experiments is that they allow us to study the
specific factors we are interested in, while controlling the effects of lurking
variables.
3    Comparative Experiments
A simple design with only a single treatment (which is applied to all of the
experimental units) can be outlined as:
    =⇒ Treatment −→ Observe Response
    Such simple designs often yield invalid data.
    Comparative experiment: two groups;
    Group 1: Treatment 1 → observe response;
    Group 2: Treatment 2 → observe response.
    Example 3.5. “Gastric freezing.” The idea is that cooling the stomach
will reduce its production of acid and so relieve ulcers.
    Former design of experiment:
    Gastric freezing → Observe pain relief;
    Caution: The patients’ response may have been due to the placebo
effect.
    A later experiment divided ulcer patients into two groups: one group received
gastric freezing, the other a placebo treatment.
    Results: 38% of the 82 patients in the gastric-freezing group improved, but
so did 38% of the 78 patients in the placebo group.
    Corollary: gastric freezing was no better than a placebo.
    Reason: the effect of the explanatory variable was confounded with the
placebo effect.
    =⇒ How to defeat confounding: compare two groups of patients.
    =⇒ Control Group: the group of patients who received a sham treat-
ment. It enables us to control the effects of outside variables on the outcome.
    Control is the first basic principle of statistical design of experiments.
    Comparison of several treatments in the same environment is the simplest
form of control.
    Without control the result is often bias: the design of a study
systematically favors certain outcomes.
4    Randomization
=⇒ How can we assign experimental units to treatments in a way
that is fair to all of the treatments? This is the main question of
randomization.
    =⇒ The use of chance is the answer.
    The statistician’s remedy is to rely on chance to make an assignment
that does not depend on any characteristic of the experimental units and
that does not rely on the judgment of the experimenter in any way.
    =⇒ The use of chance to divide experimental units into groups is called
randomization.
    The design in Figure 3.2 combines comparison and randomization to arrive
at the simplest randomized comparative design.

Example 3.6. Weight gain of 30 white rats fed a new “instant breakfast”
over a 28-day period (see Figure 3.2). There are two groups: 15 + 15.
The diet is a single factor with two levels. A control group (15) receives the
standard diet for comparison. The experimental group (15), chosen randomly,
receives the new diet. Figure 3.2 outlines the design of this experiment.
5    Randomized Comparative Experiments

The logic behind the randomized comparative design in Figure 3.2:

1. Randomization produces groups of rats that should be similar in all respects
before the treatments are applied;

2. Comparative design ensures that influences other than the diets operate
equally on both groups;

3. Differences in average weight gain must be due either to the diets or to the
play of chance in the random assignment of rats to the diets.


Principles of Experimental Design

The basic principles of statistical design of experiments are:
1. Control the effects of lurking variables on the response, most simply
by comparing two or more treatments;

2. Randomize: use impersonal chance to assign experimental units to treatments;

3. Replicate each treatment on many units to reduce chance variation in the results.

We can use the laws of probability, which give a mathematical description of
chance behavior, to learn if the treatment effects are larger than we would
expect to see if only chance were operating. If they are, we call them statis-
tically significant.

Statistical Significance

An observed effect so large that it would rarely occur by chance
is called statistically significant.
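To make this idea concrete, here is a minimal sketch in Python of a permutation test, using invented weight-gain numbers in the spirit of Example 3.6 (the data are made up purely for illustration, not taken from the study): if only chance were operating, randomly relabeling the rats should often produce a difference as large as the one observed.

```python
import random

random.seed(0)  # fixed only so the run is repeatable

# Hypothetical weight gains (grams) for two groups of 15 rats;
# these numbers are invented purely for illustration.
new_diet = [63, 70, 66, 72, 68, 75, 71, 69, 74, 67, 73, 70, 68, 76, 72]
standard = [61, 65, 60, 66, 63, 64, 62, 67, 65, 63, 66, 62, 64, 61, 65]

observed = sum(new_diet) / 15 - sum(standard) / 15

# Permutation test: reshuffle the 30 gains many times and see how
# often a chance split produces a difference at least as large.
pooled = new_diet + standard
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:15]) / 15 - sum(pooled[15:]) / 15
    if diff >= observed:
        count += 1

p_value = count / trials  # small p-value: the observed difference
                          # would rarely occur by chance alone
```

A p-value near zero would mean the observed difference is statistically significant in the sense defined above.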
6     How to Randomize
=⇒ The idea of randomization is to assign subjects to treatments by
drawing names from a hat.
   In practice, experimenters use software to carry out randomization (Excel,
SAS, Minitab, etc.).
   We can also randomize without software by using a table of random digits
(Table B). To make the table easier to read, the digits appear in groups of
five and in numbered rows.

Random Digits

A table of random digits is a list of the digits 0, 1, 2, ..., 8, 9
that has the following properties:

1. The digit in any position in the list has the same chance of being
any one of 0, 1, 2, ..., 8, 9;

2. The digits in different positions are independent in the sense that
the value of one has no influence on the value of any other.


=⇒ Our goal is to use random digits for experimental randomization.

Facts about Random Digits

1. Any pair of random digits has the same chance of being
any of the 100 possible pairs: 00, 01, 02, ..., 98, 99;

2. Any triple of random digits has the same chance of being
any of the 1000 possible triples: 000, 001, 002, ..., 998, 999;

3. ...and so on for groups of four or more random digits.
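In software we can imitate the layout of a table of random digits. The sketch below (Python; the row numbers and group count are arbitrary choices, not a reproduction of Table B) prints five-digit groups in numbered rows, with each digit equally likely to be any of 0–9 and independent of the others.

```python
import random

random.seed(101)  # fixed only so the rows are reproducible


def random_digit_row(n_groups=8):
    """One Table-B-style row: n_groups groups of five random digits."""
    return " ".join(
        "".join(str(random.randrange(10)) for _ in range(5))
        for _ in range(n_groups)
    )


# Print three numbered rows in the style of Table B.
for row_number in range(101, 104):
    print(row_number, random_digit_row())
```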


    Example 3.7. Nutrition experiment of Example 3.6.
    State the problem: We must divide 30 rats at random into groups of 15
rats each.
    Solution.
    Step 1. Label: give each rat a numerical label, using as few digits as possi-
ble. Two digits are needed to label 30 rats, so we use labels 01, 02, 03, ..., 29, 30.
    Step 2. Table B: start anywhere in Table B and read two-digit groups.
We must pick out only two-digit numbers between 01 and 30. If we begin
at line 130, for example, we obtain the following result:
              05, 16, 17, 20, 19, 04, 25, 29, 18, 07, 13, 02, 23, 27, 21.
   These rats form the experimental group. The remaining 15 are the control
group.
   =⇒ Randomization requires two steps:
   1) assign labels to the experimental units, and
   2) use Table B to select labels at random.
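In software, the same two-step procedure (label, then select at random) might look like this sketch in Python, where `random.sample` plays the role of reading Table B:

```python
import random

random.seed(130)  # any seed works; fixed only for reproducibility

# Step 1. Label: give each rat a numerical label 1..30.
rats = list(range(1, 31))

# Step 2. Select: draw 15 labels at random for the experimental group.
experimental = sorted(random.sample(rats, 15))
control = sorted(set(rats) - set(experimental))
```

The 15 selected labels form the experimental group; the remaining 15 form the control group, just as in the Table B solution.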




Example 3.8. Conservation of Energy (see Figure 3.3).
    State the problem: The subjects are 60 single-family residences. Three
groups of 20 houses each. The response variable: total electricity used in a
year. Three treatments: 1) meters, 2) chart, 3) no monitoring.
    Solution. How to carry out the randomization? Label the 60 houses
01, 02, ..., 58, 59, 60. Then enter Table B and read two-digit groups until you
have selected 20 houses to receive the meters. Continue in Table B to select
20 more to receive charts. The remaining 20 form the control group.
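An equivalent software sketch shuffles the 60 labels once and cuts the shuffled order into three groups of 20 (a hypothetical illustration of the same randomization, not the original study's procedure):

```python
import random

random.seed(42)  # fixed only for reproducibility

houses = list(range(1, 61))  # labels 01..60
random.shuffle(houses)       # a random ordering of the labels

meters  = sorted(houses[:20])    # first 20 shuffled labels: meters
charts  = sorted(houses[20:40])  # next 20: charts
control = sorted(houses[40:])    # remaining 20: no monitoring
```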
    Completely randomized experimental design: all experimental units
are allocated at random among all treatments (for instance, Examples 3.2
and 3.3). It can compare any number of treatments.
7    Caution about Experimentation
=⇒ The studies (Examples 3.3 and 3.5) were double-blind: neither the
subjects themselves nor the medical personnel who worked with them knew
which treatment any subject had received.
    =⇒ The most serious potential weakness of experiments is lack of real-
ism.
    Example 3.9. Comparison of two television advertisements by showing
TV programs to student subjects. The students know it’s “just an experiment”.
That’s not a realistic setting.


8    Matched Pairs Designs
=⇒ Matching the subjects in various ways can produce more precise results
than simple randomization.
    Example 3.10. Are cereal leaf beetles more attracted to yellow or to
green? We mount sticky boards to trap insects that land on them. Design
of experiment: compare yellow and green by mounting boards on poles in a
large field of oats. We randomly select half the poles to receive a yellow board
while the remaining poles receive green. Some variation among experimental
units (locations within the field) can hide the systematic effect of the board
color.
    We use a matched pairs design in which we mount boards of both colors
on each pole. The observations (number of beetles trapped) are matched in
pairs on each pole. We compare the number of trapped beetles on a yellow
board with the number trapped by the green board on the same pole. We
select the color of the top board at random (just toss a coin for each board
or read odd and even digits from Table B).
    A matched pairs design compares two treatments. We choose blocks of two
units. We randomize the order of treatments for each subject, by a coin toss,
since the order of the treatments can influence the subject’s response.
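The coin toss for each pole can be sketched as follows (Python; the number of poles and the beetle counts are invented purely for illustration):

```python
import random

random.seed(5)  # any seed works; fixed only so the run is repeatable

# Hypothetical: 10 poles, each carrying one yellow and one green board.
poles = list(range(1, 11))

# Toss a coin for each pole to decide which color goes on top.
top_color = {pole: random.choice(["yellow", "green"]) for pole in poles}

# Invented beetle counts per pole, matched in pairs (yellow, green);
# the numbers are made up purely for illustration.
yellow_counts = [45, 38, 52, 41, 47, 39, 50, 44, 48, 42]
green_counts  = [31, 29, 35, 30, 33, 28, 36, 32, 34, 30]

# Matched pairs analysis: compare within each pole, then average.
differences = [y - g for y, g in zip(yellow_counts, green_counts)]
mean_difference = sum(differences) / len(differences)
```

The analysis works with the within-pole differences, so variation among locations in the field cancels out of the comparison.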
9    Block Designs
The matched pairs design reduces the effect of variation among locations in
the field by comparing the pair of boards at each location.
   Matched pairs are an example of block designs.
   =⇒ Idea of block design: combine the creation of equivalent treatment
groups by matching with the principle of forming treatment groups at random.
    Block Design

    A block is a group of experimental units or subjects that are known
    before the experiment to be similar in some way that is expected to
    affect the response to the treatment.

    In a block design, the random assignment of units to treatments
    is carried out separately within each block.

   Block designs can have blocks of any size.
   Blocks are another form of control: they control the effects of some outside
variables by bringing those variables into the experiment to form the blocks.
   A typical example of block designs is below.

   Example 3.11. The progress of a type of cancer differs in women
and men (see Figure 3.4). A clinical experiment: three therapies, sex as
a blocking variable, and two separate randomizations (one for women and
one for men).
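A sketch of the two separate randomizations (Python; the subject labels, group sizes, and therapy names are all hypothetical):

```python
import random

random.seed(3)  # fixed only for reproducibility


def randomize_within_block(subjects, treatments):
    """Randomly assign a block's subjects to treatments in equal-sized
    groups (assumes the block size is divisible by the number of
    treatments)."""
    order = subjects[:]
    random.shuffle(order)
    k = len(order) // len(treatments)
    return {t: sorted(order[i * k:(i + 1) * k])
            for i, t in enumerate(treatments)}


# Hypothetical subject labels; counts invented for illustration.
women = [f"W{i:02d}" for i in range(1, 19)]  # block of 18 women
men   = [f"M{i:02d}" for i in range(1, 13)]  # block of 12 men
therapies = ["Therapy 1", "Therapy 2", "Therapy 3"]

# The random assignment is carried out separately within each block.
assignment = {
    "women": randomize_within_block(women, therapies),
    "men":   randomize_within_block(men, therapies),
}
```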
10     Summary

1. In an experiment, one or more treatments are imposed on the experi-
mental units or subjects. Each treatment is a combination of levels of the
explanatory variables, which we call factors.

2. The design of an experiment refers to the choice of treatments and the
manner in which the experimental units or subjects are assigned to the treat-
ments. The basic principles of statistical design of experiments are control,
randomization, and replication.

3. The simplest form of control is comparison. Experiments should compare
two or more treatments in order to prevent confounding the effect of a
treatment with other influences, such as lurking variables.

4. Randomization uses chance to assign subjects to the treatments. Ran-
domization creates treatment groups that are similar (except for chance
variation) before the treatments are applied. Randomization and compari-
son together prevent bias, or systematic favoritism, in experiments.

5. You can carry out randomization by giving numerical labels to the ex-
perimental units and using a table of random digits to choose treatment
groups.

6. Replication of the treatments on many units reduces the role of chance
variation and makes the experiment more sensitive to differences among the
treatments.

7. Good experiments require attention to detail as well as good statistical
design. Many behavioural and medical experiments are double-blind. Lack
of realism in an experiment can prevent us from generalizing its results.

8. In addition to comparison, a second form of control is to restrict ran-
domization by forming blocks of experimental units that are similar in some
way that is important to the response. Randomization is then carried out
separately within each block.

9. Matched pairs are a common form of blocking for comparing just two
treatments. In some matched pairs designs, each subject receives both treat-
ments in a random order. In others, the subjects are matched in pairs as
closely as possible, and one subject in each pair receives each treatment.

								