
Journal of Statistical Planning and Inference 106 (2002) 135–157
www.elsevier.com/locate/jspi




Optimality and constructions of incomplete split-block designs

Kazuhiro Ozawa(a,*), Masakazu Jimbo(b), Sanpei Kageyama(c), Stanisław Mejza(d)

a Department of Nursing, Gifu College of Nursing, 3047-1 Egira Hashima, Gifu 501-6295, Japan
b Department of Mathematics, Keio University, Yokohama 223-8522, Japan
c Department of Mathematics, Hiroshima University, Higashi-Hiroshima 739-8524, Japan
d Department of Mathematical and Statistical Methods, Agricultural University of Poznań, Poland

                           Received 18 December 1998; accepted 29 June 2000

                               To the memory of Professor Sumiyasu Yamamoto



Abstract
  A sufficient condition for an incomplete split-block design to be universally optimal is given.
Optimal properties are examined under two linear models, i.e., with interaction effects and without
these effects. Furthermore, some methods of constructing universally optimal incomplete
split-block designs are presented. © 2002 Elsevier Science B.V. All rights reserved.

MSC: 05B05; 62K10

Keywords: Split-block design; Universally optimum; Balanced incomplete block design; Balanced incomplete
split-block design




1. Introduction

  Many two-factorial experiments are laid out in designs with split-units. This
type of experiment is especially useful in agricultural (field) experiments. A split-plot
design and a split-block design are two types of designs with split-units that are of
great interest from a theoretical point of view as well as in practical applications. In many
practical situations the experimental material is limited, and then only some kind of
  * Corresponding author.
    E-mail addresses: ozawa@gifu-cn.ac.jp (K. Ozawa), jimbo@math.keio.ac.jp (M. Jimbo), ksanpei@sed.hiroshima-u.ac.jp (S. Kageyama), smejza@owl.au.poznan.pl (S. Mejza).

0378-3758/02/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved.
PII: S0378-3758(02)00209-4




                                                Fig. 1. Experiment in blocks.



incomplete (non-orthogonal) designs with split-units can be applied. Incomplete split-plot
designs have been discussed in the literature, e.g., Bhargava and Shah (1975),
Mejza and Mejza (1996), Rees (1969) and Robinson (1970). On the other hand, some
problems connected with planning and analysis of experiments carried out in an incomplete
split-block design were considered by Hering and Mejza (1997). In a split-block
design, we consider three types of randomizations, that is, within rows, within columns
and within blocks. By combining the three error terms into one, we take into account linear
models with correlations between units.
   In this paper, optimal properties of the incomplete split-block design are examined
under two linear models of observations, i.e., with or without interaction effects. Moreover,
some constructions that lead to universally optimum designs are given.
   Let us consider a two-factorial experiment in which the first factor A occurs at v1
levels (treatments) A1, ..., Av1, and the second factor C occurs at v2 levels (treatments)
C1, ..., Cv2.
   As shown in Fig. 1, the split-block design is a set of b blocks, each of which is a k1 × k2 array.
Assume that the treatments of the factor A occur on the rows and the treatments of
the factor C occur on the columns. Let A_{i(t,r)} denote a treatment Ai of the factor A
applied to the rth row of a block Bt, while C_{j(t,c)} denotes a treatment Cj of the factor
C applied to the cth column of a block Bt. Furthermore, the (t,r,c)-plot means the
(r,c)th plot in a block Bt. Then Bt is represented by

      Bt = {A_{i(t,1)}, ..., A_{i(t,k1)} | C_{j(t,1)}, ..., C_{j(t,k2)}}

(see Fig. 2). For convenience, this is simply denoted by

      Bt = {i(t,1), ..., i(t,k1) | j(t,1), ..., j(t,k2)}.

   Let V1 = {1, ..., v1} be the set of indices of the v1 treatments on A, V2 = {1, ..., v2} be
another set of indices of the v2 treatments on C, and B = {B1, ..., Bb} be a collection of
(k1 + k2)-subsets, called superblocks, of V1 ∪ V2. If each superblock is divided into a
k1-subset of V1 and a k2-subset of V2, then the binary design (V1, V2, B) is called an
incomplete split-block design.
   In this paper, the following two linear models for an observation y_trc of the (t,r,c)-plot
will be considered:
   (Model I)  y_trc = α_{i(t,r)} + γ_{j(t,c)} + β_t + ε_trc,
   (Model II) y_trc = α_{i(t,r)} + γ_{j(t,c)} + (αγ)_{i(t,r)j(t,c)} + β_t + ε_trc,




                                              Fig. 2. Treatments in block Bt .



where ε_trc denotes the error of the (t,r,c)-plot, E(ε_trc) = 0 and

   Cov(ε_trc, ε_t′r′c′) = σ² × { 1    if t = t′, r = r′ and c = c′;
                                 ρ1   if t = t′, r = r′ and c ≠ c′;
                                 ρ2   if t = t′, r ≠ r′ and c = c′;                    (1)
                                 ρ3   if t = t′, r ≠ r′ and c ≠ c′;
                                 0    otherwise. }
Furthermore, αi is a main effect of Ai, γj is a main effect of Cj, (αγ)ij is an interaction
effect between Ai and Cj, and βt is a block effect of Bt. Without loss of generality, it
can be assumed that

      Σ_{i=1}^{v1} αi = 0,    Σ_{j=1}^{v2} γj = 0,

      Σ_{i=1}^{v1} (αγ)ij = 0,    Σ_{j=1}^{v2} (αγ)ij = 0.                              (2)

A lexicographical order of observations is used, i.e., the (t,r,c)-plot has the number
k1k2(t − 1) + k2(r − 1) + c and the interaction effect (αγ)ij has the number (i − 1)v2 + j.
   Four matrices X1 = (x1(t,r,c; i)), X2 = (x2(t,r,c; j)), X12 = (x12(t,r,c; i,j)) and
X3 = (x3(t,r,c; t′)) are defined as follows:

      x1(t,r,c; i)    = 1 if the treatment Ai is applied to the (t,r,c)-plot, and 0 otherwise;
      x2(t,r,c; j)    = 1 if the treatment Cj is applied to the (t,r,c)-plot, and 0 otherwise;
      x12(t,r,c; i,j) = 1 if the treatments Ai and Cj are applied to the (t,r,c)-plot, and 0 otherwise;     (3)
      x3(t,r,c; t′)   = 1 if t = t′, and 0 otherwise.


Here X1, X2, X12 and X3 are of sizes bk1k2 × v1, bk1k2 × v2, bk1k2 × v1v2 and bk1k2 × b,
respectively.
   Models I and II can be rewritten as follows:
   (Model I)  y = X1α + X2γ + X3β + ε,  E(ε) = 0,  Cov(ε) = Σ,
   (Model II) y = X1α + X2γ + X12(αγ) + X3β + ε,  E(ε) = 0,  Cov(ε) = Σ,
where y = (y1, ..., y_{bk1k2})′, α = (α1, ..., α_{v1})′, γ = (γ1, ..., γ_{v2})′, (αγ) = ((αγ)11, ...,
(αγ)_{v1v2})′, β = (β1, ..., βb)′, ε = (ε1, ..., ε_{bk1k2})′ (x′ denotes the transpose of x), and Σ
is the covariance matrix of ε given by (1). Throughout this paper, Σ is assumed to be
positive definite.

Example 1.1. Suppose that there are three blocks of size 2 × 3, the first factor A
has three treatments A1, A2, A3, and the second factor C has four treatments C1, C2,
C3, C4, with superblocks

      B1 = {A1, A2 | C1, C2, C3},   B2 = {A1, A3 | C1, C3, C4},   B3 = {A2, A3 | C2, C3, C4}.

   For this design, the matrices X1, X2, X3 and X12 are given as follows (the rows are
the 18 plots in lexicographical order):

           X1             X2               X3
        [1 0 0]       [1 0 0 0]        [1 0 0]
        [1 0 0]       [0 1 0 0]        [1 0 0]
        [1 0 0]       [0 0 1 0]        [1 0 0]
        [0 1 0]       [1 0 0 0]        [1 0 0]
        [0 1 0]       [0 1 0 0]        [1 0 0]
        [0 1 0]       [0 0 1 0]        [1 0 0]
        [1 0 0]       [1 0 0 0]        [0 1 0]
        [1 0 0]       [0 0 1 0]        [0 1 0]
        [1 0 0]       [0 0 0 1]        [0 1 0]
        [0 0 1]       [1 0 0 0]        [0 1 0]
        [0 0 1]       [0 0 1 0]        [0 1 0]
        [0 0 1]       [0 0 0 1]        [0 1 0]
        [0 1 0]       [0 1 0 0]        [0 0 1]
        [0 1 0]       [0 0 1 0]        [0 0 1]
        [0 1 0]       [0 0 0 1]        [0 0 1]
        [0 0 1]       [0 1 0 0]        [0 0 1]
        [0 0 1]       [0 0 1 0]        [0 0 1]
        [0 0 1]       [0 0 0 1]        [0 0 1]

and

              X12
        [1 0 0 0 0 0 0 0 0 0 0 0]
        [0 1 0 0 0 0 0 0 0 0 0 0]
        [0 0 1 0 0 0 0 0 0 0 0 0]
        [0 0 0 0 1 0 0 0 0 0 0 0]
        [0 0 0 0 0 1 0 0 0 0 0 0]
        [0 0 0 0 0 0 1 0 0 0 0 0]
        [1 0 0 0 0 0 0 0 0 0 0 0]
        [0 0 1 0 0 0 0 0 0 0 0 0]
        [0 0 0 1 0 0 0 0 0 0 0 0]
        [0 0 0 0 0 0 0 0 1 0 0 0]
        [0 0 0 0 0 0 0 0 0 0 1 0]
        [0 0 0 0 0 0 0 0 0 0 0 1]
        [0 0 0 0 0 1 0 0 0 0 0 0]
        [0 0 0 0 0 0 1 0 0 0 0 0]
        [0 0 0 0 0 0 0 1 0 0 0 0]
        [0 0 0 0 0 0 0 0 0 1 0 0]
        [0 0 0 0 0 0 0 0 0 0 1 0]
        [0 0 0 0 0 0 0 0 0 0 0 1]
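The design matrices of Example 1.1 can also be generated mechanically from the superblocks. The following is a minimal sketch (NumPy); the block lists are the ones displayed above, and all variable names are ours, not the paper's.

```python
# A minimal sketch that rebuilds the design matrices of Example 1.1
# from the three superblocks listed above.
import numpy as np

v1, v2, k1, k2 = 3, 4, 2, 3
# superblock t: (row treatments of A, column treatments of C), 1-based labels
blocks = [([1, 2], [1, 2, 3]),
          ([1, 3], [1, 3, 4]),
          ([2, 3], [2, 3, 4])]
b = len(blocks)

X1 = np.zeros((b * k1 * k2, v1), dtype=int)
X2 = np.zeros((b * k1 * k2, v2), dtype=int)
X3 = np.zeros((b * k1 * k2, b), dtype=int)
X12 = np.zeros((b * k1 * k2, v1 * v2), dtype=int)

for t, (rows, cols) in enumerate(blocks):
    for r, i in enumerate(rows):
        for c, j in enumerate(cols):
            p = k1 * k2 * t + k2 * r + c        # lexicographical plot number (0-based)
            X1[p, i - 1] = 1
            X2[p, j - 1] = 1
            X3[p, t] = 1
            X12[p, (i - 1) * v2 + (j - 1)] = 1  # interaction (i, j) -> (i - 1)v2 + j

# every plot carries exactly one A-treatment, one C-treatment and one block label
assert (X1.sum(axis=1) == 1).all() and (X2.sum(axis=1) == 1).all()
assert (X3.sum(axis=1) == 1).all() and (X12.sum(axis=1) == 1).all()
print(X1.shape, X2.shape, X3.shape, X12.shape)   # (18, 3) (18, 4) (18, 3) (18, 12)
```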


2. Estimation of main effects under Model I without interaction effects

   We consider some combinatorial properties of optimal designs for estimating main
effects under Model I (without interaction effects).
   For the design matrices X1, X2 and X3, let

      X = (X1 : X2 : X3)

and let

      θ = (α′, γ′, β′)′,

where α and γ are the main effects and β is the vector of block effects. Then Model I can be
represented by

      y = Xθ + ε.

The generalized least squares estimator (GLSE) θ̂ of θ is given by a solution of the
normal equation

      X′Σ⁻¹Xθ̂ = X′Σ⁻¹y,


that is, the GLSEs of α, γ and β are obtained as solutions of the following equations:

      X1′Σ⁻¹X1 α̂ + X1′Σ⁻¹X2 γ̂ + X1′Σ⁻¹X3 β̂ = X1′Σ⁻¹y,
      X2′Σ⁻¹X1 α̂ + X2′Σ⁻¹X2 γ̂ + X2′Σ⁻¹X3 β̂ = X2′Σ⁻¹y,                              (4)
      X3′Σ⁻¹X1 α̂ + X3′Σ⁻¹X2 γ̂ + X3′Σ⁻¹X3 β̂ = X3′Σ⁻¹y.

Here, Σ is expressed as

      Σ = σ²Ω = σ²{(1 − ρ1 − ρ2 + ρ3)I_{bk1k2} + (ρ1 − ρ3)R + (ρ2 − ρ3)C + ρ3RC},

where Iω is the identity matrix of order ω, R = Ib ⊗ Ik1 ⊗ Jk2 and C = Ib ⊗ Jk1 ⊗ Ik2.
Here ⊗ denotes the Kronecker product and Jω denotes an ω × ω matrix with all its
elements one. The two matrices Iω and Jω are simply denoted by I and J in most places.
For any 1 ≤ t ≤ b, 2 ≤ i ≤ k1 and 2 ≤ j ≤ k2, it follows that

      e_b^(t) ⊗ e_{k1}^(1) ⊗ (e_{k2}^(1) − e_{k2}^(j)) + e_b^(t) ⊗ e_{k1}^(i) ⊗ (−e_{k2}^(1) + e_{k2}^(j)),

      e_b^(t) ⊗ (−e_{k1}^(1) + e_{k1}^(i)) ⊗ 1_{k2},     e_b^(t) ⊗ 1_{k1} ⊗ (−e_{k2}^(1) + e_{k2}^(j)),     e_b^(t) ⊗ 1_{k1} ⊗ 1_{k2}

are eigenvectors of Ω corresponding to the eigenvalues

      η0 = 1 − ρ1 − ρ2 + ρ3 > 0,

      η1 = 1 − ρ2 + (k2 − 1)(ρ1 − ρ3) > 0,                                            (5)

      η2 = 1 − ρ1 + (k1 − 1)(ρ2 − ρ3) > 0,                                            (6)

      η3 = 1 + (k2 − 1)ρ1 + (k1 − 1)ρ2 + (k1 − 1)(k2 − 1)ρ3 > 0,

respectively, where 1m is an m-dimensional all-one column vector and e_n^(i) is an n-dimensional
column vector whose ith element equals 1 while the others are all 0. Recall that Σ
is assumed to be positive definite.
   Some calculation shows that

      Σ⁻¹ = (1/σ²)Ω⁻¹ = (1/σ²){ξ0I + ξ1R + ξ2C + ξ3RC},

where

      ξ0 = 1/η0,    ξ1 = −ξ0(ρ1 − ρ3)/η1,    ξ2 = −ξ0(ρ2 − ρ3)/η2,

      ξ3 = −[ξ1{ρ2 + (k2 − 1)ρ3} + ξ2{ρ1 + (k1 − 1)ρ3} + ξ0ρ3]/η3.


Then,

      η1 = 1/(ξ0 + k2ξ1),    η2 = 1/(ξ0 + k1ξ2).
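For concreteness, the eigenvalues η0, ..., η3 and the stated form of Ω⁻¹ can be verified numerically. The following is a small sketch; the values of b, k1, k2 and ρ1, ρ2, ρ3 below are arbitrary illustrative choices, not numbers from the paper.

```python
# Numerical check of the eigen-structure of Omega and of the coefficients xi_0..xi_3.
import numpy as np

b, k1, k2 = 3, 2, 3
rho1, rho2, rho3 = 0.3, 0.2, 0.1        # illustrative correlations

I_b, I1, I2 = np.eye(b), np.eye(k1), np.eye(k2)
J1, J2 = np.ones((k1, k1)), np.ones((k2, k2))
R = np.kron(I_b, np.kron(I1, J2))
C = np.kron(I_b, np.kron(J1, I2))
RC = R @ C
n = b * k1 * k2

Omega = (1 - rho1 - rho2 + rho3) * np.eye(n) + (rho1 - rho3) * R \
        + (rho2 - rho3) * C + rho3 * RC

eta0 = 1 - rho1 - rho2 + rho3
eta1 = 1 - rho2 + (k2 - 1) * (rho1 - rho3)
eta2 = 1 - rho1 + (k1 - 1) * (rho2 - rho3)
eta3 = 1 + (k2 - 1) * rho1 + (k1 - 1) * rho2 + (k1 - 1) * (k2 - 1) * rho3

# every eigenvalue of Omega is one of eta_0..eta_3
for ev in np.linalg.eigvalsh(Omega):
    assert min(abs(ev - e) for e in (eta0, eta1, eta2, eta3)) < 1e-9

xi0 = 1 / eta0
xi1 = -xi0 * (rho1 - rho3) / eta1
xi2 = -xi0 * (rho2 - rho3) / eta2
xi3 = -(xi1 * (rho2 + (k2 - 1) * rho3) + xi2 * (rho1 + (k1 - 1) * rho3) + xi0 * rho3) / eta3
Omega_inv = xi0 * np.eye(n) + xi1 * R + xi2 * C + xi3 * RC
assert np.allclose(Omega_inv, np.linalg.inv(Omega))
print("eta:", eta0, eta1, eta2, eta3)
```

For the values above the four distinct eigenvalues are 0.6, 1.2, 0.8 and 2.0, and ξ0 + k2ξ1 + k1ξ2 + k1k2ξ3 equals 1/η3 = 0.5.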

Furthermore, it is easy to check that

      RC = CR = X3X3′ = Ib ⊗ Jk1 ⊗ Jk2,                                               (7)

      RX1 = k2X1,                                                                     (8)

      CX2 = k1X2,                                                                     (9)

      RX3 = k2X3,

      CX3 = k1X3,

      Ω⁻¹RC = ΔRC,                                                                   (10)

      X3′X3 = k1k2I

hold, where Δ = ξ0 + k2ξ1 + k1ξ2 + k1k2ξ3. Therefore, it holds that

      X1′Ω⁻¹X2 = ΔX1′X2 (= ΔK, K = (kij), say),                                      (11)

      X1′Ω⁻¹X3 = ΔX1′X3 (= Δk2M, M = (mit), say),

      X2′Ω⁻¹X3 = ΔX2′X3 (= Δk1N, N = (njt), say),

      X3′Ω⁻¹X3 = Δk1k2I,

where kij is the number of blocks containing both Ai and Cj, mit is the number of
rows corresponding to Ai in a block Bt, and njt is the number of columns corresponding
to Cj in a block Bt. Thus, after cancelling the common factor 1/σ², (4) is expressed as follows:
      
      X1′Ω⁻¹X1 α̂ + ΔK γ̂ + Δk2M β̂ = X1′Ω⁻¹y,
      ΔK′ α̂ + X2′Ω⁻¹X2 γ̂ + Δk1N β̂ = X2′Ω⁻¹y,                                       (12)
      Δk2M′ α̂ + Δk1N′ γ̂ + Δk1k2 β̂ = X3′Ω⁻¹y.

Hence β̂ is given by

      β̂ = (1/(Δk1k2)){X3′Ω⁻¹y − Δk2M′α̂ − Δk1N′γ̂}.

Then it follows from the first equation of (12) that

      (X1′Ω⁻¹X1 − (Δk2/k1)MM′) α̂ + Δ(K − MN′) γ̂ = (X1′Ω⁻¹ − (1/k1)MX3′Ω⁻¹)y.        (13)


Here, by (7)–(9), (10) and (11), the coefficient matrix of γ̂ in (13) is further
reduced to

      Δ(K − MN′) = X1′Ω⁻¹X2 − (1/(Δk1k2))X1′Ω⁻¹X3X3′Ω⁻¹X2
                 = ΔX1′X2 − (Δ/(k1k2))X1′RCX2
                 = ΔX1′X2 − ΔX1′X2
                 = O.                                                                 (14)

Therefore the normal equation (13) for α̂ is written as follows:

      (X1′Ω⁻¹X1 − (Δk2/k1)MM′) α̂ = (X1′Ω⁻¹ − (1/k1)MX3′Ω⁻¹)y.                       (15)

Now, for a design matrix X = (X1 : X2 : X3) of a split-block design, the C-matrix is
defined by

      Cα(X) = X1′Ω⁻¹X1 − (Δk2/k1)MM′.

It is easy to see that Cα(X)1_{v1} = 0 and rank(Cα(X)) ≤ v1 − 1. Now assume that
rank(Cα(X)) = v1 − 1. Let Cα⁻(X) be a g-inverse of the C-matrix Cα(X) such that
Cα⁻(X)Cα(X)Cα⁻(X) = Cα⁻(X). By (15),

      α̂ = Cα⁻(X)(X1′Ω⁻¹ − (1/k1)MX3′Ω⁻¹)y.

Moreover, it is obvious that Cov(α̂) = σ²Cα⁻(X) holds. Similarly, assuming that
rank(Cγ(X)) = v2 − 1, γ̂ is given by

      γ̂ = Cγ⁻(X)(X2′Ω⁻¹ − (1/k2)NX3′Ω⁻¹)y,

where

      Cγ(X) = X2′Ω⁻¹X2 − (Δk1/k2)NN′

and Cγ⁻(X) is a g-inverse of the C-matrix Cγ(X) such that Cγ⁻(X)Cγ(X)Cγ⁻(X) =
Cγ⁻(X). Then Cov(γ̂) = σ²Cγ⁻(X).
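Continuing the sketches above (and reusing X1, X3, Omega_inv and the ξ's computed there, with b = 3, k1 = 2, k2 = 3 as in Example 1.1), the C-matrix Cα(X) can be formed directly. This is only an illustration of the definition, not code from the paper.

```python
# Sketch: form C_alpha(X) for the design of Example 1.1 and check its basic properties.
Delta = xi0 + k2 * xi1 + k1 * xi2 + k1 * k2 * xi3
M = X1.T @ X3 / k2                                   # incidence matrix of (V1, B1)
C_alpha = X1.T @ Omega_inv @ X1 - (Delta * k2 / k1) * (M @ M.T)

assert np.allclose(C_alpha @ np.ones(v1), 0)         # C_alpha 1 = 0
print(np.linalg.matrix_rank(C_alpha))                # v1 - 1 = 2 here
```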

2.1. Criteria of optimality

   Denote by Dα the set of design matrices X for which all elementary contrasts of the
main effects α are estimable by GLSE, that is, rank(Cα(X)) = v1 − 1, and let Mα be the
set of Cα(X) for X ∈ Dα. Let M̄α be the convex hull of Mα and Φα be the set of
functions φ on M̄α such that

  (i)  φ is convex,
 (ii)  φ(C) ≥ φ(gC) for C ∈ M̄α and for any g ≥ 1, and
(iii)  for any permutation matrix P and C ∈ M̄α, φ(P′CP) = φ(C) holds.

The design matrix X* is said to be universally optimum relative to Dα if

      φ(Cα(X*)) = min_{X∈Dα} φ(Cα(X))

holds for all functions φ ∈ Φα satisfying (i)–(iii).
   It is known that the universal optimality criterion includes the A-, D- and
E-optimality criteria as special cases.

Proposition 2.1 (Kiefer, 1975). The design matrix X* ∈ Dα is universally optimum
relative to Dα if Cα(X*) is completely symmetric and tr(Cα(X*)) = max_{X∈Dα} tr(Cα(X)),
where tr(C) is the trace of C.
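Proposition 2.1 can be checked mechanically for a given design: test whether the C-matrix has the completely symmetric form aI + cJ and evaluate its trace. Below is a sketch for the Cα computed above; the helper is ours, and the maximization over the whole class Dα is of course not performed here.

```python
# Illustrative check of the two ingredients of Proposition 2.1 for C_alpha above.
def is_completely_symmetric(Cmat, tol=1e-10):
    m = Cmat.shape[0]
    a = Cmat[0, 0] - Cmat[0, 1]
    c = Cmat[0, 1]
    return np.allclose(Cmat, a * np.eye(m) + c * np.ones((m, m)), atol=tol)

print(is_completely_symmetric(C_alpha))              # True: (V1, B1) is a BIBD(3, 2, 1)
print(np.trace(C_alpha), b * k2 * (k1 - 1) / eta1)   # both equal b k2 (k1 - 1)/eta1
```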

2.2. Universally optimum designs for GLSE under Model I

   For an incomplete split-block design (V1, V2, B), let

      B1 = {Bt ∩ V1 | Bt ∈ B}   and   B2 = {Bt ∩ V2 | Bt ∈ B}.

For X = (X1 : X2 : X3) of the incomplete split-block design (V1, V2, B), the incidence
matrix M of the binary design (V1, B1) is given by

      M = (1/k2)X1′X3.

If MM′ (= k2⁻²X1′X3X3′X1) = (r1 − λ1)I + λ1J holds, where r1 is the number of replications
of each treatment in V1, then the binary design (V1, B1) is called a balanced incomplete
block design, denoted by BIBD(v1, k1, λ1), and then X1′X1 = k2r1I. Now assume that
(V1, B1) is a BIBD(v1, k1, λ1) and let X* = (X1* : X2* : X3*) be a design matrix of the
incomplete split-block design (V1, V2, B). By (8), it holds that

      X1*′Ω⁻¹X1* = X1*′(ξ0I + ξ1R + ξ2C + ξ3RC)X1*
                 = (k2r1/η1)I + (ξ2 + ξ3k2)X1*′CX1*

and

      MM′ = (1/k2²)X1*′X3*X3*′X1* = (1/k2)X1*′CX1*.

Therefore, by X1*′CX1* = k2{(r1 − λ1)I + λ1J} and by the relation λ1(v1 − 1) = r1(k1 − 1)
of a BIBD, Cα(X*) can be rewritten as

      Cα(X*) = (k2r1/η1)I + (ξ2 + ξ3k2)X1*′CX1* − (Δ/k1)X1*′CX1*
             = (k2r1/η1)I + k2{ξ2 + ξ3k2 − Δ/k1}{(r1 − λ1)I + λ1J}
             = (k2v1λ1/(k1η1)){I − (1/v1)J},


which implies that Cα(X*) is completely symmetric. On the other hand, for any design matrix
X = (X1 : X2 : X3), it is easily checked that

      tr(Cα(X)) = bk2(k1 − 1)/η1 = constant,

which has the same value as tr(Cα(X*)) = k2λ1v1(v1 − 1)/(k1η1), since λ1v1(v1 − 1) =
bk1(k1 − 1) holds. Thus, by Proposition 2.1, X* is universally optimum relative to Dα.
   The result with respect to γ can be obtained similarly. Thus Proposition 2.1 has
established the following:

Theorem 2.1. (i) Suppose Dα = {X | rank(Cα(X)) = v1 − 1} and (V1, V2, B) is an incomplete
split-block design with a design matrix X* ∈ Dα. If (V1, B1) is a BIBD(v1, k1, λ1),
then X* is universally optimum relative to Dα.
   (ii) Suppose Dγ = {X | rank(Cγ(X)) = v2 − 1} and (V1, V2, B) is an incomplete
split-block design with a design matrix X* ∈ Dγ. If (V2, B2) is a BIBD(v2, k2, λ2),
then X* is universally optimum relative to Dγ.
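Whether Theorem 2.1 applies to a given incomplete split-block design can be checked by testing the BIBD property of (V1, B1) and (V2, B2). A small sketch follows; the helper bibd_index is ours, not from the paper, and it is applied to the superblocks of Example 1.1 (reusing blocks, v1, v2 from the earlier code).

```python
# Test whether a list of blocks forms a BIBD, i.e. whether N N' = (r - lam) I + lam J.
def bibd_index(v, block_list):
    N = np.zeros((v, len(block_list)), dtype=int)
    for t, blk in enumerate(block_list):
        for x in blk:
            N[x - 1, t] = 1
    G = N @ N.T
    r, lam = G[0, 0], G[0, 1]
    ok = np.array_equal(G, (r - lam) * np.eye(v, dtype=int) + lam * np.ones((v, v), dtype=int))
    return lam if ok else None

B1 = [rows for rows, _ in blocks]      # {1,2}, {1,3}, {2,3}
B2 = [cols for _, cols in blocks]      # {1,2,3}, {1,3,4}, {2,3,4}
print(bibd_index(v1, B1), bibd_index(v2, B2))   # 1  None
```

Here (V1, B1) is a BIBD(3, 2, 1), so Theorem 2.1(i) applies to the design of Example 1.1, while (V2, B2) is not a BIBD, so part (ii) gives no conclusion for it.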

   In the considerations above the BIB structure of the association matrix MM′ (NN′)
was used. It is also possible to consider a wider class of block designs satisfying the
property that MM′ (or NN′) = (r − λ)I + λJ. As an example of such a class let us
consider a completely (orthogonal) randomized block design, i.e., one in which r = λ. Then
Theorem 2.1 immediately yields the following:

Corollary 2.1. (i) Suppose (V1, V2, B) is an incomplete split-block design with a
design matrix X* ∈ Dα. If (V1, B1) is a completely randomized block design for row
treatments, then X* is universally optimum relative to Dα.
   (ii) Suppose (V1, V2, B) is an incomplete split-block design with a design matrix
X* ∈ Dγ. If (V2, B2) is a completely randomized block design for column treatments,
then X* is universally optimum relative to Dγ.

   The above corollary is rather obvious, but it is worth stating for completeness,
since such designs are often used in practical settings (see Hering and Mejza,
1997).

3. Estimation of interaction effects under Model II

   A combinatorial property of optimal designs will be considered for estimating interaction
effects under Model II (with interaction effects).
   Assume that (V1, B1) and (V2, B2) are a BIBD(v1, k1, λ1) and a BIBD(v2, k2, λ2), respectively.
The normal equation under Model II is given as follows:

      X1′Σ⁻¹X1 α̂ + X1′Σ⁻¹X2 γ̂ + X1′Σ⁻¹X12 (αγ)^ + X1′Σ⁻¹X3 β̂ = X1′Σ⁻¹y,
      X2′Σ⁻¹X1 α̂ + X2′Σ⁻¹X2 γ̂ + X2′Σ⁻¹X12 (αγ)^ + X2′Σ⁻¹X3 β̂ = X2′Σ⁻¹y,
      X12′Σ⁻¹X1 α̂ + X12′Σ⁻¹X2 γ̂ + X12′Σ⁻¹X12 (αγ)^ + X12′Σ⁻¹X3 β̂ = X12′Σ⁻¹y,
      X3′Σ⁻¹X1 α̂ + X3′Σ⁻¹X2 γ̂ + X3′Σ⁻¹X12 (αγ)^ + X3′Σ⁻¹X3 β̂ = X3′Σ⁻¹y.


In a manner similar to the argument under Model I, this equation can be represented by

      X1′Ω⁻¹X1 α̂ + ΔK γ̂ + X1′Ω⁻¹X12 (αγ)^ + Δk2M β̂ = X1′Ω⁻¹y,
      ΔK′ α̂ + X2′Ω⁻¹X2 γ̂ + X2′Ω⁻¹X12 (αγ)^ + Δk1N β̂ = X2′Ω⁻¹y,
      X12′Ω⁻¹X1 α̂ + X12′Ω⁻¹X2 γ̂ + X12′Ω⁻¹X12 (αγ)^ + X12′Ω⁻¹X3 β̂ = X12′Ω⁻¹y,       (16)
      Δk2M′ α̂ + Δk1N′ γ̂ + X3′Ω⁻¹X12 (αγ)^ + Δk1k2 β̂ = X3′Ω⁻¹y.

Then β̂ is obtained by

      β̂ = (1/(Δk1k2)){X3′Ω⁻¹y − Δk2M′α̂ − Δk1N′γ̂ − X3′Ω⁻¹X12(αγ)^}.                 (17)

Hence the first equation of (16) is represented by

      (X1′Ω⁻¹X1 − (Δk2/k1)MM′) α̂ + Δ(K − MN′) γ̂
         + (X1′Ω⁻¹X12 − (1/k1)MX3′Ω⁻¹X12)(αγ)^ = (X1′Ω⁻¹ − (1/k1)MX3′Ω⁻¹)y,

which, by (14), implies

      (X1′Ω⁻¹X1 − (Δk2/k1)MM′) α̂ + (X1′Ω⁻¹X12 − (1/k1)MX3′Ω⁻¹X12)(αγ)^
         = (X1′Ω⁻¹ − (1/k1)MX3′Ω⁻¹)y.                                                 (18)

Similarly, by (17), the second equation of (16) is shown to be

      (X2′Ω⁻¹X2 − (Δk1/k2)NN′) γ̂ + (X2′Ω⁻¹X12 − (1/k2)NX3′Ω⁻¹X12)(αγ)^
         = (X2′Ω⁻¹ − (1/k2)NX3′Ω⁻¹)y.                                                 (19)

Since (V1, B1) is a BIBD(v1, k1, λ1), the coefficient matrix of α̂ in (18) is reduced to

      X1′Ω⁻¹X1 − (Δk2/k1)MM′ = (k2v1λ1(ξ0 + k2ξ1)/k1){I − (1/v1)J} = (k2v1λ1/(k1η1)){I − (1/v1)J},

which, through (18), implies

      α̂ = (k1η1/(k2λ1v1)){(X1′Ω⁻¹ − (1/k1)MX3′Ω⁻¹)y
             − (X1′Ω⁻¹X12 − (1/k1)MX3′Ω⁻¹X12)(αγ)^}.                                  (20)


Similarly, (19) leads to

      γ̂ = (k2η2/(k1λ2v2)){(X2′Ω⁻¹ − (1/k2)NX3′Ω⁻¹)y
             − (X2′Ω⁻¹X12 − (1/k2)NX3′Ω⁻¹X12)(αγ)^}.                                  (21)

For a design matrix X = (X1 : X2 : X12 : X3) of a split-block design such that the (Vi, Bi) are
BIBDs, using (16) and (17) it follows from (20) and (21) that

      C(αγ)(X)(αγ)^ = F(X)y,

where the C-matrix C(αγ)(X) is given by

      C(αγ)(X) = X12′Ω⁻¹X12 − (1/(Δk1k2))X12′Ω⁻¹X3X3′Ω⁻¹X12
                 − (k1η1/(k2λ1v1))(X12′Ω⁻¹X1 − (1/k1)X12′Ω⁻¹X3M′)
                     × (X1′Ω⁻¹X12 − (1/k1)MX3′Ω⁻¹X12)
                 − (k2η2/(k1λ2v2))(X12′Ω⁻¹X2 − (1/k2)X12′Ω⁻¹X3N′)
                     × (X2′Ω⁻¹X12 − (1/k2)NX3′Ω⁻¹X12)                                 (22)

and

      F(X) = X12′Ω⁻¹ − (1/(Δk1k2))X12′Ω⁻¹X3X3′Ω⁻¹
             − (k1η1/(k2λ1v1))(X12′Ω⁻¹X1 − (1/k1)X12′Ω⁻¹X3M′)(X1′Ω⁻¹ − (1/k1)MX3′Ω⁻¹)
             − (k2η2/(k1λ2v2))(X12′Ω⁻¹X2 − (1/k2)X12′Ω⁻¹X3N′)(X2′Ω⁻¹ − (1/k2)NX3′Ω⁻¹).

   It is easily checked that

      C(αγ)(X)(1_{v1} ⊗ e_{v2}^(j)) = 0,     C(αγ)(X)(e_{v1}^(i) ⊗ 1_{v2}) = 0

for any i and j, and rank(C(αγ)(X)) ≤ (v1 − 1)(v2 − 1). Hence, by (2), if
rank(C(αγ)(X)) = (v1 − 1)(v2 − 1), then an estimator of the elementary contrasts of
the interaction effects (αγ) can be obtained.

3.1. Criteria of optimality

   The notion of universal optimality for the estimation of interaction effects will be
introduced. Let Si be the set of all permutation matrices of order vi for i = 1, 2. Further
let P = {P1 ⊗ P2 | P1 ∈ S1, P2 ∈ S2}.

Definition 3.1. Suppose that D̃(αγ) is the set of design matrices X for which all elementary
contrasts of the interaction effects (αγ) are estimable by GLSE. Let D(αγ) be any subset
of D̃(αγ) and M(αγ) be the set of C(αγ)(X) for X ∈ D(αγ). Moreover, let M̄(αγ) be
the convex hull of M(αγ) and Φ(αγ) be the set of functions φ on M̄(αγ) such that

  (i)  φ is convex,
 (ii)  φ(C) ≥ φ(gC) for C ∈ M̄(αγ) and for any g ≥ 1, and
(iii)  for any permutation matrix P ∈ P and C ∈ M̄(αγ), φ(P′CP) = φ(C) holds.

The design matrix X* is said to be universally optimum relative to D(αγ) if

      φ(C(αγ)(X*)) = min_{X∈D(αγ)} φ(C(αγ)(X))

holds for all functions φ ∈ Φ(αγ) satisfying (i)–(iii).

Theorem 3.1. The design matrix X* ∈ D(αγ) is universally optimum relative to D(αγ) if
the following two conditions hold:
   (i) C(αγ)(X*) is a v1v2 × v1v2 matrix which consists of v1² sub-matrices C(αγ)^{ij}(X*)
of order v2 (i, j = 1, ..., v1), i.e.,

      C(αγ)(X*) = (C(αγ)^{ij}(X*)),

      C(αγ)^{ij}(X*) = pI + qJ   if i = j,        C(αγ)^{ij}(X*) = rI + sJ   if i ≠ j,

where p = v2(v1 − 1)s, q = −(v1 − 1)s, r = −v2s and s is a positive constant.
   (ii) tr(C(αγ)(X*)) = max_{X∈D(αγ)} tr(C(αγ)(X)).

Proof. Suppose that X* satisfies conditions (i) and (ii), and that X** is universally
optimum. It is obvious that

      φ(C(αγ)(X**)) ≤ φ(C(αγ)(X*)).                                                   (23)

Let

      C̄(αγ)(X**) = (1/(v1!v2!)) Σ_{P1∈S1} Σ_{P2∈S2} (P1 ⊗ P2)′ C(αγ)(X**) (P1 ⊗ P2);

then C̄(αγ)(X**) satisfies (i). Hence, C̄(αγ)(X**) is a scalar multiple of C(αγ)(X*), that
is, C(αγ)(X*) = gC̄(αγ)(X**) for some g. By (ii) and the definition of C̄(αγ)(X**),

      tr(C(αγ)(X*)) ≥ tr(C(αγ)(X**)) = tr(C̄(αγ)(X**)) = tr((1/g)C(αγ)(X*))

holds, which implies that g ≥ 1. Thus, by Definition 3.1(ii),

      φ(C̄(αγ)(X**)) ≥ φ(C(αγ)(X*))                                                   (24)

holds. Furthermore, by (i) and (iii) of Definition 3.1, we have

      φ(C̄(αγ)(X**)) ≤ (1/(v1!v2!)) Σ_{P1∈S1} Σ_{P2∈S2} φ((P1 ⊗ P2)′ C(αγ)(X**) (P1 ⊗ P2))
                    = φ(C(αγ)(X**)).

Since X** is universally optimum, it must hold that

      φ(C(αγ)(X**)) = φ(C̄(αγ)(X**)).                                                 (25)

Therefore, by (23)–(25), it follows that

      φ(C(αγ)(X**)) = φ(C(αγ)(X*)),

and for estimating (αγ), the design matrix X* ∈ D(αγ) is universally optimum.

3.2. Combinatorial properties of optimal designs for estimating interaction effects

   Let (V1, V2, B) be an incomplete split-block design and further let

  (i)  λ(i, j) be the number of superblocks including the treatments Ai and Cj,
 (ii)  λ(i, i′, j) be the number of superblocks including the treatments Ai, Ai′ and Cj,
(iii)  λ(i, j, j′) be the number of superblocks including the treatments Ai, Cj and Cj′,
(iv)  λ(i, i′, j, j′) be the number of superblocks including the treatments Ai, Ai′, Cj and Cj′.

It is easy to see that

      λ(i, j, j′) = (1/(k1 − 1)) Σ_{i′≠i} λ(i, i′, j, j′),                             (26)

      λ(i, i′, j) = (1/(k2 − 1)) Σ_{j′≠j} λ(i, i′, j, j′),                             (27)

      λ(i, j) = (1/(k1 − 1)) Σ_{i′≠i} λ(i, i′, j) = (1/(k2 − 1)) Σ_{j′≠j} λ(i, j, j′).  (28)
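These counts and the relations (26)–(28) are easy to verify by brute force for a small design. The following sketch reuses the superblock list of Example 1.1 from the earlier code; lam is an ad hoc helper, not notation from the paper.

```python
# Brute-force check of the identities (26)-(28) for the design of Example 1.1.
from itertools import permutations

def lam(A_set, C_set):
    # number of superblocks containing every treatment in A_set and in C_set
    return sum(1 for rows, cols in blocks
               if A_set <= set(rows) and C_set <= set(cols))

for i, ip in permutations(range(1, v1 + 1), 2):
    for j, jp in permutations(range(1, v2 + 1), 2):
        assert (k1 - 1) * lam({i}, {j, jp}) == sum(
            lam({i, a}, {j, jp}) for a in range(1, v1 + 1) if a != i)        # (26)
        assert (k2 - 1) * lam({i, ip}, {j}) == sum(
            lam({i, ip}, {j, c}) for c in range(1, v2 + 1) if c != j)        # (27)
        assert (k1 - 1) * lam({i}, {j}) == sum(
            lam({i, a}, {j}) for a in range(1, v1 + 1) if a != i)            # (28)
        assert (k2 - 1) * lam({i}, {j}) == sum(
            lam({i}, {j, c}) for c in range(1, v2 + 1) if c != j)            # (28)
```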


  Now, to give the main theorem of this section, three lemmas on the C-matrix are
given.

Lemma 3.1. Let ω0 = ξ0 + ξ1 + ξ2 + ξ3, ω1 = ξ1 + ξ3 and ω2 = ξ2 + ξ3. Then the terms
in the expanded form of (22) are represented as follows:

  (i)  tr(X12′Ω⁻¹X12) = ω0k1k2b.

 (ii)  tr(X12′Ω⁻¹X3X3′Ω⁻¹X12) = Δ²k1k2b.

(iii)  tr(X12′Ω⁻¹X1X1′Ω⁻¹X12)
         = Σ_{i=1}^{v1} Σ_{j=1}^{v2} Σ_{i′=1}^{v1} [δ_{ii′}λ(i, j){ω0 + (k2 − 1)ω1}
             + (1 − δ_{ii′})λ(i, i′, j){ω2 + (k2 − 1)ξ3}]²,

       where δ_{ii′} is the Kronecker delta.

 (iv)  tr(X12′Ω⁻¹X2X2′Ω⁻¹X12)
         = Σ_{i=1}^{v1} Σ_{j=1}^{v2} Σ_{j′=1}^{v2} [δ_{jj′}λ(i, j){ω0 + (k1 − 1)ω2}
             + (1 − δ_{jj′})λ(i, j, j′){ω1 + (k1 − 1)ξ3}]².

  (v)  tr((Δk2)⁻¹X12′Ω⁻¹X3X3′Ω⁻¹X1X1′Ω⁻¹X12)
         = Δ Σ_{i=1}^{v1} Σ_{j=1}^{v2} Σ_{i′=1}^{v1} [δ_{ii′}λ(i, j)²{ω0 + (k2 − 1)ω1}
             + (1 − δ_{ii′})λ(i, i′, j)²{ω2 + (k2 − 1)ξ3}].

 (vi)  tr((Δk1)⁻¹X12′Ω⁻¹X3X3′Ω⁻¹X2X2′Ω⁻¹X12)
         = Δ Σ_{i=1}^{v1} Σ_{j=1}^{v2} Σ_{j′=1}^{v2} [δ_{jj′}λ(i, j)²{ω0 + (k1 − 1)ω2}
             + (1 − δ_{jj′})λ(i, j, j′)²{ω1 + (k1 − 1)ξ3}].

(vii)  tr((Δk2)⁻²X12′Ω⁻¹X3X3′Ω⁻¹X1X1′Ω⁻¹X3X3′Ω⁻¹X12)
         = Δ² Σ_{i=1}^{v1} Σ_{j=1}^{v2} Σ_{i′=1}^{v1} {δ_{ii′}λ(i, j) + (1 − δ_{ii′})λ(i, i′, j)}².

(viii) tr((Δk1)⁻²X12′Ω⁻¹X3X3′Ω⁻¹X2X2′Ω⁻¹X3X3′Ω⁻¹X12)
         = Δ² Σ_{i=1}^{v1} Σ_{j=1}^{v2} Σ_{j′=1}^{v2} {δ_{jj′}λ(i, j) + (1 − δ_{jj′})λ(i, j, j′)}².


   The reader can find the proof of Lemma 3.1(i)–(viii) in the appendix at the end of
the present paper.
   Hereafter, let D(αγ) ⊂ D̃(αγ) be the set of design matrices of incomplete split-block
designs (V1, V2, B) such that the (Vi, Bi) are BIBDs. Then, by virtue of this lemma, the
following can be obtained.


Lemma 3.2. For a design matrix X ∈ D(αγ),

      tr(C(αγ)(X)) = (k1k2ω0 − Δ)b
         − (η1/(k1k2v1λ1)){[Δ − k1{ω0 + (k2 − 1)ω1}]² Σ_{i=1}^{v1} Σ_{j=1}^{v2} λ(i, j)²
             + [Δ − k1{ω2 + (k2 − 1)ξ3}]² Σ_{i=1}^{v1} Σ_{j=1}^{v2} Σ_{i′≠i} λ(i, i′, j)²}
         − (η2/(k1k2v2λ2)){[Δ − k2{ω0 + (k1 − 1)ω2}]² Σ_{i=1}^{v1} Σ_{j=1}^{v2} λ(i, j)²
             + [Δ − k2{ω1 + (k1 − 1)ξ3}]² Σ_{i=1}^{v1} Σ_{j=1}^{v2} Σ_{j′≠j} λ(i, j, j′)²}

holds.

Proof. By Lemma 3.1(i)–(viii), the assertion is proved.

   Furthermore, the trace of the C-matrix can be bounded from above as follows:

Lemma 3.3. Let X* ∈ D(αγ) be a design matrix for an incomplete split-block design
such that λ(i, i′, j) and λ(i, j, j′) are constants not depending on the choice of i, i′, j
and j′. Then λ(i, j) is also a constant. Moreover,

      tr(C(αγ)(X)) ≤ tr(C(αγ)(X*))
                  = (k1k2ω0 − Δ)b − λ11²Q*/(k1k2λ1λ2(v1 − 1)(v2 − 1))

holds for any X ∈ D(αγ), where λ11 = λ(i, j) and

      Q* = λ2v2η1(v2 − 1)[(v1 − 1)[Δ − k1{ω0 + (k2 − 1)ω1}]²
              + (k1 − 1)²[Δ − k1{ω2 + (k2 − 1)ξ3}]²]
           + λ1v1η2(v1 − 1)[(v2 − 1)[Δ − k2{ω0 + (k1 − 1)ω2}]²
              + (k2 − 1)²[Δ − k2{ω1 + (k1 − 1)ξ3}]²].

Proof. Firstly, it is obvious that λ(i, j) is constant if either λ(i, i′, j) is constant for
any i′ or λ(i, j, j′) is constant for any j′. By (28) with the Schwarz inequality, it is
shown that

      Σ_{i′≠i} λ(i, i′, j)² ≥ (1/(v1 − 1))(Σ_{i′≠i} λ(i, i′, j))² = ((k1 − 1)²/(v1 − 1)) λ(i, j)²

and

      Σ_{j′≠j} λ(i, j, j′)² ≥ (1/(v2 − 1))(Σ_{j′≠j} λ(i, j, j′))² = ((k2 − 1)²/(v2 − 1)) λ(i, j)²,

where both equalities hold if and only if

      λ(i, i′, j) = ((k1 − 1)/(v1 − 1)) λ(i, j)   and   λ(i, j, j′) = ((k2 − 1)/(v2 − 1)) λ(i, j)

hold, not depending on the choice of i′ and j′. By (5), (6) and Lemma 3.2, it is
shown that

      tr(C(αγ)(X)) ≤ (ω0k1k2 − Δ)b − (Q*/(k1k2v1v2λ1λ2(v1 − 1)(v2 − 1))) Σ_{i=1}^{v1} Σ_{j=1}^{v2} λ(i, j)².

Furthermore,

      v1v2 Σ_{i=1}^{v1} Σ_{j=1}^{v2} λ(i, j)² ≥ (Σ_{i=1}^{v1} Σ_{j=1}^{v2} λ(i, j))² = (bk1k2)²

holds, where the equality holds if and only if

      λ(i, j) = constant (= λ11),

not depending on the choice of i and j. Thus the proof is completed.

  Now, the main theorem of this section will be given.

Theorem 3.2. Let X** ∈ D(αγ) be a design matrix for an incomplete split-block design
(V1, V2, B) such that λ(i, i′, j, j′) is a constant not depending on the choice of i, i′, j
and j′. Then λ(i, j), λ(i, i′, j), λ(i, j, j′) are also constants. Hence X** is universally
optimum relative to the set D(αγ) of design matrices for incomplete split-block designs
such that the (Vi, Bi) are BIBDs.

Proof. The C-matrix C(αγ)(X) of (22) is obtained as a weighted sum of the terms
in Lemma 3.1(i)–(viii) and the trace of C(αγ)(X) is obtained as in Lemma 3.2. If
λ(i, i′, j, j′) is constant, not depending on i, i′, j and j′, then for X**, λ(i, j), λ(i, i′, j)
and λ(i, j, j′) are constants, because of (26)–(28). Therefore, the trace
of C(αγ)(X**) attains the maximum of Lemma 3.3. Furthermore, by examining each
element of C(αγ)(X**), it can be shown that it has the form of Theorem 3.1(i), where

      s = −[(k1 − 1)(k1v1 − 3v1 − 2k1 + 4)/(v1η1λ1(v1 − 1)²)
              + (k2 − 1)(k2v2 − 3v2 − 2k2 + 4)/(v2η2λ2(v2 − 1)²)] λ11²/(k1k2)
          − [(ξ0 + k2ξ1 + k1ξ2)/(k1k2)] λ22

and λ22 = λ(i, i′, j, j′). Thus, the proof is completed by Theorem 3.1.
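The condition of Theorem 3.2 is easy to test by enumeration. A sketch follows, reusing lam, blocks, v1, v2 and permutations from the earlier code; for the design of Example 1.1 the condition fails, so Theorem 3.2 does not apply to that design.

```python
# Enumerate lambda(i, i', j, j') over all choices; a single value means the
# design satisfies the condition of Theorem 3.2.
vals = {lam({i, ip}, {j, jp})
        for i, ip in permutations(range(1, v1 + 1), 2)
        for j, jp in permutations(range(1, v2 + 1), 2)}
print(vals)    # {0, 1} for Example 1.1: not constant
```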

                          2
  When           =            I , Lemmas 3.2, 3.3 and Theorem 3.2 can be reduced to the following.

Corollary 3.1. Under the same assumption as Lemmas 3.2; 3.3 and Theorem 3.2;
when 1 = 2 = 3 = 0; the following hold:

(i)
\[
\mathrm{tr}(C_{\Sigma}(X)) = (k_1k_2-1)b
- \frac{1}{k_1k_2v_1\lambda_1}\Bigl\{(k_1-1)^2\sum_{i=1}^{v_1}\sum_{j=1}^{v_2}\lambda(i,j)^2 + \sum_{i=1}^{v_1}\sum_{j=1}^{v_2}\sum_{i'\ne i}\lambda(i,i',j)^2\Bigr\}
- \frac{1}{k_1k_2v_2\lambda_2}\Bigl\{(k_2-1)^2\sum_{i=1}^{v_1}\sum_{j=1}^{v_2}\lambda(i,j)^2 + \sum_{i=1}^{v_1}\sum_{j=1}^{v_2}\sum_{j'\ne j}\lambda(i,j,j')^2\Bigr\},
\]

(ii)
\[
\mathrm{tr}(C_{\Sigma}(X)) \le \mathrm{tr}(C_{\Sigma}(X^{*}))
= (k_1k_2-1)b - \frac{v_1v_2\{v_1k_2(k_1-1)+v_2k_1(k_2-1)\}}{k_1^2k_2^2\,b}\,\lambda_{11}^2,
\]

(iii) $X^{**}$ is universally optimum relative to the set of incomplete split-block designs such that $(V_i, \mathcal{B}_i)$ are BIBDs.
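For concreteness, the following Python sketch evaluates the trace expression in (i) and the bound in (ii) for one small design, the direct product of two copies of the BIBD(3,2,1), anticipating the construction of Section 4. The design, the variable names and the computation are our own illustration, not part of the paper; for this balanced design the two values coincide, which is the attainment case behind (ii).

```python
# Numerical check of Corollary 3.1(i) and (ii) on one illustrative design:
# the direct product of two copies of the BIBD(3,2,1) with blocks {1,2},{1,3},{2,3}.
from itertools import combinations

V1, V2, k1, k2 = (1, 2, 3), (1, 2, 3), 2, 2
B1 = [set(c) for c in combinations(V1, k1)]          # BIBD(3,2,1), b1 = 3
B2 = [set(c) for c in combinations(V2, k2)]          # BIBD(3,2,1), b2 = 3
blocks = [(A, C) for A in B1 for C in B2]            # superblocks, b = 9
b = len(blocks)

# concurrence counts lambda(i,j), lambda(i,i',j), lambda(i,j,j')
lam11 = {(i, j): sum(i in A and j in C for A, C in blocks)
         for i in V1 for j in V2}
lam21 = {(i, ii, j): sum({i, ii} <= A and j in C for A, C in blocks)
         for i in V1 for ii in V1 if ii != i for j in V2}
lam12 = {(i, j, jj): sum(i in A and {j, jj} <= C for A, C in blocks)
         for i in V1 for j in V2 for jj in V2 if jj != j}

lam1 = sum({V1[0], V1[1]} <= A for A, _ in blocks)   # BIBD index of (V1, B1): 3 here
lam2 = sum({V2[0], V2[1]} <= C for _, C in blocks)   # BIBD index of (V2, B2): 3 here
v1, v2 = len(V1), len(V2)

trace = ((k1 * k2 - 1) * b
         - ((k1 - 1) ** 2 * sum(x * x for x in lam11.values())
            + sum(x * x for x in lam21.values())) / (k1 * k2 * v1 * lam1)
         - ((k2 - 1) ** 2 * sum(x * x for x in lam11.values())
            + sum(x * x for x in lam12.values())) / (k1 * k2 * v2 * lam2))

l11 = b * k1 * k2 / (v1 * v2)
bound = ((k1 * k2 - 1) * b
         - v1 * v2 * (v1 * k2 * (k1 - 1) + v2 * k1 * (k2 - 1))
         * l11 ** 2 / (k1 ** 2 * k2 ** 2 * b))
print(trace, bound)   # both equal 15.0 for this balanced direct product
```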



4. A construction of a balanced incomplete split-block design and its combinatorial
property

We now consider constructions of universally optimum designs under Model II.
When $\lambda(i,i',j,j')$ is a constant, an incomplete split-block design $(V_1, V_2, \mathcal{B})$ is called a balanced incomplete split-block design, denoted by BISBD$(v_1,k_1,v_2,k_2,\lambda_{22})$. In this design, for any two distinct treatments $A_i, A_{i'} \in V_1$ and any two distinct treatments $C_j, C_{j'} \in V_2$, there are exactly $\lambda_{22}$ superblocks which contain these four treatments simultaneously. For $0 \le m \le 2$ and $0 \le n \le 2$, let $\lambda_{mn}$ be the number of superblocks containing any $m$-subset of $V_1$ and any $n$-subset of $V_2$. In the BISBD, $\lambda(i,i',j,j') = \lambda_{22}$, $\lambda(i,j,j') = \lambda_{12}$, $\lambda(i,i',j) = \lambda_{21}$ and $\lambda(i,j) = \lambda_{11}$ are all constants, and there are the following relationships among the parameters:


\[
\lambda_{12} = \frac{v_1-1}{k_1-1}\,\lambda_{22}, \qquad \lambda_{21} = \frac{v_2-1}{k_2-1}\,\lambda_{22}, \qquad \lambda_{11} = \frac{v_2-1}{k_2-1}\,\lambda_{12} = \frac{v_1-1}{k_1-1}\,\lambda_{21},
\]
\[
\lambda_{20} = \frac{v_2}{k_2}\,\lambda_{21}, \qquad \lambda_{02} = \frac{v_1}{k_1}\,\lambda_{12}, \qquad \lambda_{10} = \frac{v_1-1}{k_1-1}\,\lambda_{20}, \qquad \lambda_{01} = \frac{v_2-1}{k_2-1}\,\lambda_{02},
\]
\[
\lambda_{10} = \frac{v_2}{k_2}\,\lambda_{11}, \qquad \lambda_{01} = \frac{v_1}{k_1}\,\lambda_{11} \quad\text{and}\quad \lambda_{00} = \frac{v_1}{k_1}\,\lambda_{10} = \frac{v_2}{k_2}\,\lambda_{01} = b.
\]
It is obvious that $(V_1, \mathcal{B}_1)$ is a BIBD$(v_1,k_1,\lambda_{20})$, while $(V_2, \mathcal{B}_2)$ is a BIBD$(v_2,k_2,\lambda_{02})$.
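To make the relations concrete, here is a minimal Python sketch that derives all of the $\lambda_{mn}$ from $\lambda_{22}$ for one admissible parameter set. The values $v_1=v_2=7$, $k_1=k_2=3$, $\lambda_{22}=1$ are our own choice; they are realized by the direct-product construction of Theorem 4.1 below, applied to two Fano planes.

```python
# Illustration of the parameter relations: starting from lambda_22 and the
# treatment/block sizes, all remaining lambda_mn follow.  The parameter set
# (v1,k1,v2,k2,lambda22) = (7,3,7,3,1) is our own example.
from fractions import Fraction as F

v1, k1, v2, k2, l22 = 7, 3, 7, 3, 1

l12 = F(v1 - 1, k1 - 1) * l22          # = 3
l21 = F(v2 - 1, k2 - 1) * l22          # = 3
l11 = F(v2 - 1, k2 - 1) * l12          # = 9, equally (v1-1)/(k1-1) * l21
l20 = F(v2, k2) * l21                  # = 7
l02 = F(v1, k1) * l12                  # = 7
l10 = F(v1 - 1, k1 - 1) * l20          # = 21, equally (v2/k2) * l11
l01 = F(v2 - 1, k2 - 1) * l02          # = 21, equally (v1/k1) * l11
l00 = F(v1, k1) * l10                  # = 49 = b, the number of superblocks

assert l11 == F(v1 - 1, k1 - 1) * l21
assert l10 == F(v2, k2) * l11 and l01 == F(v1, k1) * l11
assert l00 == F(v2, k2) * l01
print(l12, l21, l11, l20, l02, l10, l01, l00)   # 3 3 9 7 7 21 21 49
```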
   Here a construction of BISBDs is provided.

Theorem 4.1. The existence of a BIBD$(v_1,k_1,\lambda_1)$ and a BIBD$(v_2,k_2,\lambda_2)$ implies the existence of a BISBD$(v_1,k_1,v_2,k_2,\lambda_1\lambda_2)$.

Proof. When $(V_i, \mathcal{B}_i)$ is a BIBD$(v_i,k_i,\lambda_i)$, $i=1,2$, let $\mathcal{B} = \{(B_1\,|\,B_2) : B_1 \in \mathcal{B}_1,\ B_2 \in \mathcal{B}_2\}$. Then $(V_1, V_2, \mathcal{B})$ is the required design.

  The BISBD constructed in the proof of Theorem 4.1 is here called a direct product
of two BIBDs.
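The construction is easy to carry out mechanically. The sketch below pairs every block of one BIBD with every block of another and then checks that the count $\lambda_{22}$ is constant. The two copies of the Fano plane BIBD(7,3,1) used as ingredients are our own illustrative choice, and a superblock $(B_1\,|\,B_2)$ is represented here simply as a pair of sets.

```python
# A minimal sketch of the direct-product construction in Theorem 4.1,
# using two copies of the Fano plane BIBD(7,3,1) as ingredients (our example).
from itertools import combinations

fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
B1, B2 = fano, fano                        # rows design and columns design

superblocks = [(A, C) for A in B1 for C in B2]   # (B1|B2): rows from A, columns from C
print(len(superblocks))                           # b = b1*b2 = 49

# lambda_22 of the resulting BISBD: count the superblocks containing a fixed
# pair from V1 together with a fixed pair from V2, over all such pairs.
counts = {sum(set(p) <= A and set(q) <= C for A, C in superblocks)
          for p in combinations(range(1, 8), 2)
          for q in combinations(range(1, 8), 2)}
print(counts)                                     # {1}: lambda_22 = lambda_1*lambda_2 = 1
```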

Lemma 4.1. If there exists a BISBD$(v_1,k_1,v_2,k_2,\lambda_{22})$, then there are a BIBD$(v_1,k_1,\lambda_{22})$ and a BIBD$(v_2,k_2,\lambda_{22})$. In this case $b_1b_2/b = \lambda_{22}$ holds, where $b$ is the number of superblocks of the BISBD$(v_1,k_1,v_2,k_2,\lambda_{22})$, and $b_1$ and $b_2$ are the numbers of blocks of the BIBD$(v_1,k_1,\lambda_{22})$ and the BIBD$(v_2,k_2,\lambda_{22})$, respectively.

Proof. For any $x, y \in V_1$, let $\mathcal{B}_{xy} = \{B \cap V_2 : x, y \in B,\ B \in \mathcal{B}\}$. Then $\mathcal{B}_{xy}$ forms a BIBD$(v_2,k_2,\lambda_{22})$. Note that $|\mathcal{B}_{xy}| = \lambda_{20} = b_2$. Similarly, for any $a, b \in V_2$, let $\mathcal{B}_{ab} = \{B \cap V_1 : a, b \in B,\ B \in \mathcal{B}\}$. Then $\mathcal{B}_{ab}$ forms a BIBD$(v_1,k_1,\lambda_{22})$ and hence $|\mathcal{B}_{ab}| = \lambda_{02} = b_1$ holds. Moreover, by the combinatorial properties of BISBDs, $\lambda_{22} = \lambda_{02}\lambda_{20}/\lambda_{00} = b_1b_2/b$ holds.
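The derived designs $\mathcal{B}_{xy}$ in the proof are also easy to exhibit explicitly. The sketch below extracts $\mathcal{B}_{xy}$ from the 49-superblock direct-product BISBD used above (again our own running example, with $\lambda_{22}=1$) and confirms that it is a BIBD(7,3,1) with $\lambda_{20}=b_2=7$ blocks.

```python
# Deriving B_xy from a BISBD, on the direct product of two Fano planes (our example).
from itertools import combinations

fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
superblocks = [(A, C) for A in fano for C in fano]   # the direct-product BISBD above

x, y = 1, 2                                          # any two treatments of V1
Bxy = [C for A, C in superblocks if {x, y} <= A]     # blocks B intersected with V2, x,y in B
print(len(Bxy))                                      # lambda_20 = b2 = 7

# every pair of V2 occurs in exactly lambda_22 = 1 of these blocks,
# so Bxy is a BIBD(7,3,1)
pair_counts = {q: sum(set(q) <= C for C in Bxy)
               for q in combinations(range(1, 8), 2)}
print(set(pair_counts.values()))                     # {1}
```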

Theorem 4.2. If there exists a BISBD$(v_1,k_1,v_2,k_2,1)$, then a BISBD having the same parameters as this design can be constructed as a direct product of a BIBD$(v_1,k_1,1)$ and a BIBD$(v_2,k_2,1)$.

Proof. By Lemma 4.1, the assumption implies that a BIBD$(v_1,k_1,1)$ and a BIBD$(v_2,k_2,1)$ exist, with $b_1b_2 = b$. Hence, by Theorem 4.1, their direct product is a BISBD$(v_1,k_1,v_2,k_2,1)$, and it has the same parameters as the given design.

Theorem 4.2 means that if $\lambda_{22}=1$, there cannot exist a BISBD with a smaller number of superblocks than that given by the direct product of two BIB designs. But when $\lambda_{22} > 1$, the authors do not know whether there is a set of parameters for which no BISBD of direct-product type exists but a BISBD of non-direct-product type does.


Let $\lambda_i$, $i=1,2$, be the smallest positive integer such that
\[
\lambda_i(v_i-1) = r_i(k_i-1) \quad\text{and}\quad v_ir_i = b_ik_i
\]
are satisfied for some positive integers $r_i$ and $b_i$, for the given $v_i$ and $k_i$. If there are a BIBD$(v_1,k_1,\lambda_1)$ and a BIBD$(v_2,k_2,\lambda_2)$, then Theorem 4.1 shows the existence of a BISBD$(v_1,k_1,v_2,k_2,\lambda_1\lambda_2)$. The present question is whether we can construct a BISBD$(v_1,k_1,v_2,k_2,\lambda_{22})$ such that $\lambda_{22} < \lambda_1\lambda_2$. For this question, we checked the possibility of $\lambda_{22} < \lambda_1\lambda_2$ by computer within the range of parameters $3 \le v_1, v_2 \le 1000$, $2 \le k_1, k_2 \le 50$ and $1 \le \lambda_1, \lambda_2 \le 20$. The search revealed that there are no such parameters within these ranges. Thus it may be hard to find a BISBD of non-direct-product type.
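The following Python sketch indicates the kind of search involved; it is not the authors' program. The paper does not state which admissibility criteria were screened, so the sketch only uses the obvious necessary condition that every $\lambda_{mn}$ implied by the relations of this section must be a positive integer, and it scans far smaller ranges than those reported above. Any parameter set passing this weaker screen with $\lambda_{22} < \lambda_1\lambda_2$ would merely be a candidate for further study.

```python
# A rough sketch of a parameter search of the kind described above (our own,
# weaker, admissibility screen: integrality of all lambda_mn only).
from fractions import Fraction as F

def min_bibd_index(v, k):
    """Smallest lam with lam*(v-1) divisible by k-1 and v*r divisible by k."""
    lam = 1
    while True:
        if lam * (v - 1) % (k - 1) == 0 and v * (lam * (v - 1) // (k - 1)) % k == 0:
            return lam
        lam += 1

def min_admissible_l22(v1, k1, v2, k2):
    """Smallest lam22 for which all lambda_mn of a putative BISBD are integers."""
    l22 = 0
    while True:
        l22 += 1
        l12, l21 = F(v1 - 1, k1 - 1) * l22, F(v2 - 1, k2 - 1) * l22
        l11 = F(v2 - 1, k2 - 1) * l12
        l20, l02 = F(v2, k2) * l21, F(v1, k1) * l12
        l10, l01 = F(v1 - 1, k1 - 1) * l20, F(v2 - 1, k2 - 1) * l02
        l00 = F(v1, k1) * l10
        if all(x.denominator == 1 for x in (l12, l21, l11, l20, l02, l10, l01, l00)):
            return l22

# scan a small range and print any parameter set that passes this weak screen
for v1 in range(3, 20):
    for k1 in range(2, min(v1, 8)):
        for v2 in range(3, 20):
            for k2 in range(2, min(v2, 8)):
                l1, l2 = min_bibd_index(v1, k1), min_bibd_index(v2, k2)
                if min_admissible_l22(v1, k1, v2, k2) < l1 * l2:
                    print("candidate:", v1, k1, v2, k2)
```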


Acknowledgements

  The authors are thankful to the referees for their constructive comments on this
paper.


Appendix

  We give the proofs of Lemma 3.1(i)–(viii) stated in Section 3.2.

Proof of Lemma 3.1. (i) Let $(X_{12}'\Sigma^{-1}X_{12})[i,j;i',j']$ be the $((i,j),(i',j'))$th entry of the matrix $X_{12}'\Sigma^{-1}X_{12}$. For $x_{12}(t,r,c,i,j)$ defined by (3), let
\[
x_{12}(t,*,c,i,j) = \begin{cases} 1 & \text{if } x_{12}(t,r,c,i,j)=1 \text{ for some } r,\\ 0 & \text{otherwise,}\end{cases}
\qquad
x_{12}(t,r,*,i,j) = \begin{cases} 1 & \text{if } x_{12}(t,r,c,i,j)=1 \text{ for some } c,\\ 0 & \text{otherwise,}\end{cases}
\]
\[
x_{12}(t,*,*,i,j) = \begin{cases} 1 & \text{if } x_{12}(t,r,c,i,j)=1 \text{ for some } r \text{ and } c,\\ 0 & \text{otherwise.}\end{cases}
\]
Then the $((i,j),(t,r,c))$-elements of $X_{12}'R$, $X_{12}'C$ and $X_{12}'RC$ are written as $x_{12}(t,r,*,i,j)$, $x_{12}(t,*,c,i,j)$ and $x_{12}(t,*,*,i,j)$, respectively. It follows from (3) that
\begin{align*}
(X_{12}'\Sigma^{-1}X_{12})[i,j;i',j']
&= (\gamma_0 X_{12}'X_{12} + \gamma_1 X_{12}'RX_{12} + \gamma_2 X_{12}'CX_{12} + \gamma_3 X_{12}'RCX_{12})[i,j;i',j'] \\
&= \sum_{t=1}^{b}\sum_{r=1}^{k_1}\sum_{c=1}^{k_2}\{\gamma_0 x_{12}(t,r,c,i,j) + \gamma_1 x_{12}(t,r,*,i,j) \\
&\qquad\quad + \gamma_2 x_{12}(t,*,c,i,j) + \gamma_3 x_{12}(t,*,*,i,j)\}\, x_{12}(t,r,c,i',j') \\
&= \delta_{ii'}\delta_{jj'}\gamma_0\,\lambda(i,j) + \delta_{ii'}(1-\delta_{jj'})\gamma_1\,\lambda(i,j,j') \\
&\qquad + (1-\delta_{ii'})\delta_{jj'}\gamma_2\,\lambda(i,i',j) + (1-\delta_{ii'})(1-\delta_{jj'})\gamma_3\,\lambda(i,i',j,j'),
\end{align*}
where $\lambda(i,j)$, $\lambda(i,i',j)$, $\lambda(i,j,j')$ and $\lambda(i,i',j,j')$ are defined in Section 3.2. Hence it is seen that
\[
\mathrm{tr}(X_{12}'\Sigma^{-1}X_{12}) = \gamma_0\sum_{i=1}^{v_1}\sum_{j=1}^{v_2}\lambda(i,j) = \gamma_0 k_1k_2 b,
\]
which completes the proof.
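The last equality uses only the counting identity $\sum_i\sum_j \lambda(i,j) = k_1k_2 b$: each superblock is a $k_1 \times k_2$ array of units and therefore carries exactly $k_1k_2$ pairs $(A_i, C_j)$. A short numerical check on the direct product of two Fano planes (our running example from Section 4, with $k_1=k_2=3$) is:

```python
# Each superblock is a k1 x k2 grid, so it carries k1*k2 (A_i, C_j) pairs;
# summing lambda(i,j) over all i, j therefore gives k1*k2*b.  Checked on our
# illustrative direct product of two Fano planes (k1 = k2 = 3, b = 49).
fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
superblocks = [(A, C) for A in fano for C in fano]
total = sum(sum(i in A and j in C for i in range(1, 8) for j in range(1, 8))
            for A, C in superblocks)
print(total, 3 * 3 * len(superblocks))   # both 441
```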
(ii) Let $(X_{12}'\Sigma^{-1}X_3)[i,j;t]$ be the $((i,j),t)$th entry of the matrix $X_{12}'\Sigma^{-1}X_3$. By (3),
\begin{align*}
(X_{12}'\Sigma^{-1}X_3)[i,j;t]
&= \sum_{t'=1}^{b}\sum_{r=1}^{k_1}\sum_{c=1}^{k_2}\{\gamma_0 x_{12}(t',r,c,i,j) + \gamma_1 x_{12}(t',r,*,i,j) \\
&\qquad\quad + \gamma_2 x_{12}(t',*,c,i,j) + \gamma_3 x_{12}(t',*,*,i,j)\}\, x_3(t',r,c,t) \\
&= \alpha\,\delta_{12}(i,j,t)
\end{align*}
holds, where
\[
\delta_{12}(i,j,t) = \begin{cases} 1 & \text{if } A_i \text{ and } C_j \text{ are applied to a block } B_t,\\ 0 & \text{otherwise.}\end{cases}
\]
Furthermore, it follows that
\begin{align*}
(X_{12}'\Sigma^{-1}X_3X_3'\Sigma^{-1}X_{12})[i,j;i',j']
&= \alpha^2\sum_{t=1}^{b}\delta_{12}(i,j,t)\,\delta_{12}(i',j',t) \\
&= \alpha^2\{\delta_{ii'}\delta_{jj'}\,\lambda(i,j) + \delta_{ii'}(1-\delta_{jj'})\,\lambda(i,j,j') \\
&\qquad + (1-\delta_{ii'})\delta_{jj'}\,\lambda(i,i',j) + (1-\delta_{ii'})(1-\delta_{jj'})\,\lambda(i,i',j,j')\},
\end{align*}
which completes the proof.
(iii) It follows from (3) that
\begin{align*}
(X_{12}'\Sigma^{-1}X_1)[i,j;i']
&= \sum_{t=1}^{b}\sum_{r=1}^{k_1}\sum_{c=1}^{k_2}\{\gamma_0 x_{12}(t,r,c,i,j) + \gamma_1 x_{12}(t,r,*,i,j) \\
&\qquad\quad + \gamma_2 x_{12}(t,*,c,i,j) + \gamma_3 x_{12}(t,*,*,i,j)\}\, x_1(t,r,c,i') \\
&= \delta_{ii'}\,\lambda(i,j)\{\gamma_0 + (k_2-1)\gamma_1\} + (1-\delta_{ii'})\,\lambda(i,i',j)\{\gamma_2 + (k_2-1)\gamma_3\}
\end{align*}
and
\begin{align*}
(X_{12}'\Sigma^{-1}X_1X_1'\Sigma^{-1}X_{12})[i,j;i',j']
&= \sum_{i''=1}^{v_1}\bigl[\delta_{ii''}\,\lambda(i,j)\{\gamma_0+(k_2-1)\gamma_1\} + (1-\delta_{ii''})\,\lambda(i,i'',j)\{\gamma_2+(k_2-1)\gamma_3\}\bigr] \\
&\qquad\times\bigl[\delta_{i'i''}\,\lambda(i',j')\{\gamma_0+(k_2-1)\gamma_1\} + (1-\delta_{i'i''})\,\lambda(i',i'',j')\{\gamma_2+(k_2-1)\gamma_3\}\bigr].
\end{align*}
Hence, the assertion is proved.


(iv) By (3),
\begin{align*}
(X_{12}'\Sigma^{-1}X_2)[i,j;j']
&= \sum_{t=1}^{b}\sum_{r=1}^{k_1}\sum_{c=1}^{k_2}\{\gamma_0 x_{12}(t,r,c,i,j) + \gamma_1 x_{12}(t,r,*,i,j) \\
&\qquad\quad + \gamma_2 x_{12}(t,*,c,i,j) + \gamma_3 x_{12}(t,*,*,i,j)\}\, x_2(t,r,c,j') \\
&= \delta_{jj'}\,\lambda(i,j)\{\gamma_0 + (k_1-1)\gamma_2\} + (1-\delta_{jj'})\,\lambda(i,j,j')\{\gamma_1 + (k_1-1)\gamma_3\}
\end{align*}
and
\begin{align*}
(X_{12}'\Sigma^{-1}X_2X_2'\Sigma^{-1}X_{12})[i,j;i',j']
&= \sum_{j''=1}^{v_2}\bigl[\delta_{jj''}\,\lambda(i,j)\{\gamma_0+(k_1-1)\gamma_2\} + (1-\delta_{jj''})\,\lambda(i,j,j'')\{\gamma_1+(k_1-1)\gamma_3\}\bigr] \\
&\qquad\times\bigl[\delta_{j'j''}\,\lambda(i',j')\{\gamma_0+(k_1-1)\gamma_2\} + (1-\delta_{j'j''})\,\lambda(i',j',j'')\{\gamma_1+(k_1-1)\gamma_3\}\bigr]
\end{align*}
hold. Hence, the proof is completed.
(v) We define
\[
x_1(t,*,*,i) = \begin{cases} 1 & \text{if } x_1(t,r,c,i)=1 \text{ for some } r \text{ and } c,\\ 0 & \text{otherwise.}\end{cases}
\]
It is easy to check that $(X_3'\Sigma^{-1}X_1)[t;i] = \alpha k_2\,x_1(t,*,*,i)$. Then
\begin{align}
(X_{12}'\Sigma^{-1}X_3X_3'\Sigma^{-1}X_1)[i,j;i']
&= \sum_{t'=1}^{b}\alpha\,\delta_{12}(i,j,t')\cdot\alpha k_2\,x_1(t',*,*,i') \nonumber\\
&= \alpha^2 k_2\{\delta_{ii'}\,\lambda(i,j) + (1-\delta_{ii'})\,\lambda(i,i',j)\}. \tag{A.1}
\end{align}
Hence it can be shown that
\begin{align*}
(X_{12}'&\Sigma^{-1}X_3X_3'\Sigma^{-1}X_1X_1'\Sigma^{-1}X_{12})[i,j;i',j'] \\
&= \alpha^2 k_2\sum_{i''=1}^{v_1}\{\delta_{ii''}\,\lambda(i,j) + (1-\delta_{ii''})\,\lambda(i,i'',j)\} \\
&\qquad\times\bigl[\delta_{i'i''}\,\lambda(i',j')\{\gamma_0+(k_2-1)\gamma_1\} + (1-\delta_{i'i''})\,\lambda(i',i'',j')\{\gamma_2+(k_2-1)\gamma_3\}\bigr],
\end{align*}
which proves the required result.
(vi) We define
\[
x_2(t,*,*,j) = \begin{cases} 1 & \text{if } x_2(t,r,c,j)=1 \text{ for some } r \text{ and } c,\\ 0 & \text{otherwise.}\end{cases}
\]
It is easy to check that
\begin{align}
(X_{12}'\Sigma^{-1}X_3X_3'\Sigma^{-1}X_2)[i,j;j']
&= \sum_{t'=1}^{b}\alpha\,\delta_{12}(i,j,t')\cdot\alpha k_1\,x_2(t',*,*,j') \nonumber\\
&= \alpha^2 k_1\{\delta_{jj'}\,\lambda(i,j) + (1-\delta_{jj'})\,\lambda(i,j,j')\}. \tag{A.2}
\end{align}
Then it follows that
\begin{align*}
(X_{12}'&\Sigma^{-1}X_3X_3'\Sigma^{-1}X_2X_2'\Sigma^{-1}X_{12})[i,j;i',j'] \\
&= \alpha^2 k_1\sum_{j''=1}^{v_2}\{\delta_{jj''}\,\lambda(i,j) + (1-\delta_{jj''})\,\lambda(i,j,j'')\} \\
&\qquad\times\bigl[\delta_{j'j''}\,\lambda(i',j')\{\gamma_0+(k_1-1)\gamma_2\} + (1-\delta_{j'j''})\,\lambda(i',j',j'')\{\gamma_1+(k_1-1)\gamma_3\}\bigr].
\end{align*}
Hence the result is proved.
(vii) By (A.1),
\begin{align*}
(X_{12}'&\Sigma^{-1}X_3X_3'\Sigma^{-1}X_1X_1'\Sigma^{-1}X_3X_3'\Sigma^{-1}X_{12})[i,j;i',j'] \\
&= \alpha^4 k_2^2\sum_{i''=1}^{v_1}\{\delta_{ii''}\,\lambda(i,j) + (1-\delta_{ii''})\,\lambda(i,i'',j)\}\{\delta_{i'i''}\,\lambda(i',j') + (1-\delta_{i'i''})\,\lambda(i',i'',j')\},
\end{align*}
which completes the proof.
(viii) By (A.2),
\begin{align*}
(X_{12}'&\Sigma^{-1}X_3X_3'\Sigma^{-1}X_2X_2'\Sigma^{-1}X_3X_3'\Sigma^{-1}X_{12})[i,j;i',j'] \\
&= \alpha^4 k_1^2\sum_{j''=1}^{v_2}\{\delta_{jj''}\,\lambda(i,j) + (1-\delta_{jj''})\,\lambda(i,j,j'')\}\{\delta_{j'j''}\,\lambda(i',j') + (1-\delta_{j'j''})\,\lambda(i',j',j'')\}.
\end{align*}
Hence the required result is readily proved.



								