Cluster Analysis for Gene Expression Data

Ka Yee Yeung
http://staff.washington.edu/kayee/research.html
Center for Expression Arrays
Department of Microbiology
kayee@u.washington.edu
A gene expression data set

[Figure: data matrix with n genes (rows) × p experiments (columns); entry Xij is the expression level of gene i in experiment j]

• Snapshot of activities in the cell
• Each chip represents an experiment:
  – time course
  – tissue samples (normal/cancer)

10/18/2002, Ka Yee Yeung, CEA
What is clustering?
• Group similar objects together
• Objects in the same cluster (group) are more similar to each other than objects in different clusters
• Exploratory data analysis tool: finds patterns in large data sets
• Unsupervised approach: does not make use of prior knowledge about the data
Applications of clustering gene expression data
• Cluster the genes → find functionally related genes
• Cluster the experiments → discover new subtypes of tissue samples
• Cluster both genes and experiments → find sub-patterns
Examples of clustering algorithms
• Hierarchical clustering algorithms, e.g. [Eisen et al. 1998]
• K-means, e.g. [Tavazoie et al. 1999]
• Self-organizing maps (SOM), e.g. [Tamayo et al. 1999]
• CAST [Ben-Dor, Yakhini 1999]
• Model-based clustering algorithms, e.g. [Yeung et al. 2001]
Overview
• Similarity/distance measures
• Hierarchical clustering algorithms
  – Made popular by Stanford, e.g. [Eisen et al. 1998]
• K-means
  – Made popular by many groups, e.g. [Tavazoie et al. 1999]
• Model-based clustering algorithms [Yeung et al. 2001]
How to define similarity?

[Figure: an n × p raw data matrix (genes X and Y measured across p experiments) is converted to an n × n similarity matrix over the genes]

• Similarity measures:
  – A measure of pairwise similarity or dissimilarity
  – Examples:
    • Correlation coefficient
    • Euclidean distance
Similarity measures
(for those of you who enjoy equations…)

• Euclidean distance:
  d(X, Y) = sqrt( Σj (Xj − Yj)² )
• Correlation coefficient:
  r(X, Y) = Σj (Xj − X̄)(Yj − Ȳ) / sqrt( Σj (Xj − X̄)² · Σj (Yj − Ȳ)² )
  where the sums run over the p experiments and X̄, Ȳ are the mean expression levels of X and Y
Example

[Figure: four expression profiles X, Y, Z, W plotted across experiments]

Correlation (X, Y) = 1    Distance (X, Y) = 4
Correlation (X, Z) = −1   Distance (X, Z) = 2.83
Correlation (X, W) = 1    Distance (X, W) = 1.41
Lessons from the example
• Correlation captures direction only
• Euclidean distance captures magnitude and direction
• Array data is noisy → many experiments are needed to robustly estimate pairwise similarity
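The lesson above can be checked numerically. Below is a minimal Python/NumPy sketch (the profiles x, y, z, w are made up for illustration; they are not the vectors from the slide's figure): correlation is unchanged by shifting a profile up or down, so it sees only direction (shape), while Euclidean distance also reflects magnitude.

```python
import numpy as np

def euclidean(x, y):
    """Euclidean distance between two expression profiles."""
    return float(np.sqrt(((x - y) ** 2).sum()))

def correlation(x, y):
    """Pearson correlation coefficient between two profiles."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

x = np.array([1.0, 2.0, 3.0, 4.0])   # a rising profile
y = x + 4.0                          # same shape, shifted far up
z = -x                               # mirrored profile (opposite direction)
w = x + 0.5                          # same shape, barely shifted

print(correlation(x, y), correlation(x, z), correlation(x, w))  # 1.0 -1.0 1.0
print(euclidean(x, y), euclidean(x, w))                         # 8.0 1.0
```

Correlation calls x, y and w identical while Euclidean distance separates them, matching the slide's example.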
         Clustering algorithms

• From pairwise similarities to groups
• Inputs:
    – Raw data matrix or similarity matrix
    – Number of clusters or some other
      parameters




Hierarchical Clustering [Hartigan 1975]

[Figure: dendrogram]

• Agglomerative (bottom-up)
• Algorithm:
  – Initialize: each item is its own cluster
  – Iterate:
    • select the two most similar clusters
    • merge them
  – Halt: when the required number of clusters is reached
Hierarchical: Single Link
• cluster similarity = similarity of the two most similar members
  + Fast
  − Potentially produces long, skinny clusters
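The agglomerative procedure above can be sketched from scratch with the single-link criterion (a minimal, unoptimized NumPy sketch; the function name and the toy points are mine, not from any package):

```python
import numpy as np

def single_link_clusters(points, k):
    """Agglomerative clustering with single-link similarity.
    points: (n, p) array of profiles; k: desired number of clusters.
    Returns a list of k clusters, each a list of row indices."""
    # Initialize: each item is its own cluster
    clusters = [[i] for i in range(len(points))]
    # Precompute all pairwise Euclidean distances
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    # Iterate: merge the two most similar clusters until k remain
    while len(clusters) > k:
        best = (0, 1, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single link: distance between the two closest members
                link = min(d[i, j] for i in clusters[a] for j in clusters[b])
                if link < best[2]:
                    best = (a, b, link)
        a, b, _ = best
        clusters[a].extend(clusters.pop(b))  # merge the chosen pair
    return clusters

# Four made-up points forming two obvious groups
pts = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
print(single_link_clusters(pts, 2))  # [[0, 1], [2, 3]]
```

Swapping `min` for `max` over the member pairs would give complete link, and the mean would give average link.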
Example: single link

[Figure: three animation steps of single-link merging, with the dendrogram height axis running from 1 to 5]
Hierarchical: Complete Link
• cluster similarity = similarity of the two least similar members
  + Produces tight clusters
  − Slow
Example: complete link

[Figure: three animation steps of complete-link merging, with the dendrogram height axis running from 1 to 5]
Hierarchical: Average Link
• cluster similarity = average similarity of all inter-cluster pairs
  + Produces tight clusters
  − Slow
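If SciPy is available (an assumption; it is not mentioned on the slides), all three linkage criteria can be run on the same distance matrix. In this sketch the data are synthetic, two well-separated groups of made-up "genes", so all three criteria recover the same two clusters; on noisy expression data they can disagree.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# 20 synthetic "genes" over 5 "experiments": two well-separated groups
data = np.vstack([rng.normal(0.0, 0.5, (10, 5)),
                  rng.normal(5.0, 0.5, (10, 5))])
dists = pdist(data)  # condensed pairwise Euclidean distance matrix

labels = {}
for method in ("single", "complete", "average"):
    tree = linkage(dists, method=method)                        # merge tree
    labels[method] = fcluster(tree, t=2, criterion="maxclust")  # cut into 2
```

`linkage` returns the full merge tree (the dendrogram), and `fcluster` cuts it at the requested number of clusters, mirroring the halt condition on the earlier slide.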
Software: TreeView [Eisen et al. 1998]
• Fig. 1 in Eisen's PNAS 99 paper
• Time course of serum stimulation of primary human fibroblasts
• cDNA arrays with approx. 8600 spots
• Similar to average-link
• Free download at: http://rana.lbl.gov/EisenSoftware.htm
Overview
• Similarity/distance measures
• Hierarchical clustering algorithms
  – Made popular by Stanford, e.g. [Eisen et al. 1998]
• K-means
  – Made popular by many groups, e.g. [Tavazoie et al. 1999]
• Model-based clustering algorithms [Yeung et al. 2001]
Partitional: K-Means [MacQueen 1965]

[Figure: three panels illustrating successive k-means iterations]
Details of k-means
• Iterate until convergence:
  – Assign each data point to the closest centroid
  – Compute new centroids as the mean of the assigned points

Objective function: minimize the within-cluster sum of squares
  Σk Σ(xi in cluster k) ‖ xi − μk ‖²
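The loop above is Lloyd's algorithm. A minimal NumPy sketch (the function name and the random initialization scheme are my own choices, not prescribed by the slide):

```python
import numpy as np

def kmeans(data, k, n_iter=100, seed=0):
    """Lloyd's k-means: alternate assignment and centroid updates,
    minimizing the within-cluster sum of squared distances."""
    rng = np.random.default_rng(seed)
    # Initialize centroids at k distinct random data points
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to the closest centroid
        sq = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = sq.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points
        new = np.array([data[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):  # converged (a local optimum)
            break
        centroids = new
    sse = ((data - centroids[labels]) ** 2).sum()  # the objective value
    return labels, centroids, sse

# Two tight made-up groups; any initialization recovers them here
data = np.array([[0.0], [0.1], [10.0], [10.1]])
labels, cents, sse = kmeans(data, 2)
```

Because each step can only decrease the objective, the loop reaches a local (not necessarily global) optimum, which is why k-means is typically restarted from several initializations.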
Properties of k-means
• Fast
• Proven to converge to a local optimum
• In practice, converges quickly
• Tends to produce spherical, equal-sized clusters
• Related to the model-based approach
• Gavin Sherlock's Xcluster: http://genome-www.stanford.edu/~sherlock/cluster.html
What we have seen so far…
• Definition of clustering
• Pairwise similarity:
  – Correlation
  – Euclidean distance
• Clustering algorithms:
  – Hierarchical agglomerative
  – K-means
• Different clustering algorithms → different clusters
• Clustering algorithms always spit out clusters
Which clustering algorithm should I use?
• Good question
• No definite answer: ongoing research
• Our preference: the model-based approach
Model-based clustering (MBC)
• Gaussian mixture model:
  – Assume each cluster is generated by a multivariate normal distribution
  – Each cluster k has parameters:
    • Mean vector μk: the location of cluster k
    • Covariance matrix Σk: the volume, shape and orientation of cluster k
• Data transformations & the normality assumption
More on the covariance matrix Σk (volume, orientation, shape)

[Figure: cluster shapes under different covariance parameterizations: equal volume, spherical (EI); unequal volume, spherical (VI); equal volume, orientation and shape (EEE); the diagonal model; unconstrained (VVV)]
Key advantage of the model-based approach: choosing the model and the number of clusters
• Bayesian Information Criterion (BIC) [Schwarz 1978]
  – Approximates p(data | model)
• A large BIC score indicates strong evidence for the corresponding model
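The model-and-k selection can be sketched with scikit-learn's GaussianMixture (an assumption that scikit-learn is available; mclust, cited later, is the package the authors actually use). Note that scikit-learn defines BIC so that lower is better, the opposite sign convention from this slide, and its covariance types only roughly correspond to the EI/VI/EEE/VVV parameterizations.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic data: three well-separated Gaussian clusters in 4 dimensions
data = np.vstack([rng.normal(m, 0.3, (50, 4)) for m in (0.0, 3.0, 6.0)])

best = None
for cov in ("spherical", "diag", "full"):   # roughly EI/VI, diagonal, VVV
    for k in range(1, 7):
        gm = GaussianMixture(n_components=k, covariance_type=cov,
                             random_state=0).fit(data)
        score = gm.bic(data)                # lower is better in scikit-learn
        if best is None or score < best[0]:
            best = (score, cov, k)

# best[1], best[2] hold the selected covariance model and number of clusters
```

Sweeping both the covariance structure and the number of components and keeping the best BIC score is exactly the selection step this slide describes.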
Gene expression data sets
• Ovary data [Michel Schummer, Institute of Systems Biology]
  – Subset of the data: 235 clones (portions of genes) × 24 experiments (cancer/normal tissue samples)
  – The 235 clones correspond to 4 genes (external criterion)
BIC analysis: square root ovary data

[Figure: BIC scores for each model as a function of the number of clusters]

• EEE and diagonal models → first local maximum at 4 clusters
• Global maximum → VI at 8 clusters
How do we know MBC is doing well?
Answer: compare to external information

[Figure: adjusted Rand index for each model as a function of the number of clusters]

• Adjusted Rand index: maximum at EEE with 4 clusters (higher than CAST)
Take home messages
• MBC shows superior performance on:
  – Quality of clusters
  – Number of clusters and model chosen (via BIC)
• Clusters with high BIC scores tend to agree well with the external information
• MBC tends to produce better clusters than a leading heuristic-based clustering algorithm (CAST)
• S-PLUS or R versions: http://www.stat.washington.edu/fraley/mclust/

				