        Bayesian Inference for Signal Detection
            Models of Recognition Memory



       Simon Dennis                        Michael Lee
    School of Psychology        Department of Cognitive Sciences
    University of Adelaide      University of California, Irvine
simon.dennis@adelaide.edu.au              mdlee@uci.edu
                     The Task

• Study a list of words
• Tested on another list of words, some of which
  appeared on the first list
• Subjects have to say whether each word is an old or
  new word
• Data take the form of counts
   – Hits
   – Misses
   – False Alarms
   – Correct Rejections
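For concreteness, the four counts and the two rates derived from them can be sketched as follows (the numbers are hypothetical):

```python
# Hypothetical counts for one subject in one condition.
hits, misses = 18, 2                       # responses to old (studied) words
false_alarms, correct_rejections = 5, 15   # responses to new words

hit_rate = hits / (hits + misses)
false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
print(hit_rate, false_alarm_rate)  # 0.9 0.25
```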
Signal Detection Model
  Unofficial SDT Analysis Procedure
• Calculate a hit rate and false alarm rate per
  subject per condition
• Add a bit to false alarm rates of 0 and
  subtract a bit from hit rates of 1 to avoid
  infinite d'
• Throw out any subjects that are inconsistent
  with your hypothesis
• Run ANOVA
• While p > 0.05:
      collect more data
      Run ANOVA
• Publish
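The "add a bit / subtract a bit" step above can be made concrete. This is a sketch of one common ad hoc nudge (replacing a rate of 0 with 1/(2n) and a rate of 1 with 1 - 1/(2n)); the slides do not commit to a particular correction, so treat the constants as illustrative:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit: inverse of the standard normal CDF

def unofficial_dprime(hits, misses, false_alarms, correct_rejections):
    """d' with the ad hoc edge correction the slide pokes fun at:
    nudge rates of 0 and 1 inward so z() stays finite."""
    n_old = hits + misses
    n_new = false_alarms + correct_rejections
    hit_rate = hits / n_old
    fa_rate = false_alarms / n_new
    # Illustrative nudge: replace 0 with 1/(2n) and 1 with 1 - 1/(2n).
    hit_rate = min(max(hit_rate, 1 / (2 * n_old)), 1 - 1 / (2 * n_old))
    fa_rate = min(max(fa_rate, 1 / (2 * n_new)), 1 - 1 / (2 * n_new))
    return z(hit_rate) - z(fa_rate)

print(unofficial_dprime(20, 0, 0, 20))  # finite despite perfect data
```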
              What we would like
• Sampling variability - variability estimates should
  depend on how many samples you have per cell
• Edge corrections – should follow from analysis
  assumptions
• Excluding subjects – should be done in a principled
  way
• Evidence in favour of null hypothesis
• Used iteratively – without violating likelihood
  principle
• Small sample sizes – not only applicable in the
  limit
      A Proposal: The Individual Level
• Assume hits and false alarms are drawn from a
  binomial distribution (which allows us to generate a
  posterior distribution for the underlying rates that
  generated the data)
• Assume that both hits and false alarms are possible
  (given this the least informative prior about them is
  uniform)
• Assume d' and C are independent (which is true iff
  the hit rate and false alarm rate are independent)
• With these assumptions d' and C will be distributed as
  Gaussians
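The individual-level assumptions can be sketched as a short Monte Carlo routine: uniform priors on binomial rates give Beta posteriors, which are pushed through the standard SDT transformations d' = z(HR) - z(FAR) and C = -(z(HR) + z(FAR))/2 (these formulas are the textbook parameterization; the slides do not spell them out):

```python
import random
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def posterior_dprime_C(hits, n_old, false_alarms, n_new,
                       n_samples=5000, seed=0):
    """Monte Carlo posteriors for d' and C under the slide's assumptions:
    binomial counts with uniform priors give Beta posteriors on the rates."""
    rng = random.Random(seed)
    dprimes, criteria = [], []
    for _ in range(n_samples):
        hr = rng.betavariate(hits + 1, n_old - hits + 1)
        far = rng.betavariate(false_alarms + 1, n_new - false_alarms + 1)
        dprimes.append(z(hr) - z(far))
        criteria.append(-(z(hr) + z(far)) / 2)
    return dprimes, criteria

dp, C = posterior_dprime_C(18, 20, 5, 20)
print(sum(dp) / len(dp))  # posterior mean of d'
```

Note that a rate of 0 or 1 poses no problem here: the Beta posterior always places its mass strictly inside (0, 1).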
         A Proposal: The Group Level
• Within-subjects model:   D = d'_C1 - d'_C2

• Error Model:             D ~ N(0, σ)

• Error + Effect Model:

     Error ~ N(0, σ1)          Effect ~ TN(μ, σ2)

     Member ~ Bernoulli(r)

     D = Member · Error + (1 - Member) · Effect
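A generative sketch of the group-level mixture, assuming TN(μ, σ2) denotes a normal truncated below at zero (an assumption on my part; the slides do not define the truncation), sampled here by simple rejection:

```python
import random

def sample_D(r, sigma1, mu, sigma2, rng=random.Random(1)):
    """One draw of D = d'_C1 - d'_C2 under the Error + Effect mixture."""
    if rng.random() < r:                  # Member ~ Bernoulli(r)
        return rng.gauss(0.0, sigma1)     # Error ~ N(0, sigma1)
    # Effect ~ TN(mu, sigma2): assumed normal truncated below at 0,
    # sampled by rejection.
    while True:
        x = rng.gauss(mu, sigma2)
        if x > 0:
            return x

draws = [sample_D(r=0.0, sigma1=0.5, mu=1.0, sigma2=0.3) for _ in range(1000)]
print(min(draws) > 0)  # with r = 0, every draw comes from the positive Effect
```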
                   List Length Data
              (Dennis & Humphreys 2001)

• Is the list length effect in recognition memory a
  necessary consequence of interference between list
  items?
• List types

   Long        |---------Study---------|Filler|--Test--|
   Short Start |-Study-|-------Filler---------|--Test--|
   Short End   |----Filler-----|-Study-|Filler|--Test--|
        Sampling Variability

[Figure: Rate Parameterization Posteriors]

        Sampling Variability

[Figure: Discriminability & Criterion Posteriors]

        Sampling Variability

[Figure: Discriminability & Log-Bias Posteriors]
Edge Corrections

        • Always assuming a beta
          posterior distribution of
          rates – never a single
          number
        • Assumption of uniform
          priors provides
          principled method for
          determining “degree” of
          correction
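The "degree" of correction the uniform prior implies can be read straight off the Beta posterior: with k successes in n trials the posterior is Beta(k + 1, n - k + 1), whose mean never reaches 0 or 1, so no ad hoc nudge is needed. A minimal sketch:

```python
def posterior_mean_rate(successes, trials):
    """Posterior mean of a rate under a uniform prior: the posterior is
    Beta(successes + 1, trials - successes + 1), whose mean
    (successes + 1) / (trials + 2) can never be exactly 0 or 1."""
    return (successes + 1) / (trials + 2)

print(posterior_mean_rate(0, 20))   # 1/22, not 0
print(posterior_mean_rate(20, 20))  # 21/22, not 1
```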
            Excluding Subjects

[Figure: Guess vs SDT Model in the Short-Start Condition]
                       List Length Data
                  (Kinnell & Dennis, in prep)

• Does contextual reinstatement create a length effect?
• List types

   Long Filler     |---------Study---------|Filler|--Test--|
   Short Filler    |-Study-|-------Filler---------|--Test--|
   Long NoFiller   |---------Study---------|--Test--|
   Short NoFiller  |-Study-|-------Filler--|--Test--|
Evidence in Favour of Null: Filler Error Model  [figure]
Evidence in Favour of Null: Filler Error + Effect Model  [figure]
Evidence in Favour of Null: No Filler Error Model  [figure]
Evidence in Favour of Null: No Filler Error + Effect Model  [figure]
Evidence in Favour of Null: Frequency Error Model  [figure]
Evidence in Favour of Null: Frequency Error + Effect Model  [figure]
    Evidence in Favour of Null Hypothesis:
                  Summary


  Factor      Log-likelihood   Log-likelihood     Difference        Odds
              (Error Only)     (Error + Effect)   (+ve = Effect)

  Filler          -78.05           -81.07             -3.02         20:1
  No Filler       -82.06           -82.85             -0.79          2:1
  Frequency       -78.98           -67.96             11.01      60476:1
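The odds column is the exponential of the log-likelihood difference, read in favour of whichever model scores higher; a quick check reproduces the tabled values:

```python
import math

# Log-likelihood differences (Error + Effect minus Error Only) from the table.
differences = {"Filler": -3.02, "No Filler": -0.79, "Frequency": 11.01}

for factor, diff in differences.items():
    odds = math.exp(abs(diff))  # odds in favour of the better-scoring model
    favoured = "Effect" if diff > 0 else "Error only"
    print(f"{factor}: about {odds:.0f}:1 for {favoured}")
```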
                 Used Iteratively
• Likelihood principle:
   – Inference should depend only on the outcome of
     the experiment, not on its design
• Conventional statistical inference procedures violate
  the likelihood principle
• They cannot safely be used iteratively, because
  increasing the sample size changes the design
• Bayesian methods (like ours) avoid this problem
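The point can be illustrated with the textbook binomial vs. negative-binomial example (a standard result, not specific to this model): whether you stopped after a fixed number of trials or after a fixed number of successes, the likelihoods differ only by a constant factor, so the same prior yields the same posterior over the rate:

```python
from math import comb, isclose

k, n = 7, 20  # 7 successes in 20 trials, however sampling stopped

def binomial_lik(theta):
    # Design A: stop after a fixed number of trials n.
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def neg_binomial_lik(theta):
    # Design B: stop after a fixed number of successes k.
    return comb(n - 1, k - 1) * theta**k * (1 - theta)**(n - k)

# The two likelihoods differ only by a constant factor, so with the same
# prior they produce the same posterior over theta.
r1 = binomial_lik(0.3) / neg_binomial_lik(0.3)
r2 = binomial_lik(0.6) / neg_binomial_lik(0.6)
print(isclose(r1, r2))  # the ratio does not depend on theta
```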
               Small Sample Sizes

• No asymptotic assumptions
• Applicable even with small samples


• Note: there could still be problems if the model's
  assumptions are strongly violated
                   Conclusions
• Sampling variability - variability estimates should
  depend on how many samples you have per cell
• Edge corrections – should follow from analysis
  assumptions
• Excluding subjects – should be done in a principled
  way
• Evidence in favour of null hypothesis
• Used iteratively – without violating likelihood
  principle
• Small sample sizes – not only applicable in the
  limit
Evidence in Favour of Null Hypothesis: Filler

[Figure; x-axis: d']

Evidence in Favour of Null Hypothesis: No Filler

[Figure; x-axis: d']

Evidence in Favour of Null Hypothesis: Frequency

[Figure; x-axis: d']