Levels of processing memory model overview - Higher Level .pptx

					Levels of processing memory
model overview
 Explain how the levels of processing model of memory works

 SAQ comparing and contrasting two models of memory

 Explain how biological factors affect cognitive processes
  (memory, amnesia & serotonin)
Develop your own theory…
 Imagine you are a theorist who really disagrees with
  Atkinson and Shiffrin.
 Present a one page proposal for new ideas for a theory
  on memory.
 The basis of your proposal is provided by the
  weaknesses of the MSM (notes; Crane 75)- you are
  trying to overcome such weaknesses.
 To justify your proposal use examples from your own
  memory experience.
  Levels of Processing (Craik & Lockhart, 1972)
 This model was proposed as alternative to the multi-store
  model. Craik & Lockhart rejected the idea of separate
  memory structures put forward by Atkinson & Shiffrin
  The model places an emphasis on memory
   processes rather than on structure, unlike the MSM
  The LOP model is based on the idea that the
   strength of a memory trace is determined by
   how the original information was processed
  LOP: Shallow & Deep Processing
 The model proposed that there are different levels of
  processing that influence how much is remembered

 Shallow processing – the 1st stage of processing – e.g.
  recognising the stimulus in terms of its physical appearance
  or structure – e.g. the shape of the letters a word is written in

 Deep processing – the deepest level of processing involves
  encoding the input in terms of its meaning (semantics)

 The model assumes that shallow processing will lead to
  weak short term retention and deep processing will enable
  long term retention
  Shallow processing                       Deep processing

     Structural          Phonological         Semantic
    (looks like)         (sounds like)        (means)

Weak memory trace –                      Strong memory trace –
leading to short-term retention          leading to long-term retention
  LOP: Maintenance & Elaborative Rehearsal
 The model also proposed that different ways of rehearsing
   have an influence on how well we remember:

1. Rehearsing material simply by rote repetition is called
   maintenance rehearsal and is regarded as shallow processing.

2. Making links with semantic (meaning) associations is called
   elaborative rehearsal and is seen as deep processing

 The assumption of the model is that shallow processing will
  give rise to weak short term retention and deep processing will
  ensure strong, lasting retention
     Research Studies for the LOP Model
     (support & criticisms):
1. Elias & Perfetti (1973) study of acoustic & semantic
   encoding & Hyde and Jenkins (1973) effects of the
   way in which words are processed on recall
2. Tyler et al (1979) Cognitive Effort & Memory &
   Palmere et al. (1983) elaboration and recall
3. General evaluative points for the LOP model of memory
  Elias & Perfetti (1973) study of acoustic & semantic encoding
 AIM: Elias & Perfetti (1973) aimed to investigate
  encoding and memory
 PROCEDURE: They gave PPs a number of different
  tasks to perform on each word in a list, such as:
             finding another word that rhymes
                             or
      finding a word that means the same or similar
              (synonym) to the word on the list.
 The rhyming task involved only acoustic coding and
  hence was a shallow level of processing.
 The synonym task involved semantic coding and hence
  was a deep level of processing.
 The participants were not told that they would be asked to
  recall the words, but nevertheless they did remember some
  of the words when subsequently tested.
 This is called incidental learning as opposed to intentional
  or deliberate learning.
 FINDINGS: Participants recalled significantly more words
  following the synonym task than following the rhyming task,
 suggesting that deeper levels of processing lead to better
  recall and thus supporting the LOP model.
 EVALUATION: Ecological validity / experimental research
  Hyde and Jenkins (1973) effects of the way in which words are
  processed on recall

 AIM: To investigate the effects of shallow & deep processing on recall
 PROCEDURE: Hyde and Jenkins (1973) presented auditorily lists
  of 24 words and asked different groups of participants to perform
  one of the following so-called orienting tasks:
 Rating the words for pleasantness (e.g. is “donkey” a
  pleasant word?)
 Estimating the frequency with which each word is used
  in the English language (e.g. how often does “donkey”
  appear in the English language?)
 Detecting the occurrence of the letters “e” & “g” in the
  list words (e.g. is there an “e” or a “g” in the word
 Deciding the part of speech appropriate to each word
  (e.g. is “donkey” a verb, noun or an adjective?)
 Deciding whether the words fitted into particular
  sentences (e.g. does the word “donkey” fit into the
  following sentence > “I went to the doctor and showed
  him my ............”)
 Five groups of participants performed one of these tasks,
  without knowing that they were going to be asked to recall
  the words (incidental learning groups).
 An additional five groups of participants performed
  the tasks but were told that they should learn the words
  (intentional learning groups).
 Finally, there was a control group of participants who were
  instructed to learn the words but did not do the tasks.
 FINDINGS & CONCLUSIONS: After testing all the
  participants for recall of the original word list, Hyde and
  Jenkins found that there were minimal differences in the
  number of items correctly recalled between the intentional
  learning groups and the incidental learning groups.

 This finding is predicted by Craik and Lockhart and
  supports LOP because they believe that retention is simply a
  byproduct of processing and so intention to learn is
  unnecessary for learning to occur.
 In addition, Hyde & Jenkins found that the pleasantness
  rating and rating frequency of usage tasks produced the
  best recall.
 It was found that recall was significantly better for
  words which had been analysed semantically (i.e. rated
  for pleasantness or for frequency) than for words which
  had been processed more superficially (shallow) (i.e.
  detecting 'e' and 'g').
 This is also in line with the LOP model because
  semantic analysis is assumed to be a deeper level of
  processing than structural (shallow) analysis.
 They claimed that this was because these tasks involved
  semantic processing whereas the other tasks did not.
 One interesting finding was that incidental learners
  performed just as well as intentional learners in all tasks.
  This suggests that it is the nature of the processing
  that determines how much you will remember, rather
  than the intention to learn.
 Bear this in mind when you are revising – the more
  processing you perform on the information (e.g. quizzes,
  essays, spider diagrams etc.) the more likely you are to
  remember it .
 Not totally clear what level of processing is used for the
  different tasks.
 More processing = more time spent elaborating the
  material. Is this the same or are there two different
  factors involved? – is time different to elaboration?
 Ecological validity, experimental method/ applicability?
  The Criticisms/Limitations of the LOP Model
 It is usually the case that deeper levels of processing do lead
  to better recall.
 However, there is an argument about whether it is the
  depth of processing that leads to better recall or the amount
  of processing effort that produces the result – see Tyler et
  al (1979)
 Also, the MSM and the research that supports it can be
  used as a counter claim in evaluation of the LOP – as the LOP
  fails to recognize that there are indeed two separate stores
  of memory (STM & LTM)
   Tyler et al (1979) Cognitive Effort & Memory
 AIM: Tyler et al (1979) investigated the effects of cognitive
  effort on memory

 PROCEDURES: They gave participants two sets of anagrams
  to solve - easy ones, such as DOCTRO or difficult ones such
  as TREBUT.
 Afterwards, participants were given an unexpected test for
  recall of the anagram words.
 FINDINGS: Even though the processing level was the same,
  because participants were processing on the basis of
  meaning in both conditions, participants remembered more
  of the difficult anagram words than the easy ones.
 So Tyler et al concluded that retention is a function of
  processing effort, not processing depth.

 KEY EVALUATION POINT : Craik and Lockhart
  themselves (1986) have since suggested that factors
  such as elaboration and distinctiveness are also
  important in determining the rate of retention; this idea
  has been supported by research.

 For example, Hunt and Elliott (1980) found that people
  recalled words with distinctive sequences of tall and
  short letters better than words with less distinctive
  letter sequences.
  Palmere et al. (1983) elaboration and recall
 AIM: Palmere et al. (1983) carried out a study of the effects of
   elaboration on recall
 PROCEDURE: They made up a 32-paragraph
   description of a fictitious African nation.
1. Eight paragraphs consisted of a sentence containing a
   main idea, followed by three sentences each providing
   an example of the main theme;
2. Eight paragraphs consisted of one main sentence
   followed by two supplementary sentences;
3. Eight paragraphs consisted of one main sentence
   followed by a single supplementary sentence
4. The remaining eight paragraphs consisted of a single
   main sentence with no supplementary information
 FINDINGS: Recall of the main ideas varied as a function of
  the amount of elaboration (extra information given).
 Significantly more main ideas were recalled from the
  elaborated paragraphs than from the single-sentence
  paragraphs.
 This kind of evidence suggests that the effects of
  processing on retention are not as simple as first
  proposed by the levels of processing model.
 EVALUATION: suggests that elaboration is important –
  and Craik & Lockhart (1986) did update their model to
  include ‘elaboration & distinctiveness’ as having a major
  influence on retention
Evaluation of LOP
 Influential model – emphasis on mental processes rather
  than rigid structures. Descriptive rather than explanatory
  model – e.g. how do you define what is deep and what is shallow?

 Does not include the amount of effort one puts into
  learning as an important factor

 However, Craik and Lockhart (1986) have suggested that
  elaboration on information and the distinctiveness of
  information are important in determining memory
  General evaluative points relating to the research

 Another problem is that participants typically spend a
  longer time processing the deeper or more difficult tasks.
 So, it could be that the results are partly due to more time
  being spent on the material.
 The type of processing, the amount of effort & the length of
  time spent on processing tend to be confounded.
 Deeper processing goes with more effort and more time, so
  it is difficult to know which factor influences the results.
 Associated with the previous point, it is often difficult with many
  of the tasks used in levels of processing studies to be sure what
  the level of processing actually is.
 For example, in the study by Hyde & Jenkins (described above)
  they assumed that judging a word’s frequency involved thinking
  of its meaning, but it is not altogether clear why this should be so.
 Also, they argued that the task of deciding the part of speech to
  which a word belongs is a shallow processing task – but other
  researchers claim that the task involves deep or semantic processing
 So, a major problem is the lack of any independent measure of
  processing depth. How deep is deep?
 A major problem with the LOP is circularity,
 i.e. there is no independent definition of depth.
 The model predicts that deep processing will lead to better
  retention - researchers then conclude that, because
  retention is better after certain orienting tasks, they must,
  by definition, involve deep processing
Eysenck (1978) claims:
   “In view of the vagueness with which depth is defined, there is
   danger of using retention-test performance to provide
   information about the depth of processing and then using the ...
   depth of processing to ‘explain’ the retention-test performance, a
   self-defeating exercise in circularity”.
 What he means is that if a person performs well on a test of recall
  after performing a particular task then some researchers will
  claim that they must have performed a deep level of processing
  on the information in order to remember it - a circular argument.
 Another objection is that levels of processing theory does not
  really explain why deeper levels of processing are more effective –
  it is descriptive rather than explanatory
 Eysenck (1990) claims that it describes rather than explains
  what is happening.
 However, recent studies have clarified this point - it appears
  that deeper coding produces better retention because it is
  more elaborate.
 Elaborative encoding enriches the memory representation of
  an item by activating many aspects of its meaning and
  linking it into the pre-existing network of semantic associations.
 Deep level semantic coding tends to be more elaborated
  than shallow physical coding and this is probably why it
  worked better.
LAQ/SAQ: Compare & contrast two models of
one cognitive process
i.e. Give the similarities and differences
between the Multi-Store and Levels of
Processing models of memory
Similarities between the two models
 Similarities between the two models (comparison):
  they are both models used to explain memory,
  both use the information processing approach, both
  have experimental research support (give examples),
  both have weaknesses in terms of that research – e.g.
  ecological validity – both are too simple, and they both
  support the key principles of the LOA
     The Models: What is different?

1.    The MSM focuses on the structure of memory, and makes a
      clear distinction between STM & LTM, while LOP focuses on the
      depth of processing as what determines retention
2.    LOP does not make the distinction between LTM & STM, and
      the MSM proposes that STM & LTM are distinct separate stores
      of memory, and are different to each other in terms of encoding,
      capacity & duration
3.    The MSM model is linear, stating that STM is limited in capacity
      & duration, and rehearsal leads to a transfer from STM to LTM
4.    MSM states that rehearsal is the way in which information is
      transferred from STM to LTM, but LOP focuses on depth of
      processing on determining the length of retention
     The Models: What is similar?
1.   Both are models of memory which are influenced by the computer metaphor; both
     come from the information processing approach and were developed around the same
     time
2.   Both models are based on the key principles of the level of analysis (LOA), that ‘models of
     psychological processes can be proposed’ and ‘cognitive processes actively organize
     and manipulate information that we receive’
3.   Both seek to explain how memory works, both offer explanations of why some
     information is retained longer than other information, and both have given us insight
     into the cognitive processes involved in memory
4.   Both suggest that encoding has an influence on retention – LOP – if processing is deep
     = semantic = longer retention & acoustic = shallow = short retention (Elias & Perfetti,
     1973) in MSM - LTM encoding is semantic and STM is acoustic (Baddeley, 1966)
5.   Both suggest that rehearsal is important – MSM – transfer from STM to LTM, and the
     LOP makes the distinction between ‘maintenance and elaborative’ rehearsal
6.   Both are limited in their ability to explain how memory works, and fail to account for
     the complexity of human behavior
7.   They both take into account factors that the other model ignores – LOP ignores
     distinction between STM & LTM – MSM – does not take into account the significance
     of depth of processing
     The research supporting the models: What is similar?
1.    Both models have experimental research support, with the use of
      controlled environments and experimental designs to examine
      causal relationships between the manipulation of the IV and its
      effect on the DV
2.    This similarity between the research can be seen through
      comparing Baddeley’s (1966) study of the encoding in STM &
      LTM supporting the MSM model, and Hyde & Jenkins (1973)
      study of the effects of the depth at which words are processed on
      recall which supports the LOP model, - both studies required
      participants to recall word lists, both were carried out in
      controlled conditions, both had experimental designs, both can
      be criticized for having low ecological validity.
3.    Nevertheless, both studies support their respective models, Hyde
      & Jenkins (1973) showed that depth of processing does influence
      recall – supporting LOP, and Baddeley’s study suggests that STM
      encoding is acoustic & LTM is semantic – supporting MSM
     The research supporting the models: What is different?
1.    The MSM model has additional support from case studies of
      brain damaged individuals such as HM (Milner, 1966) who
      suffered from anterograde amnesia, and was unable to transfer
      information from STM to LTM. Furthermore, Shallice &
      Warrington’s (1970) case study of KF describes a severely impaired
      STM after a motorcycle accident, while his LTM remained largely
      intact. This strengthens the validity of the MSM model, suggesting
      that there are indeed two distinct separate stores of memory (STM & LTM)
2.    The LOP model also has difficulties explaining certain findings of
      studies which support the MSM model, for example,
      Glanzer & Cunitz (1966) carried out an experiment on the serial
      position effect, which supports MSM suggesting that there are
      two separate stores of memory, but LOP fails to offer an
      explanation for this phenomenon.
3.    However, MSM is also limited, as it is unable to explain the
      findings of Hyde & Jenkins (1973) study which clearly suggest that
      depth of processing is important, thus supporting LOP
     The evaluation of the research & models: What is similar?
1.    Both are too simplistic, and both fail to take into account
      what the other model does
2.    The experimental research that supports both models has
      methodological issues such as low ecological validity, as much
      of the research is carried out in an artificial environment which
      is very different to real life situations
3.    The experimental research supporting both models has few
      ethical issues, but it is vital for such studies to closely follow
      APA ethical guidelines
4.    Both models have supporting research which is easily replicable;
      it can and should be replicated cross-culturally to ensure that
      the models are valid
5.    They both have practical applications: MSM – highlights the
      importance of rehearsal to transfer information to LTM, and
      ‘chunking’ to enhance the capacity of STM; furthermore, LOP
      suggests that, when studying, ‘deep’ processing tasks will lead
      to longer retention.
     The evaluation of the research & models: What is different?
1.    The research that supports the LOP, showing that depth of
      processing is important in retention, is limited by the
      possibility of confounding variables of processing effort and
      time; however, the research supporting the MSM does not have
      such problems
2.    The LOP model has a major problem in terms of its ‘circularity’,
      as the model predicts that deep processing will lead to better
      retention, and researchers conclude that because better
      retention is due to certain processing tasks, they must involve
      ‘deep’ processing. On the other hand the MSM model does not
      have such problems.
3.    However, Craik & Lockhart (1986) have been open to adapting
      the LOP model, and did integrate ‘elaboration &
      distinctiveness’ of information as crucial factors influencing
      retention – and in contrast the MSM fails to account for how
      more distinctive information can be retained for longer
Word bank:
         However…
         On the other hand…
         In contrast…
         In comparison…
         Conversely…
         A similarity is…
         A difference is..
         Moreover…
