The Representation of Information

Visuospatial and Knowledge Representation

Visuospatial Representation
Spatial Knowledge, Imagery, Visual Memory
 What is a representation?
 Four aspects of representation
     The represented world
     The representing world
     Set of informational relations specifying how the two
      correspond
     Set of processes that extract and use
      information from the representation
Meaning
   Mental representations are carriers of meaning
   In order to interact appropriately with the
    environment we represent info from it and
    manipulate those representations
   Correspondence
     Meaning derived from how the representation stands in
      consistent relation to the represented world
   Conceptual
     Meaning determined by relations to other
      representations
Spatial Knowledge
 How we represent and use spatial
  information
 Separate from strictly verbal knowledge
     Semantic propositions
   Dependent on the linear dimension of
    space
Spatial Cognition
 How is the representing world like the
  represented world?
 The represented world is a space
 The representing world is a space
     What kinds of processes might be involved?
Space as a representation
   Spatial representation
   Representing world is a space. What is a space?
       Geometric entity in which locations are specified relative to a set
        of axes
       Dimensionality defined by the number of axes that can point in
        independent directions
       Of interest is the distance between items, which can be
        measured in different ways
           Euclidean
                Straight line
                Non-independent dimensions (e.g. saturation and brightness)
           City-block
                Distinct dimensions (e.g. color and size)
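The two distance measures can be sketched in a few lines of Python (a minimal illustration; the items and their coordinates are invented):

```python
import math

def euclidean(p, q):
    # Straight-line distance: suits non-independent dimensions
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def city_block(p, q):
    # Sum of differences along each axis: suits distinct dimensions
    return sum(abs(a - b) for a, b in zip(p, q))

# Two items placed on two (hypothetical) psychological dimensions
item1, item2 = (0.0, 0.0), (3.0, 4.0)
print(euclidean(item1, item2))   # 5.0
print(city_block(item1, item2))  # 7.0
```

Note that city-block distance is always at least as large as Euclidean distance for the same pair of points.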
Space as a representation
   Physical world experienced (at least
    perceptually) has three dimensions (+ time)
   However, the representing world is not confined
    to any number of dimensions
   Represented world does not need to be spatial
     Conceptual info can be represented spatially
     More on that later
Spatial Representation
   Analog representation
     Representation mimics the structure of the
      represented world
     Multidimensional scaling
   Propositional
     Abstract  assertions regarding the state of the
      represented world
     Not tied to a particular sensory modality
Example: Multidimensional Scaling
(MDS)
 MDS
     Mathematical technique for taking a set of distances and finding the best-fitting
      spatial configuration that corresponds to those distances
 Input: a distance or proximity matrix that describes how close every
  object in a set is to every other object
     N objects are represented by N(N-1)/2 numbers (distances)
 Output: a geometric representation where every object is
  represented as a point in D-dimensional space
     Each object is represented as a point in space
     N objects are represented by ND numbers (coordinates)
 Purposes of MDS
     Give psychological interpretations to the dimensions
     Reveal the dimensionality of a data set
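The idea can be sketched with classical (Torgerson) MDS, one of several MDS algorithms; this is a minimal NumPy sketch, with the four example points invented:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: recover coordinates from a distance matrix.

    D is an N x N matrix of pairwise distances; only N*(N-1)/2 of its entries
    are independent. Returns N points in `dims`-dimensional space whose
    pairwise distances approximate D (up to rotation and reflection).
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dims]  # keep the largest eigenvalues
    scale = np.sqrt(np.maximum(eigvals[order], 0))
    return eigvecs[:, order] * scale          # N x dims coordinates

# Four "cities" on a plane: build their distance matrix, then recover them.
pts = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D)
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D_hat))  # True: distances preserved up to rotation/reflection
```

The recovered configuration may be rotated or mirrored relative to the original, which is why interpreting the dimensions is a separate step.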
MDS
  [Figure: a table of pairwise distances between locations]
  Difficult to get a sense of relative distance by means of this information
MDS
  [Figure: the two-dimensional configuration recovered from the distances]
  MDS recovers the original relative locations of the objects from the
  distances (up to rotation and reflection)
  Flipping on the horizontal axis would give a rough approximation of
  NSEW
  Analog representation
Propositional Representation
   (A,B): 10 miles east
   (E,C): 20 miles south, 10 miles east
   (F,D): 10 miles south, 10 miles west
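A propositional store like the one above can be sketched as a list of assertions (the landmark names and relations follow the slide; the lookup function is illustrative):

```python
# Each proposition is an abstract assertion about the represented world,
# stored here as a (subject, object, relation) tuple.
propositions = [
    ("A", "B", "10 miles east"),
    ("E", "C", "20 miles south, 10 miles east"),
    ("F", "D", "10 miles south, 10 miles west"),
]

def relation(x, y):
    # Look up the asserted relation between two landmarks, if any.
    for subj, obj, rel in propositions:
        if (subj, obj) == (x, y):
            return rel
    return None  # not represented: adding a point may require many new propositions

print(relation("A", "B"))  # 10 miles east
print(relation("B", "A"))  # None: the reverse relation was never asserted
```

The `None` case hints at the cost noted below: each new fact must be asserted explicitly.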
Analog vs. Propositional
   Analog
       Good for configural info
       Easy incorporation of new info
   Propositional
       Time-consuming
       Lots of info must be represented
            E.g. one point added may require many propositions
       Allows for communication of spatial knowledge and incorporation
        of additional information not related to distance
            Going south on I35 from OK, one must pass through Denton to get
             to either Fort Worth or Dallas
Cognitive Maps
   Where is Seattle?
   Where is Terrill Hall?

   Large vs. small-scale space
     Maps of small-scale (navigable) space
        Cognitive geography

     Maps of large-scale space
        What is our sense of the locations of items in the world?

   Hierarchical representation
Small scale space
   Survey knowledge
        Bird's eye view (map knowledge)
       Good for global spatial relations
       Easy acquisition
       Not so great for orientation
   Route knowledge
       Gained from navigating through the environment
            Locate landmarks and routes within a general frame of reference
            Landmark knowledge
                  Salient points of reference in the environment
     More difficult to acquire but better for navigation in irregular
      environments
     May lead to survey knowledge
            Perhaps a different type
            Cognitive collage vs. orientation free
Large scale space
   Which is farther north:
        Denton, TX or Chicago, IL?
        Portland, OR or Portland, ME?
   Hierarchical representation of locations

   Relative locations of smaller regions are determined with respect to
    larger regions.
        States are superordinate to cities, countries superordinate to states
   USA is south of Canada
        Maine is just south of Canada
        Oregon is well south of Canada
   Oregon must be south of Maine
        Cities in Oregon must be south of cities in Maine
        In this case such cognitive economy works against us
             Portland OR is north of Portland ME
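The Portland example can be sketched directly: a judgment made through the superordinate regions gives the wrong answer, which the (approximate, real) latitudes expose. The function name and encoding are illustrative:

```python
# Hierarchical spatial knowledge: city locations are stored only via their
# superordinate region (the state), so a judgment inherits the states' relation.
city_state = {"Portland, OR": "Oregon", "Portland, ME": "Maine"}

# "Oregon is well south of Canada; Maine is just south of Canada."
state_is_farther_south = {"Oregon": True, "Maine": False}

def seems_farther_south(city_a, city_b):
    # Cognitive economy: answer from the superordinate regions alone.
    return (state_is_farther_south[city_state[city_a]]
            and not state_is_farther_south[city_state[city_b]])

# The hierarchy says Portland, OR lies south of Portland, ME...
print(seems_farther_south("Portland, OR", "Portland, ME"))  # True

# ...but approximate real latitudes (degrees N) say the opposite.
latitude = {"Portland, OR": 45.5, "Portland, ME": 43.7}
print(latitude["Portland, OR"] > latitude["Portland, ME"])  # True: OR is north
```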
Hierarchical representations
   Judge relative position of
    cities (Stevens and
    Coupe)
   When superordinate info
    congruent with question,
    performance better
       Is x north of y when one of
        right side maps presented
Using spatial cognition
   Adaptive context
       Locating and way finding
       Tool use
       Mental rotation and mental movement
   Symbolic representations of space
       Drawings, maps, models
       Spatial language
   Thinking
       Transitive reasoning
            A > B, B > C
            A?C
       Metaphor
       Problem-solving and creativity
       Taking someone else's point of view?
Imagery
   Some information in memory is purely verbal
     Who wrote the Gettysburg Address?
   Other memories seem to involve mental images
     Trying to recall a procedure
     Making novel comparisons of visual items
   What is a mental image?
     How  are mental images represented and processed?
     Are mental images like visual images?
Evidence for use of visual imagery

   Selective interference
     Segal & Fusella
     Imagery interferes with detection of stimuli
      (sensitivity decreased)
     Auditory imagery interfered with auditory
      detection, visual imagery with visual stimuli
   Manipulation of images
     Mental rotation studies
Evidence for use of visual imagery
   Kosslyn
       Learn a map
       Mentally travel from one
        point to another
       Measure time to make
        this mental trip
       Results
            Time to make trip
             increases with distance
             between points
            Times increase with
             increase in the imagined
             size of the map.
Evidence for use of visual imagery
   Moyer 1973

   Subjects were given the
    names of two common animals
    and asked to judge which was
    larger
      Which is larger, a moose or a
       roach?
      Wolf or Lion?
   The time delays as a function
    of size difference were similar
    to those usually found for
    perceptual judgments.
Are visual images visual?
   Plenty of evidence to suggest a spatial component to
    visual imagery, but perhaps the visual part is
    represented propositionally
   Kerr
       Congenitally blind also take longer to imagine longer map routes
        like the one in Kosslyn
   Images are also not as sharp as real pictures
       Form a mental image of a tiger
            Does it have stripes?
                  How many?
   It is hard to examine details of mental images that would
    require eye movements
Paivio's Dual-Coding Theory
   Information is mentally represented either in a verbal
    system (propositional) or a nonverbal (analogical)
    system (or both).
      Each system contains different kinds of information.
      Each concept is connected to other related concepts
       in the same system and the other system.
      Activating any one concept also leads to activation of
       closely related concepts.
Santa 1977
   Some evidence of dual
    coding
   Ss presented array of
    objects or words
   On test presentation
    asked whether the
    elements were same as
    studied
       E.g. In geometric
        condition first two would
        be yes responses
Santa 1977
   Results of positive
    responses
   Spatial configuration is
    preserved in geometric
    encoding
   Compared to verbal
    presentation, which was
    encoded in typical
    English reading style
    and benefited from the
    linear configuration
Are visual images visual?
   Evidence from neuroscience
   Patients with lesions of visual
    cortex that lead to perceptual
    problems also have problems
    with mental imagery
   ERP and PET evidence:
    Visual imagery leads to
    activation of visual cortex.
        Auditory imagery does not
   In general, results of studies
    from mental rotation to brain
    imaging support the idea of
    both visual and spatial
    representation of images
Translating Words to Images
   Franklin and Tversky
   Create a mental image based on the
    description
   Asked to identify location of items in
    that imagined environment based on
    a given orientation
   Results are what one might expect
    given an imagined spatial
    environment
      Up-down, front-back more relevant
       in navigating real world
      Left-right confusion in real world and
       imagined world
Visual memory

   Although our visual
    memory seems to be
    excellent, it turns out
    not to be that great in
    many respects

   In general, our memory
    for details is lost, much
    like with other types of
    memory
Visual memory
   Memory for pictures is quite good
    generally
     Again, don't get too detailed
     Standing (1973)
        Presented 10000 photos over several days
        Old-New memory over 80%

   Picture superiority effect
     Better memory for pictures than words
Knowledge
Representation
Knowledge representation
 Spatial Representation
 Featural Representations
 Semantic Networks
 Structured Representations
Space as a representation
   Spatial representation
   Representing world is a space
       Geometric entity in which locations are specified relative to a set
        of axes
       Dimensionality defined by the number of axes that can point in
        independent directions
       Of interest is the distance between items
           Euclidean (non-independent dimensions)

           City-block (distinct dimensions)

   Represented world does not need to be spatial
       E.g. conceptual info can be represented spatially
MDS (Morse code confusability)

  Rothkopf (1957) played pairs of signals and asked people whether they
  were the same or different
  [Figure: the resulting confusion matrix] Any patterns?

http://voteview.com/ideal_point_morse_code_data.htm
Rothkopf (1957)
  [Figure: two-dimensional MDS solution ("Common Space") for the letter
  confusion data. Each letter is plotted as an object point against
  Dimension 1 and Dimension 2, with regions for signals of one, two,
  three, and four tones]
Rothkopf (1957)
  [Figure: the same MDS solution annotated by signal composition, with
  regions for signals of one, two, three, and four tones]
  Circle: only dashes (black) or more dashes (purple)
  Box: only dots (black) or more dots than dashes (red)
  X C Z A P & N have equal numbers of dots and dashes
Using space as a representation
   Distance determines speed of response
     Semantic distance
     Robin is a bird
     Goose is a bird
   Problems
       No explanation of false
        responses
            Technically everything would
             be some distance from
             everything else
       Why necessarily would greater
        distance mean slower response?
Spatial models
   Use (in one form or another) is widespread
    in psychological models
     Encompass the notion of mental proximity
     Allow for mathematical modeling of
      psychological phenomena
     Meaning and structure can be derived despite
      lack of detail
Spatial models
   Weaknesses
       Difficult to incorporate context of the situation which may alter a
        spatial representation
       Similarity judgments can lead to situations in which distance
        alone would be unable to account for result
           Lamp:Moon (give light, bright)
           Ball:Moon (round)
           Ball:Lamp (?)
       Cognitive processes often need to know not only whether things
        might be similar but also how that similarity is determined
           Spatial models do not have symbols representing the
            properties of objects
           Space is continuous
Featural representations
 Features are symbols in mental
  representations
 Two properties
 Discrete, unlike spatial representations
     Allows a process to access specific aspects of
      the representations
   Less inherent structure than spatial reps
Featural representations
   Two important processes involved in featural
    models
   One is the method by which features are to be
    used to describe an item in the represented
    world
     Determine which features are available
     Specify which ones will be chosen to represent the
      item
   The other is some sort of comparison process to
    distinguish items in the represented world
Comparison of feature sets
   Dissimilar pair: little overlap
   Similar pair: much overlap
Feature Comparison Model
 Example: Smith, Shoben, & Rips
  (1974)
 Concepts represented by a listing of
  features (one-element characteristics)
 Defining feature
     Essential to defining a concept
   Characteristic feature
     Common to the meaning of a concept but
      not essential
BIRD
  Defining: feathers, beak, wings, lays eggs
  Characteristic: small, flies, sings, migrates

CANARY
  Defining: small, sings, wings, lays eggs
  Characteristic: yellow, caged

CHICKEN
  Defining: lays eggs, can't fly, clucks, feathers
  Characteristic: barnyard, white/brown, eggs/meat

BAT
  Defining: small, wings, fur, no eggs
  Characteristic: "blind", nocturnal, rabies, "vampires"
Feature Comparison Model
   Verify: "An A is a B"
   Stage 1: compare ALL features and compute a feature overlap score
       High overlap -> fast yes ("A robin is a bird")
       Low overlap -> fast no ("A robin is a bulldozer")
       Intermediate overlap -> Stage 2
   Stage 2: compare DEFINING features
       Match -> slow yes ("A chicken is a bird")
       Mismatch -> slow no ("A bat is a bird")
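A toy run of the two-stage process, using the feature lists from the slides; the overlap measure and the thresholds are illustrative inventions, not values from Smith, Shoben, & Rips:

```python
concepts = {
    "bird":    {"defining": {"feathers", "beak", "wings", "lays eggs"},
                "characteristic": {"small", "flies", "sings", "migrates"}},
    "canary":  {"defining": {"small", "sings", "wings", "lays eggs"},
                "characteristic": {"yellow", "caged"}},
    "chicken": {"defining": {"lays eggs", "can't fly", "clucks", "feathers"},
                "characteristic": {"barnyard", "white/brown", "eggs/meat"}},
    "bat":     {"defining": {"small", "wings", "fur", "no eggs"},
                "characteristic": {"blind", "nocturnal", "rabies", "vampires"}},
}

def verify(a, b, low=0.05, high=0.35):
    """Verify 'An A is a B', reporting the answer and its speed."""
    all_a = concepts[a]["defining"] | concepts[a]["characteristic"]
    all_b = concepts[b]["defining"] | concepts[b]["characteristic"]
    overlap = len(all_a & all_b) / len(all_a | all_b)  # Stage 1: all features
    if overlap >= high:
        return ("yes", "fast")
    if overlap <= low:
        return ("no", "fast")
    # Stage 2: intermediate overlap, so compare defining features only
    shared = concepts[a]["defining"] & concepts[b]["defining"]
    match = len(shared) / len(concepts[b]["defining"])
    return ("yes", "slow") if match >= 0.5 else ("no", "slow")

print(verify("canary", "bird"))   # ('yes', 'fast')
print(verify("chicken", "bird"))  # ('yes', 'slow')
print(verify("bat", "bird"))      # ('no', 'slow')
```

Note how the chicken and bat cases both reach Stage 2 and differ only in how many defining bird features they share.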
Featural Models vs. Spatial
Models
   Spatial models have difficulty in accounting for how
    similarity judgments made for various items correlate
    positively with number of shared features, as well as
    correlate negatively with number of distinctive features
    among pairs of items
   Also, because of the continuous representation, spatial
    models cannot account for how people can report the
    discrete features shared or not among items
       i.e. how does a spatial model account for discrete properties of
        items?
Featural Models
   Strengths:
   Consists of discrete elements that can be accessed, reported and used
      Features vs. Distances
   Can account for some problems seen in spatial models
      Moon, ball, lamp comparisons
      Explains typicality effect
           Quicker to verify more typical members
                "A carrot is a vegetable" -> fast
                "An endive is a vegetable" -> slow
           Typical: Stage 1 only
           Atypical: Stage 1 + Stage 2
   Like spatial model, still consists of primarily simple processes for
    representation (cognitive economy)
   Evidence from neuroscience suggests feature extraction by visual system
Feature Comparison Model
   Weaknesses
   Features
       No real method for determining which are defining & which are
        characteristic features
   Generality
       Difficult to extend beyond sentence verification task
   Structure
       Lacks the structure to distinguish between “A robin is a bird” vs. “A bird
        is a robin”
   Parts vs. wholes
       Some comparisons of mental representation require attention to the
        configural relations among features rather than just the features
        themselves
Semantic Network Model
   Connections of nodes (concepts) by relational links
       Beginnings: Collins & Quillian (1969, 1972)
   Propositions
       Smallest unit of meaning about which one can assert its truth or
        falsity
   Initially assumed a hierarchical structure
       More general concepts connected to more specific ones through
        class inclusion links
   However that could not explain certain findings
       Typicality effect
            Can identify canary as a bird more quickly than ostrich as a bird
            Solution: have links of different strengths
       Hierarchy is not always followed
            Ostrich is a bird, longer to verify than ostrich is an animal
Spreading Activation Model
From Collins & Loftus (1975)
  [Network diagram, approximately: Dog --isa--> Animal; Dog --has--> Bark;
  Dog --has--> 4 Legs; Dog --isa--> Pet; Pet --isa--> Domesticated;
  Pet --needs--> Attention; Pet --needs--> Feeding; Collie --isa--> Dog;
  Buddy --isa--> Collie cross; Collie --enjoys--> Herding;
  Collie --has--> Medium Size]
Spreading activation
   Activation spreads across
    the network of linked
    memory nodes
    (concepts)
   Associative priming
       Nonconscious priming of
        knowledge through
        spreading activation
       Example: lexical decision on word pairs. Are both
        items words?
            Respond no even if just one
             is a non-word
PDP model
   The spreading activation
    concept can also apply to
    neural net models
   Not so much which nodes are
    activated, but instead it is the
    pattern of activation that
    represents a concept
      Same set of nodes
        represent all concepts in
        memory
      Excitatory and inhibitory
        connections between the
        nodes (neurons)
      Input and output nodes
Spreading activation
   Factors controlling the spread of activation are the
    strength of the links and the number of links connected
    to a particular node
   Strength may be affected by a number of factors e.g.
    typicality (stronger link between robin and bird vs. ostrich
    and bird), repeated pairings in environment etc.
   Second, total activation is spread across all the links
       Keeps all nodes in the network from immediately 'lighting up' just
        because one concept is activated
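The division of activation across links can be sketched in a few lines; the network and the equal-split rule are illustrative (real models also weight links by strength):

```python
# A node with a large "fan" passes less activation down each link.
links = {
    "dog":    ["animal", "pet", "collie", "bark", "4 legs"],  # fan of 5
    "canary": ["bird"],                                        # fan of 1
}

def spread(source, activation=1.0):
    # Total activation is split equally across all of the source's links.
    fan = len(links[source])
    return {target: activation / fan for target in links[source]}

print(spread("dog")["animal"])   # 0.2: spread thin across five links
print(spread("canary")["bird"])  # 1.0: a single link carries it all
```

This is the mechanism behind the fan effect discussed below: more associations mean less activation per association.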
Spreading activation models
   Can explain
       Typicality effects
       Frequency effects
            Increases strength of association with repeated presentations
       Fan effect
            Activation of a node is spread out across numerous exit points,
             leading to a longer time for other nodes to reach threshold level
            So some memories may take longer to retrieve because of more
             associations
            Paradox of the expert
                  With more associations this would lead to reduced activation reaching
                   any one particular node (fan effect)
                  Solution
                       Not only quantity increases with expertise, but also
                        interrelatedness
                  Add nodes that represent the accumulation of other nodes (ACT*) and
                   integrate their information
Semantic network models
   Allow for a straightforward explanation of how
    memory content is accessed
   Spreading activation explains the
    interrelatedness of thought and how simple
    declarative sentences are understood
   Provides basis for unified theory of memory
     LTM is the culmination of all the links and nodes
      accumulated through experience
     New experiences lead to new connections and nodes
Semantic network models
   However other problems persist
   No real test for what information should be
    represented by a link or node
     E.g.   is-a, color-of, performs
   Spreading activation alone may not be able to
    account for complex problem-solving
     ACT  uses one type of network for memory and
      context, another set of rule-based processes for
      modeling reasoning
Structured Representations
Structured representations
   Semantic networks are a type of structured
    representation
   is-a(chicken, bird)
     Chicken  and bird are constants, the is-a relation the
      predicate (has a truth value)
     Chicken and bird are arguments to the is-a relation
   The semantic networks discussed are restricted
    to binary relationships (2 argument/elements or
    nodes)
Frames
   A frame is another type of
    structured representation
       Represents objects or
        events
   Slots and fillers
     Slots specify dimensions
      of variation of the
      concept represented by
      the frame
     Fillers are the specific
      way in which those roles
      are filled for that concept
          May have a default value
   [Diagram: a frame "Name-1" with slots Attribute-1: value-1,
    Attribute-2: procedure-1, Attribute-3: procedure-2, ...,
    Attribute-n: value-m; slots on the left, fillers on the right]
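Slots, fillers, and default values can be sketched with a plain dictionary; the slot names and defaults here are illustrative:

```python
# A frame as a dict of slots and fillers; unfilled slots fall back to
# default values.
defaults = {"color": "unspecified", "function": "unspecified"}

def make_frame(name, **fillers):
    frame = {"name": name, **defaults}
    frame.update(fillers)  # explicit fillers override the defaults
    return frame

cd_player = make_frame("Compact Disc Player",
                       color="Black",
                       function="Play music",
                       has_parts=["buttons", "volume control"])

print(cd_player["color"])             # Black (filled explicitly)
print(make_frame("Widget")["color"])  # unspecified (default filler)
```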
Frames
   Slots specify the relation
    between the concept
    represented by the frame
    and the fillers
   Example: color slot
       color(CD player, Black)
       The slot specifies the relation
        between the arguments CD player
        and Black
   Allow for relations among
    slots (e.g. has-parts,
    function)

   Compact Disc Player
     Color: Black
     Function: Play music
     Has-parts: buttons, volume control
     Used with: compact discs
Frames
   Although frames can be thought of largely in terms of binary relations, they
    are not limited to them
        Attribute (single argument)
              tall(John)
        May have more than two arguments
              giving(John, Mary, present)
   Arguments are not limited to constants, i.e. relations can take on other
    relations as arguments
        [Diagram: a 2nd-order relation taking a 1st-order relation as an
         argument]
Hierarchy of frames

   Frames are typically arranged in a hierarchy in
    which "lower" frames can inherit values from
    "higher" frames in the hierarchy.
   Properties and procedures for "higher" frames
    are more or less fixed whereas "lower" frames
    may be filled with more contingent information.

   Machine (superclass) -> Computer (class) -> Dell, Mac (instances)
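Inheritance down the hierarchy can be sketched as a walk up the parent chain; the Machine/Computer/Dell/Mac hierarchy follows the slide, while the slot values are invented for illustration:

```python
# Each frame names its parent; a lookup that misses walks up the hierarchy.
frames = {
    "Machine":  {"parent": None,      "powered": True},
    "Computer": {"parent": "Machine", "has-parts": ["CPU", "memory"]},
    "Dell":     {"parent": "Computer"},
    "Mac":      {"parent": "Computer", "os": "macOS"},
}

def lookup(frame, slot):
    # Walk up the hierarchy until some frame fills the slot.
    while frame is not None:
        if slot in frames[frame]:
            return frames[frame][slot]
        frame = frames[frame]["parent"]
    return None

print(lookup("Dell", "powered"))    # True: inherited from Machine
print(lookup("Dell", "has-parts"))  # ['CPU', 'memory']: inherited from Computer
print(lookup("Mac", "os"))          # macOS: filled locally
```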
Structured representations
   Structured representations are more complex than the
    spatial and featural models presented before
   They contain explicit links between their arguments, and
    such connections must be taken into account by the
    processes acting on those representations
       Structural alignment
       Production systems
   Structural alignment is a method for comparing pairs of
    structured representations
   Production systems utilize structural representations for
    carrying out complex activity
Production Systems
   A frame system also attempts to integrate
    procedural notions about how to retrieve
    information and achieve goals
   Example of a production system
     Condition (if)
     Action (then)

       IF location(?agent, edge(?street)) and
          not(busy(?street))
       THEN cross(?agent, ?street)

     ?s denote variables
          Take on some value
           based on current contents
           of working memory
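The street-crossing rule can be sketched with explicit variable binding against working memory; the working-memory format and the matching code are an invented sketch, not a real production-system engine:

```python
working_memory = {
    "location": ("John", "edge(Main St)"),
    "busy": {"Elm St"},  # streets currently busy
}

def fire_crossing_rule(wm):
    # IF location(?agent, edge(?street)) and not(busy(?street))
    # THEN cross(?agent, ?street)
    agent, place = wm["location"]          # bind ?agent
    if place.startswith("edge(") and place.endswith(")"):
        street = place[len("edge("):-1]    # bind ?street
        if street not in wm["busy"]:       # condition: not(busy(?street))
            return ("cross", agent, street)  # the rule's action
    return None  # condition not met, rule does not fire

print(fire_crossing_rule(working_memory))  # ('cross', 'John', 'Main St')
```

If Main St were in the busy set, the condition would fail and no action would be produced.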
Use of structured representations
   Perception
      E.g. Biederman's recognition by components model
      Object representations consist of geons and are combined using
       spatial relations among the geons
   Language and reasoning
      Verbs themselves involve structured representations whose
       relations are specified by syntax
            So we can go from John loves Mary to Mary loves John
       Understanding stories
            Retain gist, extract meaning
            Knowledge structure allows us to go beyond information provided in simple
             stories
                  Mike went to the Oriental Garden and ordered some food. He ate it and left.
                  Schemas
Structured representations
   Strengths
       Contain explicit information about relations among elements
       Allows for flexibility and complexity among elements and
        relations
       Provides models for relational information
           Structural alignment, production systems

   Weakness
       Computationally complex and time consuming
          E.g. complex problem solving
       Not so good for modeling cognitive processes that operate
        quickly
          Low level perception, attention
Dynamical Cognitive
Psychology
Anti-representationalist arguments
   The previous explanations for representation are
    steeped in the view of the computational mind
       Brain as computational device (like a computer) storing
        information in some form of representation
   Problem
     Although successful, we are still very much in the dark
      about exactly how representation actually occurs, even at
      the simplest levels
     If the computational view is correct, one would think we'd have
      more accurate ways to model such representation
           Still no 'intelligent' machines in 50 years of progress
   Other views
       Situated action
       Dynamical systems
Situated action
   Situated action
     Cognitive processing cannot be separated from the
      environment in which it takes place
          Meanwhile most research in cog sci looks only at what's
           going on internally with regard to the single problem solver
                E.g. problem solving
                    Insight, applying previous solutions to current problem
   From the SA perspective, knowledge is
    constructed in response to a situation by an
    agent
     Behavior is contextualized
Situated action
   How a problem might be solved will be represented
    differently according to the environment in which it must
    be solved and the tools which are available to solve it
   Problem to be solved might be different than the abstract
    representation of it
   In terms of meaning:
       “Thus, depending on the context, a Coke bottle can be used to
        quench thirst, or as a weapon, a doorstop, or a vase. That is, its
        meaning depends on the context.” Glenberg, 1997
   Not necessary to abandon representations as presented
    per se but some proponents of the situated action view
    suggest so
Dynamical Systems approach
   The cognitive system is not a discrete sequential
    manipulator of static representational structures; rather,
    it is a structure of mutually and simultaneously
    influencing change.
   Cognitive processes do not take place in the arbitrary,
    discrete time of computer steps; rather, they unfold in the
    real time of ongoing change in the environment, the
    body, and the nervous system.
   Most of the approaches in the social sciences focus on
    commonalities among individuals
       Everything else is “error”
   DS suggests there is meaning to be found in this “noise”
    (individual differences), and that it should be
    incorporated in any model of cognition
Dynamical Cognitive Hypothesis
   The dynamical approach at its core is the application of
    the mathematical tools of dynamics to the study of
    cognition.
   Natural cognitive systems are dynamical systems, and
    are best understood from the perspective of dynamics.
   Perhaps the most distinctive feature of dynamical
    systems theory is that it provides a geometric form of
    understanding
       Behaviors are thought of in terms of locations, paths, and
        landscapes in the phase space of the system
Terminology
   System
        A set of interacting and changing aspects of the world
   State
        How the system is at a given time
   State space
        The totality of all the states the system might take on
   Trajectory
        A curve connecting temporally successive points in a state space.
   Attractor
         Limit sets toward which all nearby trajectories tend.
   Basin
        A region of the state space containing all trajectories which tend to a
         given attractor
   Behavior
        The change in the system over time
        Sequence of points in state space
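The terms above can be made concrete with a minimal sketch (my own toy example, not from the lecture): a one-dimensional system whose single attractor is the fixed point of a simple update rule.

```python
# Toy one-dimensional dynamical system: state x, update rule x -> 0.5*x + 1
def step(x):
    return 0.5 * x + 1.0

def trajectory(x0, n):
    """The sequence of temporally successive states starting from x0."""
    states = [x0]
    for _ in range(n):
        states.append(step(states[-1]))
    return states

# The fixed point x* = 2 (where step(2) == 2) is an attractor: every
# trajectory tends toward it, so its basin is the whole state space.
for x0 in (-10.0, 0.0, 7.0):
    print(round(trajectory(x0, 50)[-1], 6))  # 2.0 each time
```

The system's "behavior" is just the printed sequence of states; different starting points trace different trajectories through the same basin.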
Dynamical systems
   Dynamical systems
       Systems with numerical states that change over time
       Real dynamical system
          Any concrete object that changes over time
       Mathematical dynamical system
          An abstract structure which can be used to describe the
           change of a real system through a series of states
   To say that cognitive systems are dynamical systems
    means that:
       A cognitive system is a real dynamical system
       This system instantiates some mathematical dynamical system
        that we can study to explain the properties of the real system
Chaos theory
   Chaos theory describes complex systems, i.e.
    those whose parts are highly interconnected
      May be essentially unpredictable
       (e.g., complex weather systems; Lorenz, 1963)
      Minute input changes may have big effects (the “butterfly
       effect”)
      Self-adjust to “steady states”
      Sometimes have “catastrophes” (e.g., an avalanche)

   Lawful systems can be unpredictable
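A standard demonstration of the butterfly effect (a common textbook example, not taken from the lecture) is the logistic map: a fully lawful, deterministic rule whose trajectories from nearly identical starting states quickly diverge.

```python
# Logistic map x -> r*x*(1 - x); at r = 4 the system is chaotic.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.300000, 0.300001   # two states differing by one part in a million
max_gap = 0.0
for t in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The rule is deterministic ("lawful"), yet the tiny initial difference
# is amplified until the two trajectories are unrelated.
print(max_gap > 0.3)  # True: a minute input change had a big effect
```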
Example: Anxiety and performance
   The classic model
        Yerkes-Dodson's inverted U
    As arousal initially increases, alertness rises and
     performance improves
   With too much arousal,
    performance suffers
       Example: test taking
    However, the model is too simplistic (though it is still
     widely adhered to)
    The task involved, physical factors (e.g., caffeine, sleep),
     environmental factors, etc. can all affect how arousal and
     performance relate
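The classic inverted-U claim can be sketched as a toy function (my own parameterization, purely illustrative): performance peaks at moderate arousal and falls off on either side.

```python
# Toy inverted-U: performance peaks at a moderate "optimum" arousal.
def performance(arousal, optimum=0.5):
    return max(0.0, 1.0 - 4.0 * (arousal - optimum) ** 2)

low, moderate, high = performance(0.1), performance(0.5), performance(0.9)
print(low < moderate and high < moderate)  # True: moderate arousal is best
```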
A cusp catastrophe model of anxiety,
performance, and cognitive worry
        If cognitive anxiety is low, then the
         performance effects of physiological arousal
         will be small; but if cognitive anxiety is high,
         the effects will be large and sudden.
[Figure: cusp catastrophe surfaces at low, moderate, and high
cognitive anxiety; with higher cognitive anxiety the surface shows a
sudden performance drop and a separate recovery path.]
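The cusp model can be simulated in its canonical mathematical form (my own parameter choices; the variable names are illustrative, not from the lecture): performance is a state x that settles into a minimum of a cusp potential, with physiological arousal as the normal factor a and cognitive anxiety as the splitting factor b. Sweeping arousal up and then back down produces the sudden drop and the separate recovery path only when cognitive anxiety is high.

```python
def settle(x, a, b, steps=2000, lr=0.01):
    """Relax state x along the cusp dynamics dx/dt = -x**3 + b*x + a."""
    for _ in range(steps):
        x += lr * (-x**3 + b * x + a)
    return x

def hysteresis(b):
    """Max gap between the rising-arousal and falling-arousal paths."""
    a_values = [i / 25.0 for i in range(-75, 76)]   # arousal from -3 to 3
    x, up = -2.0, []
    for a in a_values:                  # arousal rising
        x = settle(x, a, b)
        up.append(x)
    down = []
    for a in reversed(a_values):        # arousal falling (recovery path)
        x = settle(x, a, b)
        down.append(x)
    down.reverse()
    return max(abs(u - d) for u, d in zip(up, down))

print(hysteresis(b=-1.0) < 0.05)  # low cognitive anxiety: smooth, no jump
print(hysteresis(b=3.0) > 1.0)    # high: sudden drop, separate recovery path
```

With b low the state tracks arousal smoothly in both directions; with b high the state clings to one sheet of the folded surface until it collapses to the other, and recovery follows a different path back.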
Dynamical Systems Approach
   In terms of representation, the DS explanation can be contrasted with the
    computational perspective outlined throughout this lecture
   The traditional view posits cognitive systems that act on knowledge that is
    stored in some form i.e. represented
        Symbols are manipulated
        Manipulations are computational in nature
    The DS approach can model processes without speaking directly to
     representation, though representation can still be incorporated
        E.g. a particular state may be a representation
   The computational view suggests that the rules governing behavior of the
    system are defined over the entities that have representational status
   The DS view is that the rules are defined over numerical states
        E.g. recalling or recognizing an item might be a matter of a process settling into
         its attractor
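The attractor account of recall in the last bullet can be illustrated with a tiny Hopfield-style network (a standard construction; the network size and single stored pattern are my own choices): a corrupted cue is a point in the stored memory's basin, and updating the numerical state settles it back onto the memorized pattern.

```python
import random

random.seed(0)
n = 32
pattern = [random.choice([-1, 1]) for _ in range(n)]   # the stored "memory"

# Hebbian weights for one stored pattern: w[i][j] = p[i]*p[j], w[i][i] = 0
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

def settle(state, sweeps=5):
    """Asynchronous updates: the state flows to the nearest attractor."""
    state = list(state)
    for _ in range(sweeps):
        for i in range(n):
            field = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if field >= 0 else -1
    return state

cue = list(pattern)
for i in range(8):              # corrupt a quarter of the cue
    cue[i] = -cue[i]

print(settle(cue) == pattern)   # True: recall = settling into the attractor
```

No rule here mentions the pattern explicitly; the dynamics are defined over numerical states, and "recall" is just the trajectory reaching the attractor.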
Representation summary
 The older models remain viable as
  explanations for cognitive processing
 Newer approaches arose as challenges
  reflecting current research into how
  the mind works
 A combination of the computational
  approach and its alternatives may
  yield the best explanation