Conceptual Role Semantics

And a model for cross-system translations
Presented by Ellie Hua Wang
           The background
• Traditional philosophy of language: meaning as a symbol-world relation
• The Frege problem: Hesperus and Phosphorus
• Philosophy of mind: the content of thought as a concept-world relation
• External grounding account vs. conceptual web account
• The Putnam problem: the Twin Earth case
 Conceptual role semantics (CRS)
• The meanings of expressions (concepts)
  depend on their connections to each other
  in a system
• Defenders in philosophy: Wilfrid Sellars (1963), Ned Block (1986, 1999),
  Gilbert Harman (1974, 1987), Michael Devitt, Brian Loar, William Lycan,
  and Hartry Field
Two theories for CRS in Philosophy
• Two responses to the Putnam problem:
• Two-factor semantic theory (Block 1986,
  Field 1977, Lycan 1984): a theory of
  meaning consists of two components, a
  theory of truth and CRS.
• One-factor (long arm) semantic theory
  (Harman 1987): the conceptual roles
  reach out into the world of referents
        Applications of CRS
• Linguistics - Saussure (1915, 1959) :
  concepts are negatively defined
• Philosophy of Science - Kuhn (1962) :
  incommensurability thesis
• Psychology - Barr and Caplan (1987),
  Goldstone (1993, 1995, 1996) : concepts
  are frequently characterized by their
  associative relations to other concepts
         More applications
• Computer science:

 - Lenat and Feigenbaum (1991)
 - Landauer and Dumais (1997) – Latent
    Semantic Analysis
 - Rapaport (2002) – SNePS
   External grounding account
• Are roles in a conceptual network sufficient
  for meaning?
  – Symbol grounding problem
  – Putnam’s problem
• Arguments from Psychology and
  Computer Science
          Criticisms of CRS

• Concept identity - a problem with meaning
  holism
• Concept similarity
• The problems with the two theories
       Cross-system translation
         (Goldstone and Rogosky 2002)
• The notion of Conceptual Correspondence: two
  concepts correspond to each other if they play
  equivalent roles within their systems
• A neural network ‘ABSURDIST’ (Aligning
  Between Systems Using Relations Derived
  Inside Systems for Translation) provides a
  formal method for deciding conceptual
  correspondence across systems solely on the
  basis of relations between concepts within a
  system
              ABSURDIST
• Goal:
     - to illustrate the sufficiency of a
  conceptual web account for translating
  between systems
     - to indicate synergistic interactions
  between within-system information and
  extrinsic information
             ABSURDIST
• Does not connect concepts to the external
  world
• Does not aim to create rich translations between systems; it explores only
  the simplest representations of concepts
• Is not a complete model of meaning
• Is not a simulation of human translation
  ABSURDIST – input
• Two 2D proximity matrices, one for each system (sketched below)
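A minimal sketch of this input format, assuming the 2D test setup described on the later assessment slides; the concepts, coordinates, and noise level are invented for the example:

```python
import numpy as np

# Hypothetical input: each system is a handful of concepts placed as points
# in a 2D space, with system B a slightly distorted copy of system A.
# The only input the alignment needs is the two within-system proximity
# (distance) matrices; no cross-system links are provided.
system_a = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9], [0.2, 0.7]])
system_b = system_a + np.random.normal(0.0, 0.02, system_a.shape)

def proximity_matrix(points):
    """Pairwise Euclidean distances among the 2D points of one system."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

dist_a = proximity_matrix(system_a)   # 4 x 4 matrix for system A
dist_b = proximity_matrix(system_b)   # 4 x 4 matrix for system B
```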
ABSURDIST – the algorithm
ABSURDIST – the algorithm (cont.)
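A minimal, hedged sketch of the kind of correspondence-unit dynamics Goldstone and Rogosky (2002) describe: each unit C[q, x] links element q of system A to element x of system B; units whose within-system distances agree excite one another, units competing for the same element inhibit one another, and activations are nudged up or down over many iterations. The exponential similarity term, the normalizations, and the parameter values here are illustrative assumptions, not the published equations.

```python
import numpy as np

def absurdist_sketch(dist_a, dist_b, n_iter=200, beta=1.0, chi=1.0,
                     rate=0.1, init_units=None):
    """Hedged sketch of ABSURDIST-style correspondence-unit dynamics.

    dist_a, dist_b -- within-system distance matrices for systems A and B.
    beta, chi, rate and the exponential similarity used below are
    illustrative choices, not the parameters of the published model.
    """
    n, m = dist_a.shape[0], dist_b.shape[0]
    # One correspondence unit C[q, x] per (A_q, B_x) pair, optionally seeded
    # with extrinsic biases, otherwise started at a neutral 0.5.
    C = np.full((n, m), 0.5) if init_units is None else np.array(init_units, dtype=float)

    for _ in range(n_iter):
        R = np.zeros((n, m))   # excitation from relationally consistent units
        I = np.zeros((n, m))   # inhibition from units competing for q or x
        for q in range(n):
            for x in range(m):
                support = 0.0
                for r in range(n):
                    for y in range(m):
                        if r == q or y == x:
                            continue
                        # (r, y) supports (q, x) to the extent that the
                        # within-system distances D(A_q, A_r) and D(B_x, B_y) agree.
                        support += C[r, y] * np.exp(-abs(dist_a[q, r] - dist_b[x, y]))
                R[q, x] = support / max((n - 1) * (m - 1), 1)
                I[q, x] = (C[q, :].sum() + C[:, x].sum() - 2 * C[q, x]) / max(n + m - 2, 1)

        net = beta * R - chi * I
        # Positive net input pushes an activation toward 1, negative toward 0.
        C = np.where(net > 0, C + rate * net * (1 - C), C + rate * net * C)

    # Read off the translation: A_q is paired with the B_x that maximizes C[q, x].
    return C
```

The nested loops cost O(n²·m²) per iteration, which is acceptable for systems of the small sizes discussed in these slides; the row-wise argmax of the returned matrix gives the proposed translation.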
ABSURDIST – assessment
• The distance D(x, y) between every pair of elements within a system is
  computed from the elements' coordinates:
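A minimal reconstruction, assuming D is the plain Euclidean distance in the 2D space used to generate the test systems:

D(x, y) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2}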
ABSURDIST’s tolerance to distortion
• The results also show that the algorithm's ability to recover correspondences
  generally increases as a function of the number of elements in each system,
  at least for small levels of noise.
• Performance gradually deteriorates with added noise, but the algorithm is
  robust to at least modest amounts of noise.
• Performance is far better than chance.
ABSURDIST’s tolerance to distortion (cont.)
• Partially correct translations are rarely obtained: with relatively few
  exceptions, ABSURDIST either finds all of the correct correspondences or
  finds none.
More results
• The number of iterations required for good performance is not appreciably
  affected by the number of items per system.
Different-size systems
• When different-sized systems are compared, ABSURDIST’s correspondences are
  still typically one-to-one, but not all elements of the larger system are
  placed in correspondence.
Subset matching
• ABSURDIST will draw correspondences between the three pairs of elements
  that share the majority of their roles, but not between the fourth,
  mismatching elements.
Indirect similarity relations
• If two elements within a system enter into the same set of similarity
  relations, they may still be disambiguated, because all correspondences
  are worked out simultaneously.
Integrating internal and external information
• One way to incorporate extrinsic biases into the system is to seed
  correspondence units with initial values (see the sketch below).
• The amount by which translation accuracy improves beyond what the intrinsic
  and extrinsic sources would predict separately generally increases as a
  function of system size.
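A minimal sketch of that seeding, reusing the hypothetical absurdist_sketch function above: a correspondence taken to be externally grounded starts with a high activation, the remaining units start at a neutral value, and the within-system relations settle the rest.

```python
import numpy as np

# Hypothetical extrinsic bias: suppose external grounding already links
# element 0 of system A with element 0 of system B.  That correspondence
# unit is seeded near 1; every other unit starts at a neutral 0.5.
n, m = 4, 4
seed = np.full((n, m), 0.5)
seed[0, 0] = 0.95

# C = absurdist_sketch(dist_a, dist_b, init_units=seed)
```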
Integrating internal and external information (cont.)
• Using only information intrinsic to a system results in better
  correspondences than using only extrinsic information.
• The superior performance of the network that uses both intrinsic and
  extrinsic information derives from its robustness in the face of noise.
              Conclusions
• Translations between two systems can be
  found using only information about the
  relations between elements within a
  system.
• Intrinsic relations suffice to determine cross-system translations, but if
  extrinsic information is available, more robust, noise-resistant
  translations can be found.
              Discussions
• The correspondence between concepts
  within an internal system and physically
  measurable elements of an external
  system
• The analytic/synthetic distinction
• Psychological distance determination
• The model's assumption that the systems have similar structures