
Answering WHY Questions in a Closed Domain
from a Discourse Model

Rodolfo Delmonte
University Ca’ Foscari - Venice
Emanuele Pianta
FBK - Trento
    OUTLINE
   Representing a Discourse Model
       Inds, Sets, Class, Infons, Locs, Card
   Building a Discourse Model with GETARUNS
       System Architecture
       Asserting Discourse Entities
   Questions/Answering from a DM
       Entity properties pool
       Wh-questions
       Yes-No questions
       Why-questions
    Discourse Model
   A set of entities and the relations between them, as
    “specified” in a discourse.
   Discourse Entities can be used as Discourse
    Referents.
   Entities and relations in a Discourse Model can be
    interpreted as representations of the cognitive objects
    of a mental model (cf. Johnson-Laird).
   Representation inspired by Situation Semantics.
   Implemented as Prolog facts.
Representing a Discourse Model
   Any piece of information is added to the DM as an infon.
   An infon consists of a relation name, its arguments, a
    polarity (yes/no), and a pair of indices anchoring the
    relation to a spatio-temporal location.
       EX: meet, (arg1:john, arg2:mary), yes, 22-sept-2008, venice

   Each infon has a unique identifier and can be referred
    to by other infons.
   Infons are implemented as Prolog facts (infon/6)
      EX: infon(1, meet, [john, mary], 1, 22-sept-2008,
       venice).
Kinds of Infons
   Full infons
      Situations: sit/6

      Facts: fact/6

      Complex infons: have other sit/fact as argument


   Simplified infons
      Entities: ind/2, set/2, class/2

      Cardinalities: card/3

      Membership: in/3

      Spatio-temporal rels: includes/2, during/2, …
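As an illustration only (the original system is implemented in Prolog, not Python), the infon/6 layout described above can be mirrored as a small record type; all names here are our own:

```python
from dataclasses import dataclass

# Hypothetical Python mirror of the Prolog infon/6 layout: a unique
# identifier, a relation name, its arguments, a polarity, and a pair of
# spatio-temporal anchors. Arguments may contain other infon identifiers,
# which is how complex infons refer to embedded sits/facts.
@dataclass(frozen=True)
class Infon:
    infon_id: str   # unique identifier, e.g. "inf1"
    relation: str   # e.g. "meet"
    args: tuple     # e.g. ("john", "mary")
    polarity: int   # 1 = yes, 0 = no
    time: str       # temporal anchor, e.g. "22-sept-2008"
    space: str      # spatial anchor, e.g. "venice"

inf1 = Infon("inf1", "meet", ("john", "mary"), 1, "22-sept-2008", "venice")
```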
Entities, Cardinalities, Membership
   Entities are represented in the DM without any
    commitment about their “existence” in reality.
       Individual entities (“John”): ind(infon1, id5).
       Extensional plural entities (“his kids”): set(infon2, id6).
       Intensional plural entities (“lions”): class(…, id7).

   Cardinality (only for sets: “four kids”)
       card(…, id6, 4).

   Membership (between individual and sets: “one of
    them”)
       in(…, id5, id6).
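The simplified infons ind/2, set/2, card/3 and in/3 can be sketched (in Python, as an assumption about the intended semantics, not GETARUNS code) as tuples in a flat fact list:

```python
# Minimal sketch of the simplified infons as tuples: ("ind", infon, id),
# ("set", infon, id), ("card", infon, set_id, n), ("in", infon, ind, set).
facts = [
    ("ind",  "infon1", "id5"),         # "John" is an individual
    ("set",  "infon2", "id6"),         # "his kids" is an extensional set
    ("card", "infon3", "id6", 4),      # the set has four members
    ("in",   "infon4", "id5", "id6"),  # "one of them": id5 is a member of id6
]

def members_of(set_id, facts):
    """Collect the individuals asserted to belong to a set via in/3."""
    return [f[2] for f in facts if f[0] == "in" and f[3] == set_id]

def cardinality(set_id, facts):
    """Look up the card/3 infon for a set, if any."""
    for f in facts:
        if f[0] == "card" and f[2] == set_id:
            return f[3]
    return None
```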
State of Affairs: sit vs fact
   A sit is an abstraction (a representation or mental construct):
    no commitment is made with regard to its correspondence with reality
        May be used to represent the objects of propositional attitudes: “He
         would like to sleep”, “When she sleeps, she’s happy”

   A fact is like a sit, but accompanied by a commitment
    about correspondence to reality
        May be used to represent “objective” statements: “He slept all
         night”
   Arguments of facts and sits are labeled by their semantic role:
        fact(inf4, sleep, [theme:id43], 1, time2, loc5)
 Spatio-temporal locations
 Infons are “situated” in spatio-temporal
  locations
 Related by a number of specific relations
       cfr. Allen temporal relations
 A special univ location is used to
  represent the universal location (including
  all other locations)
 E.g. instance-of relations or generic
  statements are situated in the univ location
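The special status of the univ location can be sketched as follows (a Python illustration under our own assumptions; the relation names mirror the slide, not the actual implementation):

```python
# Direct spatio-temporal inclusion facts, cf. includes/2 above.
includes = {("univ", "loc1"), ("univ", "loc2"), ("loc1", "loc2")}

def located_in(inner, outer, includes):
    """True if `outer` covers `inner`: the special `univ` location
    includes every other location, so generic statements situated
    in univ hold everywhere; otherwise check direct inclusion."""
    return outer == "univ" or inner == outer or (outer, inner) in includes
```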
An Example
 “John slept in Venice”

 ind(inf1, id1).
 loc(inf2, loc1).
 name(inf3, id1, “John”).
 name(inf4, loc1, “Venice”).

 fact(inf5, sleep, [theme:id1], 1, time1, loc1).

 before(time1, speaker_time).
Complex Infons
“John wants to go to Venice tomorrow”
  ind(inf1, id1).
  loc(inf2, loc1).
  name(inf3, id1, “John”).
  name(inf4, loc1, “Venice”).
  fact(inf5, want, [theme:id1, prop:inf6], 1, time1, loc2).
  sit(inf6, go, [theme:id1, goal:loc1], 1, time2, loc3).
  meets(time1, speaker_time).
  after(time2, time1).

  NB: loc2 and loc3 are left undefined
SEMANTIC PROCESSING
   Subdivision of Tasks
     Referring Expressions
     Clause Level Properties

   External Pronouns + Definite Expressions
       To check for disjointness
   Informational Structure at Propositional
    level
        Factivity, Discourse Relations, Relevance,
         Subjectivity
SEMANTIC PROCESSING
   Spatiotemporal Locations
       Producing Main Spatial Location
   Updating Spatial Location
       Whenever a new location is asserted either
        as argument or adjunct of main clause
   Inferring same location
     Use of pronominal deictics
     Inferential processes to derive semantic
      relations
          • Meronymic or hyponymic relations prevent update
SEMANTIC PROCESSING
   Spatiotemporal Locations
       Producing Main Temporal Location
   Updating Temporal Location
       Whenever a new location is asserted either as
        argument or adjunct of main clause
       This produces a new TIME FOCUS
       New temporal locations must be lexically expressed:
        tense is not sufficient and only constitutes a local
        temporal relation
   Inferring Same Location
       Use of pronominal deictics
       Inferential processes to derive semantic relations
          • Meronymic or hyponymic relations prevent update
SEMANTIC PROCESSING
 Centering and Topic Hierarchy
 External Pronouns
       DPV may decide coreference
   Creation of New Semantic Ids
     Individuals (Ind) for singular new entities
     Sets (Set) for plural new entities

     Classes (Class) for generic new entities

     Locations (Loc) for spatiotemporal main
      locations
SEMANTIC PROCESSING
   Indefinite NPs
       Are treated as Ind if not in opaque contexts
   Zero or bare singular/plural NPs
       Are treated as Class or Sets (with a fixed
        cardinality of 5, or more if knowledge of the
        world is available) depending on whether they
        have an arbitrary or generic reading
         • Computed on the basis of tense, mood, modality,
           adjunct temporal modifiers, etc.
SEMANTIC PROCESSING

    Definite NPs
      Are treated as given if they are part of the scenery
       and belong to Mutual Knowledge, Generics, or
       Common Knowledge of the World (see
       maintenance or instruction manuals)
      Collective or group singular definite NPs are
       computed as sets with a given cardinality
       (the army)
SEMANTIC PROCESSING
   Logical Form Mapping from DAGs
       Conjoined wffs with syntactic indices
       One for each f-structure
       Recursively at clause level
   Davidsonian eventuality structure
       Tripartite temporal structure
   Mapping from LF into situation semantic
    structures
       Conjoined wffs with semantic indices
       Recursively headed by a situation operator
    Entity Property Pool
 At the end of the computation each entity
  has been associated with a certain number of
  properties and relations
 This is what we call the EPP, which is
  created automatically by collecting all
  relations, properties and other facts from the
  discourse model that carry the same
  semantic ID, or that include or are included
  in that ID.
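The EPP collection step can be sketched in Python (a hedged illustration: the fact tuples mirror the fact/6 infons shown earlier, and the sample data is invented for the example):

```python
# Toy DM: ("fact", infon_id, relation, {role: semantic_id, ...}, polarity,
# time, space), mirroring fact/6. Contents are illustrative only.
dm = [
    ("fact", "infon20", "inst_of", {"ind": "id8", "class": "substance"},   1, "univ", "univ"),
    ("fact", "infon21", "isa",     {"ind": "id8", "class": "maple_syrup"}, 1, "id1",  "id7"),
    ("fact", "id14",    "make",    {"agent": "id8", "theme_aff": "id13"},  1, "t3",   "id7"),
]

def entity_property_pool(sem_id, dm):
    """Collect every DM fact whose argument list carries the given
    semantic ID: the Entity Property Pool for that entity."""
    return [f for f in dm if sem_id in f[3].values()]
```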
    Question Answering
 Answering questions from the DM using the
  EPPs
 As a first step we produce a new DM for the
  question where the facts are labeled as
  q_facts
 Then we extract the main relation and the
  Focus attribute/s
 These attributes will determine the type of
  answer and the type of search in the EPPs
    Question Answering
 As a first step we look for a semantically
  similar/identical relation in the EPP
 Then, according to the question type, we
  extract arguments or adjuncts of the main
  predicate
 Finally we look for properties/attributes
  asserted in the question and try to match
  them with the properties associated to the
  entity found, whether direct or inherited
    Question Answering
 The focus item is recovered by means of the
  instantiation of the variable associated to the
  following q_facts:
      q_fact(K,focus,[arg:Id],1,_,_),
      q_fact(_,isa,[_:Id,_:Focus],1,A,B),
 Note that the polarity is forced to be
  equal to 1, that is, positive
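The focus-recovery step on the two q_facts above can be sketched as follows (a Python illustration under our own assumptions about the q_fact shapes; not the original Prolog):

```python
# Question mini-DM: ("q_fact", infon_id, relation, {role: id}, polarity,
# time, space), mirroring the two q_facts shown on the slide.
q_dm = [
    ("q_fact", "q1", "focus", {"arg": "id1"},                   1, None, None),
    ("q_fact", "q2", "isa",   {"ind": "id1", "class": "reason"}, 1, "a", "b"),
]

def question_focus(q_dm):
    """Instantiate the focus: find the positive `focus` q_fact, then the
    positive `isa` q_fact sharing its ID, and return the focus class."""
    for f in q_dm:
        if f[2] == "focus" and f[4] == 1:           # polarity forced to 1
            focus_id = f[3]["arg"]
            for g in q_dm:
                if g[2] == "isa" and g[4] == 1 and focus_id in g[3].values():
                    return g[3].get("class")
    return None
```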
    Answer Generation
 The first predicate fired by the system is
get_focus_arg(Focus, Pred, Args, Answer,
  True-NewFact),
 which returns the content of the
  answer in the variable Answer and the
  governing predicate in Pred. These are then
  used to generate the actual surface form of
  the answer. Args and True-NewFact are
  used in case the question is a complete or
  yes/no question.
    Answer Generation
   In order to generate the answer, tense and mood
    are searched for in the DM; then a logical form is
    built, as required by the generator, and the
    predicate build_reply is fired:

   get_focus_tense(T, M),
   Form=[Pred,T,M,P,[D]],
   build_reply(Out,Focus, Form),
   !.
This predicate will actually generate the answer.
      Answer Generation
   We present general wh-questions first. They
    include all types of factoid questions and also "How"
    questions. The main predicate looks for an appropriate
    linguistic description to substitute for the wh-word in its
    argument position in the appropriate PAS:

get_focus_arg(who, Pred, Ind, D1, NewP):-
  q_getevents(A,Pred),
  q_fact(X,Pred,Args,1,_,L),
  q_role(Y,X,Z,Role),
  answer_buildarg(Role, Pred, [Idx:Z], D, Facts),
  select_from_pred(Pred,Role,Facts,NewP,D1),
  !.
      Answer Generation
   We use a different procedure when the question's
    governing predicate is a copulative verb, because we
    have to search for the associated property in the
    question DM (QDM), as follows:
   copulative(Pred),
     q_fact(X,Pred,[prop:Y],1,_,_),
     q_fact(Y,Prop,[_:K,Role:Type],1,_,_),
     q_fact(_,inst_of,[_:K,_:Z],P,T,S),
     q_get_ind_des(K,Propp,Ty),
   Copulative predicates have a proposition as their
    argument, and the verb itself is not useful, being
    semantically empty.
   Answer Generation
The predicate corresponding to the proposition is
 searched for through the infon "Y" identifying
 the fact. Once we have recovered the Role and
 the linguistic description of the property
 "Propp" indicated by the wh-question, we pass
 them to the following predicate and search for
 the associated individual in the DM:
answer_buildarg(Role, Pred, [Idx:Propp], Answer, Facts)
    Answer Generation
   Suppose the wh-question is a "Where" question
    with a copulative verb: the role will be a location and
    Propp will be "in". "How" copulative questions
    search for "class" properties, i.e. not for names
    or individuals:

    q_fact(X,how,[_:Y],1,_,_),
    q_fact(Q,isa,[_:K,class:Pred],1,_,_),
    q_fact(_,inst_of,[_:K,_:Z],P,T,S)


   Semantic roles are irrelevant in this latter case: the
    only indication we use for the search is a dummy
    "prop" role.
    Answer Generation
   By contrast, when a lexical verb is the governing
    predicate, we need to use the PAS and the
    semantic role associated to the missing argument
    to recover the appropriate answer. We also need
    different semantic strategies depending on whether
    an argument is questioned while another argument
    is expressed in the question (what, whom, who);
    an adjunct is questioned (where, when, how, etc.);
    or the predicate is intransitive, an argument is
    questioned, and no additional information is
    available.
     Answer Generation
   Consider a typical search for the answer argument,

answer_buildarg(Role, Pred, Tops, Answer, Facts):-
 on(Ind:Prop, Tops),
 entity(Type,Id,Score,facts(Facts)),
 extract_properties(Type,Ind,Facts,Def,Num, NProp,Cat),
 select_allrole_facts(Role,Ind,Facts,Pred, PropLoc),
 Answer=[Def,nil,Num,NProp,Cat,PropLoc],
 !.
   Here, "extract_properties" checks for the appropriate
    semantic type and property by picking one entity and
    its properties at a time. When it succeeds, the
    choice is further checked and completed by the call
    to "select_allrole_facts".
    Answer Generation
   extract_properties searches for individuals or sets
    filling a given semantic role in the predicate-
    argument structure associated with the governing
    predicate.
   In addition, it has the important task of setting
    functional and semantic features for the generator,
    like Gender and Number. This is paramount when a
    pronoun has to be generated instead of the actual
    basic linguistic description associated with a given
    semantic identifier. In particular, Gender may
    already be explicitly associated in the DM with the
    linguistic description of a given entity, or it may be
    derived from WordNet or other linguistic processors
    that look at derivational morphology.
     Answer Generation
   The call topichood_stack looks for static definiteness
    information associated with the linguistic description in the DM.
    Proper names are always "definite". Common nouns, on the
    contrary, may be used in definite or indefinite ways. This
    information may be modified by the dialogue intervening
    between user and system and be recorded in the user model.
    The decision is ultimately taken by the "set_def" procedure,
    which looks into the question-answering user-model knowledge
    base, where previous mentions of the same entity may have
    been recorded; otherwise it records the mention, by means of
    update_user_model, for use in further user-system
    interactions. If the entity's semantic identifier is already
    present, Def is set to "definite"; otherwise it remains as
    originally set in the DM.
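The set_def decision can be sketched like this (a Python approximation of the behaviour just described; the function shape and the user-model-as-set representation are our assumptions):

```python
def set_def(sem_id, dm_def, user_model):
    """Decide definiteness for an entity: if its semantic identifier was
    already recorded in the user model from a previous mention, force
    "definite"; otherwise keep the DM setting and record the mention
    (the role played by update_user_model) for later turns."""
    if sem_id in user_model:                 # previously mentioned entity
        return "definite", user_model
    user_model = user_model | {sem_id}       # record this first mention
    return dm_def, user_model
```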
Computing Answers to WHY
questions
   Why questions are usually answered by events, i.e.
    complete propositions. They in general constitute
    cases of rhetorical clause pairs labeled either as
    Motivation-Effect or as Cause-Result. In
    [Delmonte et al. 2007], causal relations
    have been further decomposed into the following
    finer-grained subprocesses:
       •   Cause-Result
       •   Rationale-Effect
       •   Purpose-Outcome
       •   Circumstance-Outcome
       •   Means-Outcome
Computing Answers to WHY
questions
   Consider now the pieces of knowledge needed to
    build the appropriate answer to the question "why is
    the tree called sugar maple tree?". The sentences
    involved in reconstructing the answer are:
               Maple syrup comes from sugar maple trees.
            At one time, maple syrup was used to make sugar.
            This is why the tree is called a "sugar" maple tree.

   In order to build the appropriate answer, the system
    should be able to build an adequate semantic
    representation for the discourse anaphora "This",
    which is used to relate the current sentence to the
    event chain of the previous sentence.
Computing Answers to WHY
questions
In the end, the correct answer would be
"Because maple syrup was used to make sugar"

which, as can easily be gathered, is the content of the
 previous complex sentence. Below is the portion
 of the DM representation needed to reconstruct the
 answer:
ind(infon19, id8)
fact(infon20,inst_of,[ind:id8,class:edible_animal],1,univ, univ)
fact(infon21, isa,[ind:id8,class:[maple_syrup]],1, id1, id7)
set(infon23, id9)
card(infon24, id9, 5)
fact(infon25, sugar_maple, [ind:id10], 1, id1, id7)
fact(infon26, of, [arg:id10, specif:id9], 1, univ, univ)
fact(infon27,inst_of,[ind:id9,class:plant_life],1,univ, univ)
fact(infon28, isa, [ind:id9, class:tree], 1, id1, id7)
class(infon43, id13)
fact(infon44,inst_of,[ind:id13,class:substance],1,univ, univ)
fact(infon45, isa, [ind:id13, class:sugar], 1, id1, id7)
fact(id14,make,[agent:id8,theme_aff:id13],1, tes(finf_m3), id7)
fact(infon48,isa,[arg:id14,arg:ev],1,tes(finf_m3), id7)
fact(infon49, isa, [arg:id15, arg:tloc], 1, tes(finf_m3), id7)
fact(infon50, pres, [arg:id15], 1, tes(finf_m3), id7)
fact(infon51,time,[arg:id14,arg:id15], 1, tes(finf_m3), id7)
fact(id16,use,[theme_unaff:id8,prop:id14], 1, tes(sn5_m3), id7)
_______________________
fact(id21,call,[actor:id9, theme_bound:id9], 1, tes(f1_m4), id7)
ent(infon61, id18)
fact(infon62,prop,[arg:id18,
        disc_set:[id16:use:[theme_unaff:id8, prop:id14]]],
         1, id1, id7)
ind(infon63, id19)
fact(infon66, inst_of, [ind:id19, class:abstract], 1, univ, univ)
fact(infon67, isa, [ind:id19, class:reason], 1, id1, id7)
fact(infon81, in, [arg:id21, nil:id19], 1, tes(f1_m4), id7)
fact(infon83, reason, [nil:id18, arg:id19], 1, id1, id7)
fact(id23, be, [prop:infon83], 1, tes(sn10_m4), id7)
    Conclusions
 Answering from a Discourse Model is very
  precise and very simple
 Except for WHY questions, which require
  special provision to encode event
  coreference
 The use of Semantic Roles is paramount…
 But it makes the machinery somewhat brittle

								