Machine Translation - Extra Notes

                                       Chris Mellish

1     Introduction to Machine Translation
In these days of global communication and markets, there is a huge demand for transla-
tion internationally (for instance, within the EC there are many documents that are legally
required to be available in all the languages of the Community). For the huge amount of
technical translation needed, Machine Translation is probably the only hope (there is rather
less demand for literary translation, which MT is rather bad at).

1.1   Varieties of MT
Machine Translation involves (semi-)automatically translating sentences from a source natural
language to a target language. Different systems, however, vary in the extent to which human
beings may play a role in this process, with the main possibilities being:

(FA)MT - (fully automatic) Machine Translation.

HAMT - Human Aided MT. Humans can aid in translation, for instance, by editing the
   source text to make it simpler or more regular, or by editing the target text to remove
   translation errors.

MAHT - Machine Aided Human Translation. Machines can aid human translators by giving
   them easy access to useful dictionaries, similar examples that have been translated
   before, etc.

Obviously FAMT is the most ambitious of these. In practice, almost all machine translation
has to be post-edited by humans (as, in fact, is almost all human translation), and so the
difference between FAMT and HAMT is rather blurred. Similarly the distinction between
HAMT and MAHT can be a rather subtle one, depending on who is seen to be “in control”
of the translation process.

1.2   Why MT is Hard
MT is hard because structures in one human language often do not correspond in a simple way
to structures in another. This is especially the case when the languages come from different
families (e.g. English and Chinese vs English and French). Here are some kinds of difference
that can cause problems:

Information differences. One language may force more information to be expressed than
     another. For instance, to translate “the cat” into Spanish one would have to use “el
     gato” or “la gata”, according to the gender of the cat.

Ordering differences. One language may express things in a different order. For instance,
    “the lady likes the cheese” would be translated into French as “le fromage plait a la
    dame” (literally, “the cheese pleases to the lady”).

Packaging differences. One language may distribute information in a sentence in a different
     way than another. For instance, “he swam across the river” would translate into the
     French “il a traversé la rivière à la nage” (literally, “he crossed the river by swimming”).

(these examples all from Barnett et al, 1994).
    In general, arbitrary world knowledge can be needed to produce a correct translation. For
instance, the English pronoun “it” translates into different pronouns in French or German
according to whether the noun it is associated with is masculine or feminine. To translate “it”
correctly every time, you would have to know what it refers to, which cannot be determined
just from the sentence. Consider, for instance:

      A hollow vertical cylinder, of radius 2a and height 3a, rests on a horizontal ta-
      ble, and a uniform rod is placed within it with its lower end resting on the
      circumference of the base...

Similarly, imagine translating “The baby is in the pen” into a language where the different
possible meanings of pen (play pen vs writing pen) have different words. What knowledge of
the world would you have to have in order to get this right?

1.3   Brief History
Serious research on MT began in the 1950’s, and much of this early work paved the way for
important later developments in AI and Computational Linguistics. But optimism was un-
realistically high, and with time the complexity of the linguistic problems became apparent.
In 1966 the US sponsors of MT research (a committee called ALPAC) published an influ-
ential report which concluded that MT was slower, less accurate and twice as expensive as
human translation. Since that time, of course, the cost and speed of computers have changed
dramatically, changing the equation.
    The ALPAC report had the consequence of bringing to an end much serious MT research,
especially in the US. However, the need for MT was greater in some countries (for
instance, Canada), and so work continued to some extent. In the 1970’s famous systems
like TAUM-METEO and SYSTRAN (described below) were developed. In the late 1980’s
and 1990’s, further MT products have come onto the market, though mostly using rather
simple techniques. In the early 1980’s the EC, which had been making important use of
SYSTRAN, initiated its own large MT research programme, EUROTRA. This has, however,
not yielded the practical systems that were hoped for, and, although providing a stimulus
for good theoretical work in Computational Linguistics, has largely disappeared from view.
Current interest in MT remains undiminished, however, with a considerable interest shown
by Japanese companies and with a large spoken-language translation project, VERBMOBIL,
currently underway in Germany.

1.4   State of the Art
   • There is only one example of a FAMT system that has been operational - a system called
     TAUM-METEO, which translated English weather reports into French in Canada.

      This system explicitly rejected about 20% of inputs which it deemed to be outside its
      competence, but was left unattended to deal with all other inputs.

   • The MT systems in most wide-scale use (“production MT systems” such as SYSTRAN
     and WEIDNER) are often word replacement systems with huge dictionaries of phrases
     and idioms. They often incorporate monolithic, incomprehensible programs for dealing
      with special cases, which have taken many years to develop and which are very expensive
     to maintain. But they are extensively used (e.g. by the EC and companies like GM and
     Xerox) and are effective (with human post-editing).

   • So far, linguistically-based MT systems (for instance, the EC’s large EUROTRA project)
     have only produced interesting research prototypes. It is unclear whether they will ever
     compete with systems like SYSTRAN.

   • Humans can get very proficient at postediting MT, as long as only about 20% of the
     words need to be changed. So MT can save time and money without being very good.

1.5   Translation Methods
Figure 1 is a traditional way of showing the major approaches to MT. The “pyramid” shows
different ways of transforming a source text (on the left) to a target text (on the right).


                                     Interlingua

                              Transfer
                  Analysis                          Generation

                                 Direct Translation
        Source Text                                                      Target Text

                                 Figure 1: Translation Methods

The
height of the approach indicates the depth of analysis needed for the source text (and the
complexity of the generation task for the target text). Thus direct translation involves
mapping from source to target at the level of individual words, which requires relatively
little analysis. Transfer involves finding some more abstract level of structure at which
the correspondence between the languages can be stated (perhaps syntactic structure). The
interlingua approach requires such a deep level of analysis that the representation produced
is language-independent, and so no mapping is required to go from it to the input of the
generation process for the target language.
     Basically, direct translation was the basis of the very early MT systems, whereas (with
systems like EUROTRA and the METAL system of the University of Texas, which is now a
commercial product) transfer-based systems were generally agreed to be the way forward in
the 1980’s and early 1990’s. Interlingua-based approaches were tried and rejected in the early
days, but are now experiencing a revival, given the increasing sophistication of AI knowledge
representation and inference techniques.

1.5.1     Direct Translation
The idea of direct translation is to translate from source to target without any intermediate
representation. At the extreme end, this would be “word to word” translation in the way that
one might do with a dictionary. More complex versions involve a stage of morphological
analysis for the input, which takes place before the bilingual dictionary look-up, and a stage
of local reordering afterwards. Morphological analysis is useful because one wants to be able
to recognise many forms of a word that may only appear in root form in the dictionary (e.g.
“write”, “writes”, “writing”, “writer”). Local reordering is supposed to take some account
of the grammar of the target language in putting the target words in the right order. The
following examples of Russian-English direct translation illustrate the problems with such an
approach:

        My trebuem mira
        Direct trans: We require world
        Correct trans: We want peace

        Nam nužno mnogo uglja, železa, elektroenergii
        Direct trans: To us much coal is necessary, gland, electric power
        Correct trans: We need a lot of coal, iron and electricity
(problems with word order and translating ambiguous words are apparent).
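The word-replacement core of direct translation can be sketched in a few lines of Prolog.
This is only an illustration - the dictionary facts and predicate names here are invented,
and a real system would add morphological analysis and local reordering around the lookup:

```prolog
% Toy direct translation: replace each source word by a target word
% from a bilingual word list.  No reordering, no sense disambiguation.
word(my, we).
word(trebuem, require).
word(mira, world).          % 'mira' can also mean "peace"; a direct
                            % system has no way to choose the right sense

direct_translate([], []).
direct_translate([W|Ws], [T|Ts]) :-
    word(W, T),
    direct_translate(Ws, Ts).
```

The query direct_translate([my,trebuem,mira], T) then reproduces the faulty “We require
world” translation shown above.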

1.5.2     Translation via Structural Transfer
There are many possibilities for the level at which one can perform “transfer” between the
source language analysis and the target language translation. Here we concentrate on transfer
at the level of syntactic structure, because this seems to be the basis of most existing transfer
systems and because it is relatively straightforward to describe.
     The idea of structural transfer is that the source sentence is analysed to give a tree
representing its phrase structure. There are then transfer rules specifying how to translate
this tree into a phrase structure tree for the target language. In general, one rule will specify
how the top level of structure is translated and then the whole set of rules will be applied
again to determine how the subtrees are dealt with. Once the target tree has been derived,
it is a simple task to actually generate the target sentence from it.
     Figure 2 shows an example transfer rule, designed to handle cases like the “the lady likes
the cheese” sentence above. The rule says how to translate a simple English sentence with
main verb “likes” (the English structure is to the left of the ===> and the French structure is
to the right). In this sentence, NP1 would be the structure of “the lady” and NP2 the structure
of “the cheese”. NP1’ and NP2’ in the French structure are supposed to be the translations
of these two subtrees, and how these are translated would have to be determined by other
transfer rules.
     Structural transfer obviously does not solve those translation problems that require seman-
tic knowledge (for instance, the translation of English pronouns into French, or the translation

of a word that has several senses but only one syntactic category). However it does allow the
translation to capture important systematic differences in how sentences are built in the two
languages and to deal with words that may belong to more than one syntactic category.

                   S                                        S


          NP1                VP                   NP2’                VP

                     V                  NP2                   V                PP

                  likes                                    plait           P        NP1’


                                  Figure 2: Transfer: likes → plait

1.5.3    Translation via an Interlingua
An interlingua is a language-independent representation that is used as an intermediary be-
tween sentences of two languages. In practice, this can only be achieved by it being a rep-
resentation of the meaning of the sentences. This representation needs to contain as much
information as is needed by any of the target languages that will be generated into. So, for
instance, if Spanish is among the target languages then the gender of any cat will have to be
represented. To a large extent, different languages reflect different cultures in the way they
carve up the world. For instance, the nearest equivalent to the English word “vegetable” in
Japanese is “yasai”, but this word includes things like mint, which would not be considered
vegetables, and it also does not cover carrots, which would. A cooking interlingua would
have to represent both of these concepts, as well as the relation between them. This is why
constructing an interlingua is very challenging, especially if the cultures are very different or
the domain is unrestricted.
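As a toy illustration of the interlingua idea, a language-neutral Prolog term can serve as
the intermediate representation, with a separate mapping from that term into each language.
Everything here (the term like/2 and the predicate names) is invented for the sketch:

```prolog
% The interlingua term like(Experiencer, Stimulus) is realised
% independently in each language; no English-to-French mapping exists.
english(like(X,Y), S) :-
    np_e(X, NX), np_e(Y, NY),
    append(NX, [likes|NY], S).

french(like(X,Y), S) :-
    np_f(X, NX), np_f(Y, NY),
    append(NY, [plait,a|NX], S).    % "Y pleases to X" packaging

np_e(lady,   [the,lady]).      np_f(lady,   [la,dame]).
np_e(cheese, [the,cheese]).    np_f(cheese, [le,fromage]).
```

Analysing “the lady likes the cheese” with english/2 yields the term like(lady,cheese),
from which french/2 generates [le,fromage,plait,a,la,dame], with no transfer rules in
between.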

1.6     Further Reading
   • Russell and Norvig, first part of section 23.1.

   • Hutchins, W. J. and Somers, H. L., An Introduction to Machine Translation, Academic
     Press, 1992.

   • Barnett, J., Mani, I. and Rich, E., “Reversible Machine Translation: What to do when
     the Languages Don’t Match Up”, in Strzalkowski, T., (Ed.), Reversible Grammar in
     Natural Language Processing, Kluwer, 1994.

2      Software for Translation by Structural Transfer
The program file provides some basic support for writing MT systems using
structural transfer. To use it, you need to run Prolog, consult the file and also
consult a file of your own which defines the grammars and lexicons of your two languages, as
well as the appropriate structural transfer rules. You can then use the predicate translate
to translate between source and target sentences, as in the following example:

      | ?- translate([he,eats,it],X).
      **Parsed OK
      **Transferred OK
      **Generated OK

      X = [he,aets,hit] ?

This example shows the English sentence “he eats it” being translated into the supposedly
equivalent Scots sentence “he aets hit”.
    The top level of the Prolog code in the file is a slightly more complex
version of the following:

      translate(In,Out) :-
          distinguished_source(SCat),
          parse(SCat, In, Tree1),
          transfer(Tree1, Tree2),
          distinguished_target(TCat),
          parse(TCat, Out, Tree2).

The system analyses (parses) the source language sentence In to produce a parse tree Tree1
(using predicate parse). It then uses structural transfer rules to translate this into a parse
tree Tree2 for the target language. Finally it then uses the parse predicate “backwards” to
get the target sentence Out that corresponds to Tree2. Using the target language grammar
in the generation, rather than just producing the words at the leaves of the tree, ensures that
the result is indeed a legal target language sentence and that any agreements (for instance,
subjects with verbs) are correct.

2.1     What you have to Specify
To get translation going, you need to specify the following things, which are covered in the
next few subsections:

    • Grammar rules defining both the source and the target languages. In addition, you
      need to say what the “distinguished symbol” is for each language.

    • The lexicon for each language and how words in the source language translate into words
      of the target. This amounts to a kind of “bilingual lexicon” of the kind that is familiar
      from standard foreign language dictionaries.

    • The structural transfer rules which say how phrases larger than single words are
      translated.

2.1.1   Grammar Rules
You need to specify a grammar for both the source language and the target language (i.e. two
separate grammars are needed). For clarity, you need to distinguish between the categories
used in the two languages. For instance, French NPs and English NPs are different in structure
and hence these categories need different names. You could use names like french_np and
english_np. The grammar rules take the usual form of DCG rules, except that the --->
arrow has THREE hyphens in it. This is so that the rules can be treated specially by this
software, rather than being interpreted in the normal way for DCG rules. Here is a simple
rule for Scots NPs:

   scot_np ---> scot_det, scot_n.

Note that your grammars do not have to specify lexical entries if they are implied by the
bilingual lexicon (see below). In this respect, they are different from standard DCG grammars.
    For each of the two languages, you need to specify the “distinguished symbol” which
is the category representing the class of phrases that you wish to translate. Usually this
will be the equivalent of “sentence” in each language. You do this using the predicates
distinguished_source and distinguished_target. For instance,

   distinguished_source(eng_s).
   distinguished_target(fre_s).

for translating from English to French (using the category names of the example in Figure 4).

2.1.2   Parse Trees
The system translates your grammar rules into an internal form that allows a phrase structure
tree to be built automatically when a sentence is recognised (normally with DCGs you would
have to arrange this yourself by giving extra arguments to the predicates). Since in order to
write structural transfer rules you have to know how these trees are represented, this section
summarises the principles:

   • An individual word is represented by a list containing just that word, for instance
     [fish] for the word fish.

   • If a phrase is recognised by a rule of the form LHS ---> RHS then the parse tree will
     be of the form [LHS | RHTrees], where RHTrees is the list of Prolog representations
     of trees corresponding to the recognised phrases in RHS.

   • If a sequence of words W is recognised as being of a particular category C by virtue
     of an entry in the bilingual lexicon (see below), the tree will take the form [C | W1],
     where W1 is a list consisting of the words enclosed themselves in lists. For instance,
     [np(sing), [fish], [and], [chips]].

    As an example, the Prolog representation of the phrase structure tree shown in Figure 3
(laid out to show its structure) is as follows:

   [s, [np(sing), [he]],
       [vp(pres,sing),
           [v(pres,sing), [eats]],
           [np(sing), [fish], [and], [chips]]]]


                                          s

                            np(sing)                vp(pres,sing)

                              he         v(pres,sing)          np(sing)

                                             eats       fish    and       chips

                                Figure 3: Phrase structure tree


2.1.3    The Bilingual Lexicon
The bilingual lexicon consists of a set of facts for the predicate lex, of the form:

   lex(SourceWords, SourceCat, TargetWords, TargetCat).

where SourceWords is a list of Prolog atoms (normally it will only contain one) representing a
source language word or fixed phrase, SourceCat is the category (according to your grammar)
that the word or phrase has, and TargetWords and TargetCat specify one possible translation
into the target language in the same way. If the same source language word has several possible
categories or translations, then you will need to provide multiple lex entries for it.
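For instance, the English word “saw”, which can be a noun or a verb, would need one entry
per reading - something like the following, using the category names of the English-French
example in Figure 4:

```prolog
lex([saw], eng_noun, [scie],   fre_noun(fem)).   % the tool
lex([saw], eng_v,    [voyait], fre_v).           % past tense of "see"
```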
    Here is an example small bilingual lexicon for English-Scots, enough for the “he eats it”
example above (the category names are purely illustrative):

   lex([he],   eng_np, [he],   sco_np).
   lex([eats], eng_v,  [aets], sco_v).
   lex([it],   eng_np, [hit],  sco_np).

2.1.4    Structural Transfer Rules
Structural transfer rules take the form of two Prolog terms separated by the operator ===>.
The left hand side is a Prolog list structure that is matched against the parse tree (using
standard Prolog unification) – you use variables in this for indicating the parts of the tree that
you don’t care about (for the current rule). The right hand side is a Prolog term describing

the corresponding parse tree for the target language, with one exception. A Prolog term
preceded by the operator $$ will be replaced by the result of applying the rule set recursively
to that term. This is how one ensures that subtrees of the pattern matched by the
rule are themselves translated correctly. Thus, for instance, the transfer rule:

      [s, X, Y] ===> [french_s, $$X, $$Y].

indicates that an English phrase of category s with two subphrases X and Y (it might have
been better to use the Prolog variables NP and VP) is to be translated into a French phrase of
category french_s, also with two subphrases. The first subtree of the French parse tree is to
be filled with the translation $$X of the tree X, and the second is to be filled by the translation
$$Y of the tree Y. A rule like the following:

      [np, X, Y] ===> [french_np, $$Y, $$X].

might be used to translate English NPs into French ones, bearing in mind that French adjec-
tives appear after the noun and so some reordering of the material is necessary.
    Note that the order in which you write the structural transfer rules matters – given a
particular phrase to translate, the system will use the first rule whose left hand side matches
the parse tree. This allows you to give rules for matching special cases (e.g. the use of the
English verb “likes”) as well as having general rules (e.g. for all other English verbs) that will
cover the general case.
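For instance, a special-case rule for “likes” would be written before the general sentence
rule, and the system only falls through to the general rule when the special one fails to
match (these two rules reappear in the full example of Figure 4):

```prolog
[eng_s, NP1, [eng_vp, [eng_v, [likes]], NP2]] ===>
    [fre_s, $$NP2, [fre_vp, [fre_v, [plait]], [fre_pp, [fre_p, [a]], $$NP1]]].
[eng_s, NP, VP] ===> [fre_s, $$NP, $$VP].
```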

2.2     Example
Figure 4 shows an example of a specification for English-French translation. This can translate
a few sentences such as the following:
                    English                       French
                    The lady   likes the cheese   Le fromage plait a la dame
                    The lady   saw the cheese     La dame voyait le fromage
                    The lady   likes the saw      La scie plait a la dame
                    The lady   saw the saw        La dame voyait la scie
There are a number of points that can be made about this example:

   • The basic English and French grammars are almost the same, apart from the names of
     the categories (and of course the lexical entries). The only substantial difference is the
     introduction of a “gender” feature in the French to ensure that determiners agree with
     their nouns.

   • Similarly the transfer rules state that almost always the French structure is a mirror of
     the English. The only difference is in the first transfer rule, which overrides the default
     when the English verb is “likes”.

   • Note that a single structural transfer rule handles both types of eng_vp.

   • The English determiner “the” can translate into either a masculine or a feminine de-
     terminer in French. This is non-deterministic, and the system will try both. Only the
     one that corresponds to the gender of the French noun will result in a target language
     parse tree that is accepted by the target language grammar.


eng_s ---> eng_np, eng_vp.
eng_vp ---> eng_v, eng_np.
eng_vp ---> eng_v, eng_pp.
eng_np ---> eng_det, eng_noun.
eng_pp ---> eng_p, eng_np.

fre_s ---> fre_np, fre_vp.
fre_vp ---> fre_v, fre_np.
fre_vp ---> fre_v, fre_pp.
fre_np ---> fre_det(G), fre_noun(G).
fre_pp ---> fre_p, fre_np.

lex([likes],eng_v,[plait],fre_v). % will not be used directly
lex([to],eng_p,[a],fre_p).        % ditto

[eng_s, NP1, [eng_vp, [eng_v, [likes]], NP2]] ===>
[fre_s, $$NP2, [fre_vp, [fre_v, [plait]], [fre_pp, [fre_p, [a]], $$NP1]]].

[eng_s, NP,   VP]    ===>   [fre_s, $$NP,   $$VP].
[eng_vp, V,   Any]   ===>   [fre_vp, $$V,   $$Any].
[eng_np, D,   N]     ===>   [fre_np, $$D,   $$N].
[eng_pp, P,   NP]    ===>   [fre_pp, $$P,   $$NP].

              Figure 4: Example Specification for English-French translation

   • The English word “saw” is ambiguous between a noun and a verb. However, in this
     grammar, when a complete sentence including it is parsed, only one of the categories is
     actually possible. So the incorrect translation into French is avoided.

2.3     Grammar Development and Debugging
You are advised to keep all of your grammars and rules in a single file that you can reconsult
if you make changes. Before you reload your file, call the predicate clear to remove
any trace of the existing rules.
    As a general principle, you are advised to start with simple grammars and rules, and to
test small examples first, before you go on to more complex grammars and examples.

2.3.1    Localising Problems
If a sentence fails to translate properly, the first question to ask is which part of the system
has failed. The messages:

      **Parsed OK
      **Transferred OK
      **Generated OK

indicate how far the system is able to get with a given sentence. If some of these are repeated,
this is because of backtracking. For instance, if a given source language sentence had two
possible syntactic analyses according to the source language grammar, each of which success-
fully transferred but only the second of which could generate a legal target language sentence,
then the sequence of messages up to the first solution would be:

      **Parsed OK
      **Transferred OK
      **Parsed OK
      **Transferred OK
      **Generated OK

Multiple solutions can in general be produced in the parsing of the source sentence and in
the transfer component (via multiple lex entries, not via multiple rules, since only the first
rule is ever chosen).

2.3.2    Problems in Parsing or Generation
If an example fails to parse or generate, you can try to narrow down the search by using the
predicate parse(Category,Phrase,Tree) with progressively smaller parts of the example,
until you find the one that fails. For parsing, you make Category be the desired category of
the phrase (e.g. eng_s, eng_np) and Phrase be the list of words, leaving the tree Tree for the
system to instantiate. For generation, you instantiate the category and the tree, leaving the
sentence to be instantiated.
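For instance, with the English-French grammars of Figure 4, one might test the two directions
as follows (the exact tree shapes depend on your own grammar):

```prolog
% Parsing: category and word list supplied, tree returned.
?- parse(eng_np, [the,lady], Tree).

% Generation: category and tree supplied, word list returned.
?- parse(fre_np, Phrase,
         [fre_np, [fre_det(fem), [la]], [fre_noun(fem), [dame]]]).
```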
    The following predicates are helpful for debugging the grammars for the two languages:

dt(Tree) - can be used to display a tree in a format that is easier to read than a nested
     Prolog list. Try it on a simple example first.

showrules - displays all the current grammar rules and lexical entries, for both languages.
     Only of any use if you have a very small system.

showrules(Category) - shows only those rules (and lexical entries) which have a category
     matching Category on the “left hand side”.

   If all else fails and you cannot easily locate a problem with parsing or generation, then
you can call the Prolog spy facility with the predicate for the distinguished symbol in your
source or target language, e.g.

   ?- spy eng_s.

and step through the tracing as the system attempts to parse or generate from this. The
grammar rules and lexical entries are, in fact, translated into standard Prolog programs in a
similar way to ordinary DCGs (though the translation also includes mechanisms for building
or examining the parse tree), and so you can see what is going on at that level.
    Please note that the parsing system will get into difficulties if you have a grammar rule
that (directly or indirectly) requires an instance of a category to appear at the far left of
another phrase of the same category, for instance as in:

   np(C) ---> det, n(C).
   det ---> np(gen).

Here an np(gen) could occur at the far left of another np(gen). You simply have to avoid
writing grammars of this kind.

2.3.3   Problems in Transfer
Again, if you have a problem with transfer then it is a good idea to try to localise it by giving
increasingly simple examples. For calling the transfer component on its own, you can use the
predicate transfer(SourceTree,TargetTree), where you provide SourceTree as the input
and leave TargetTree to be instantiated.
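For instance, with the rules of Figure 4 one could check the translation of a single NP
subtree on its own:

```prolog
?- transfer([eng_np, [eng_det, [the]], [eng_noun, [lady]]], T).
```

which should instantiate T to the corresponding fre_np tree.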
    The following predicates may be helpful for debugging your transfer rules:

transfer_db_on - switches on extra debugging output during transfer.

transfer_db_off - switches off this extra output.

showtrans - displays all your current transfer rules (both lex and ===>).

When the extra debugging output is switched on, the system produces messages such as the
following:

>Trying to match [eng_s,[eng_np,[eng_det,[the]],[eng_noun,[lady]]],[eng_vp,...
 (Evaluating [fre_s,$$[eng_np,[eng_det,[the]],[eng_noun,[saw]]],[fre_vp,...
 >Trying to match [eng_np,[eng_det,[the]],[eng_noun,[saw]]]
  (Evaluating [fre_np,$$[eng_det,[the]],$$[eng_noun,[saw]]])
  >Trying lexicon for [the] as eng_det
  <Transferred to [le] as fre_det(masc)
  >Trying lexicon for [saw] as eng_noun
  <Transferred to [scie] as fre_noun(fem)
 >Trying to match [eng_np,[eng_det,[the]],[eng_noun,[lady]]]
  (Evaluating [fre_np,$$[eng_det,[the]],$$[eng_noun,[lady]]])

  >Trying lexicon for [the] as eng_det
  <Transferred to [le] as fre_det(masc)
  >Trying lexicon for [lady] as eng_noun
  <Transferred to [dame] as fre_noun(fem)

Basically, the messages step you through the transfer process - the attempt to match a parse
tree with a rule, the right hand side of the rule which must then be evaluated and the successful
transfer of items at the lexical level. Since the trees may be quite complex, it is best to use
this facility with a wide window and/or a small example.

2.4     Acknowledgements
The structural transfer system is, in fact, a fairly minor modification of code for parsing and
semantic interpretation written by Graeme Ritchie and Lex Holt.

3      Towards English-Scots Machine Translation
As the final part of looking at Machine Translation we will look at more examples of structural
transfer, as applied to the translation of English to Scots. The treatment here will be rather
sketchy, and the different examples may not always be handled consistently. The idea is to
give a flavour of the sorts of things one needs to worry about in creating an MT system.
    Although English and Scots are two quite similar languages, nevertheless there are some
non-trivial problems in translating between them.

3.1     Disclaimers
     • It is not clear that there is a single language called “Scots”, any more than there
       is a single language called “English”. With all languages, there are great regional
       variations, and Scots is no exception. Here I have picked some phenomena that seem
       to be interestingly different in Scots and English, but the set of phenomena that I will
       describe does not necessarily come all from one single dialect.

     • There is some debate about how one should write Scots (which is primarily a spoken
       language). Some people prefer to use English-like spelling where possible, whereas
       others like to spell things differently to emphasise the different pronunciation. I will
       tend towards the latter, but probably not consistently.

     • I don’t actually know a lot about the Scots language, and certainly can’t pronounce
       things in it properly.

English and Scots have been chosen here mainly as two languages that we all know or have
easy access to.

3.2     Basic English grammar fragment

      eng_s ---> eng_np(X), eng_vp(X).
      eng_vp(X) ---> eng_v(X), eng_np(_).

      eng_vp(X) ---> eng_v(X), eng_pp.
      eng_np(X) ---> eng_det(X), eng_noun(X).
      eng_pp ---> eng_p, eng_np(_).

This is conventional. Note the use of variables to capture the agreement of subjects in number
with their verbs (and nouns with their determiners).

3.3     Corresponding Scots grammar fragment

      sco_s ---> sco_np(_), sco_vp.
      sco_vp ---> sco_v, sco_np(_).
      sco_vp ---> sco_v, sco_pp.
      sco_np(X) ---> sco_det(X), sco_noun(X).
      sco_pp ---> sco_p, sco_np(_).

This is very similar, but note that, although Scots nouns must agree with their determiners,
subjects do not have to agree with verbs:

       the lambs is oot the field (= The lambs are out of the field)

3.4     Simple transfer rules
      [eng_s, NP,   VP] ===> [sco_s, $$NP, $$VP].
      [eng_vp, V,   Any] ===> [sco_vp, $$V, $$Any].
      [eng_np(X),   D, N]   ===> [sco_np(X), $$D, $$N].
      [eng_pp, P,   NP] ===> [sco_pp, $$P, $$NP].


Note that an English NP translates into a Scots NP with the same number. The English “is”
and “are” translate into the same Scots word.
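The last point could be expressed with two bilingual lexicon entries mapping onto the same
Scots word. The sing/plur values here are illustrative - use whatever number values your
determiners and nouns already carry:

```prolog
lex([is],  eng_v(sing), [is], sco_v).
lex([are], eng_v(plur), [is], sco_v).
```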

3.5     Translation cannot be direct
The above suggests that one might be able to translate from English into Scots quite directly.
Here is one case where you can’t determine the Scots translation of a word simply from the
English. The English simple past tense verb translates differently into Scots depending on
where it occurs:
             English                          Scots
             The girl came from Edinburgh     The lassie cam frae Edinburgh
             If the girl came, she would...   Gin the lassie wid come, she wid ...

In this case, it is the context of the “if” that determines the unusual translation in the second
example; but the “if” could be any number of words to the left of the problematic verb.
    There are also English words that are ambiguous between verb and noun, where the two
different senses translate into different Scots words.

3.6     Revised English analysis
We can treat the simple past form of an English verb as ambiguous between being
simply past and indicating a hypothetical situation (often called “subjunctive”, here subj).
These two different senses translate into different Scots words: for instance, the past sense
of “came” translates as “cam”, whereas the subjunctive sense translates as “wid come”.

The English and Scots grammars now have an extra argument for verbs indicating their type,
one of past, subj or would (note that the latter corresponds to a two-word phrase). We will
also add pres (present) to the list.
    We require that the English grammar appropriately disambiguates a past tense verb
according to the context in which it occurs, and hence forces the correct translation.

3.7     Revised English sentences and VPs
The following revised English rules specify that a sentence can be either present or past
tense. One particular form of present tense sentence consists of “if”, a subjunctive
sentence and then a “would” sentence (as in “if he arrived I would know”).

      eng_s ---> eng_s(pres).
      eng_s ---> eng_s(past).

      eng_s(Vform) ---> eng_np(X), eng_vp(Vform,X).
      eng_s(pres) ---> [if], eng_s(subj), eng_s(would).

      eng_vp(Vform,X) ---> eng_v(Vform,X), eng_np(_).

These grammar rules would then serve to disambiguate those problematic forms.
   The transfer rules would again need to take English sentences to similarly-shaped Scots
ones, with the following addition:

      [eng_s(pres),[if],X,Y] ===> [sco_s(pres),[gin],$$X,$$Y].

Notice that it is not necessary to say anything at this level about the translation of the
problem verb – that will be achieved via the disambiguation performed by the English
grammar and the appropriate lexical entry.
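
The idea can be sketched as follows: the verb type assigned by the grammar keys the lexical transfer entry, so the same English form receives different Scots renderings. Here `clause_vform` is a crude stand-in for the full parse, and the entries are illustrations based on the examples above:

```python
# Transfer entries keyed on (English word, verb type); the subjunctive
# entry is the two-word Scots phrase from the notes.
SCO_VERB = {
    ("came", "past"): ["cam"],
    ("came", "subj"): ["wid", "come"],
}

def clause_vform(tokens):
    # Crude stand-in for eng_s(pres) ---> [if], eng_s(subj), eng_s(would):
    # a clause introduced by "if" gets the subjunctive reading.
    return "subj" if tokens[:1] == ["if"] else "past"

def translate_came(tokens):
    return SCO_VERB[("came", clause_vform(tokens))]

print(translate_came("the girl came from Edinburgh".split()))  # ['cam']
print(translate_came("if the girl came".split()))              # ['wid', 'come']
```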

3.8     Word order differences
Here are some other interesting differences between the languages:

               English                             Scots
               The clothes need to be boiled       The claes needs bylt
               I remember who my brother was       I mind who wiz ma brither
The first of these could probably be handled by VP rules like:
      eng_vp(Vform,X) ---> eng_v(Vform,X), [to,be], eng_v(pastp,_).
      sco_vp(Vform) ---> sco_v(Vform), sco_v(pastp).
where pastp indicates the past participle form of a verb, together with a transfer rule:
      [eng_vp(F,_),V1,[to,be],V2] ===> [sco_vp(F),$$V1,$$V2].
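This deletion rule can be sketched directly: the literal [to,be] child is matched but contributes nothing on the Scots side. The two lexical entries below are hypothetical:

```python
VERB_LEX = {"need": "needs", "boiled": "bylt"}  # hypothetical entries

def transfer_need_vp(tree):
    # [eng_vp(F,_), V1, [to,be], V2] ===> [sco_vp(F), $$V1, $$V2]
    _, v1, to_be, v2 = tree
    assert to_be == ["to", "be"]  # matched literally, not transferred
    return ["sco_vp", VERB_LEX[v1], VERB_LEX[v2]]

print(transfer_need_vp(["eng_vp", "need", ["to", "be"], "boiled"]))
# ['sco_vp', 'needs', 'bylt']
```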
The second introduces an interesting change of word order, where the English verb “was”
could have to “move” an arbitrary distance in order to come before the NP in the Scots.
Here is a rather rough-and-ready way of dealing with this phenomenon, though it is not very
general. First, we introduce simple “indirect questions” into the grammars:
      eng_vp(Vform,X) ---> eng_think_v(Vform,X), eng_indir_q.
      eng_indir_q ---> eng_qpron, eng_np(X), eng_be_v(pres,X).

      sco_vp(Vform) ---> sco_think_v(Vform), sco_indir_q.
      sco_indir_q ---> sco_qpron, sco_be_v(pres), sco_np(_).

Then a transfer rule notes the change in order that must be introduced:
      [eng_indir_q,X,Y,Z] ===> [sco_indir_q,$$X,$$Z,$$Y].
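
The reordering can be sketched like this: the rule swaps the last two children, so the Scots “be” verb ends up in front of the NP. The word-level entries are hypothetical, drawn from the example above:

```python
QLEX = {"who": "who", "my": "ma", "brother": "brither", "was": "wiz"}

def transfer_indir_q(tree):
    # The transfer rule swaps the second and third children: the Scots
    # "be" verb is moved in front of the NP.
    _, qpron, np_words, be_v = tree
    sco = lambda ws: [QLEX[w] for w in ws]
    return ["sco_indir_q", sco(qpron), sco(be_v), sco(np_words)]

eng = ["eng_indir_q", ["who"], ["my", "brother"], ["was"]]
print(transfer_indir_q(eng))
# ['sco_indir_q', ['who'], ['wiz'], ['ma', 'brither']]
```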

3.9     Other Phenomena
Some other issues that arise in translating English to Scots are:
Word sense ambiguity. For instance, the English “wall” could translate into “waw” or
    “dyke”, depending on whether it is a wall of a house or not.
Relative clauses. For instance, the English “the spikes that you stick in the ground are
     sharp” would translate into the equivalent of “the spikes that you stick them in the
     ground are sharp.”
Prepositions. For instance, the English “by” translates into “frae” in passive constructions
    (“I’m going to be killed by my mother”) but into words like “at”, “aside” or “anent”
    when it specifies a location (“she lives by the river”).

3.10     Further Reading
   • Jim Miller, “The Grammar of Scottish English”, in J. Milroy and L. Milroy (Eds), Real
     English: The Grammar of English Dialects in the British Isles, Longman, 1993.
   • CAL/Scots/hame.htm