# Review of Last Class

Theory of Computation -- Intro Lecture II; Spring 2008

Before we consider some specific examples of TM’s, let’s review some of the important
concepts from last class.

1)     Let M be a TM. Then, the Language Recognized by M, denoted L(M), is the set of all
strings that M “accepts”. This refers to strings that, after being processed, take M to the
state qaccept, which is a halting state.
a)       Note that this definition speaks of “acceptance” only. It does not say anything
about strings that are not in L(M).
2)     A TM is said to recognize a language L if L = L(M). There is, however, no specification
of what happens when the input is some string x ∉ L. In this case, M may reject the string
or loop forever.
a)       Definition 3.5:          If a language can be recognized by a TM, it is called
“Turing Recognizable”.
3)     A TM is said to decide a language L if L = L(M), and for all strings x ∉ L, x will take M to
the qreject state. That is, M never loops. It either answers “accept” or “reject” for all input
strings. It therefore “decides” the language.
a)       Definition 3.6:          If a language can be decided by a TM, it is called “Turing
Decidable”.
4)     NOTE:             all Turing-Decidable languages are clearly Turing-Recognizable (but the
opposite implication is not true).
5)     Don’t forget that other authors use different terminology for the same ideas.
6)     Also, review the definitions for configuration, yield, and the definitions for start, reject,
and halting configurations.
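To make the accept/reject/loop trichotomy concrete, here is a minimal single-tape TM simulator in Python. The names (`run_tm`, `BLANK`, the toy even-length machine) are illustrative, not from the text or from Sipser; "_" stands in for the blank symbol ⊔.

```python
# A machine is a dict mapping (state, symbol) -> (new_state, write_symbol, move),
# with move in {"L", "R"}. Missing entries send the machine to the reject state.
BLANK = "_"

def run_tm(delta, start, accept, reject, tape_input, max_steps=10_000):
    """Run the TM; return 'accept', 'reject', or 'no halt' if the step cap is hit."""
    tape = dict(enumerate(tape_input))   # sparse tape: position -> symbol
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return "accept"
        if state == reject:
            return "reject"
        symbol = tape.get(head, BLANK)
        state, write, move = delta.get((state, symbol), (reject, symbol, "R"))
        tape[head] = write
        head += 1 if move == "R" else -1
    return "no halt"   # a recognizer may loop; a decider never reaches this

# A toy decider for {w in {0}* : |w| is even}: alternate between two states.
delta = {
    ("q_even", "0"): ("q_odd", "0", "R"),
    ("q_odd", "0"): ("q_even", "0", "R"),
    ("q_even", BLANK): ("q_accept", BLANK, "R"),
    ("q_odd", BLANK): ("q_reject", BLANK, "R"),
}
```

A decider’s simulation always ends in “accept” or “reject”; a machine that merely recognizes a language may run past any step cap on strings outside it.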

## Examples of Turing Machines

### Example 3.7

Let’s consider some specific examples of TM’s. We’ll start with the example from the
text on page 143.
We wish to construct a TM M2 that decides the language A = {0^(2^n) | n ≥ 0}. This
language consists of all strings of 0’s whose length is a power of 2.

**Notes about the alternative solution presented in class... Well done!**

I believe that the suggestion made in class to solve this problem is correct, although it
may require some more overhead characters. Note, however, that in our current context we are
not concerning ourselves with issues of efficiency, size of program, or even run time complexity.
We are simply concerned with whether or not it can be done. If a language can be recognized or
decided by a TM, then we have learned something valuable about that language regardless of
how efficient the machine is. Therefore, either solution, i.e. the one presented in class or
Sipser’s, is sufficient to show that the language A is Turing Decidable.

A verbal description for this machine is given on page 143. I won’t repeat it here.
However, in my view, there are a couple of problems with the description given in the text:

1)      First, it is confusing. Although it is not clear from the way it is written, the tests in Steps
2 and 3 must be applied before Step 1 is applied. If Steps 2 and 3 are applied after Step
1, the machine can’t distinguish between 0^8 and 0^9.
a)       Notice the past-tense word “contained”, which seems to imply that Sipser was
aware of this issue. But then why write Steps 2 and 3 after Step 1?
2)      Second, this description does not correspond to the state diagram given on the next page.

The formal description of M2 is given in Sipser, p. 143:

M2 = (Q, Σ, Γ, δ, q1, qaccept, qreject), where:
1)     Q = {q1, q2, q3, q4, q5, qaccept, qreject}
2)     Σ = {0}
3)     Γ = {0, x, ⊔}
4)     δ is described with a state diagram
5)     The start, accept, and reject states are q1, qaccept, and qreject

[State diagram for M2 (given on the next page in Sipser). In text form, its transitions are:

q1: on 0, write ⊔ and move R, go to q2; on x or ⊔, move R, go to qreject
q2: on x, move R, stay in q2; on 0, write x and move R, go to q3; on ⊔, move R, go to qaccept
q3: on x, move R, stay in q3; on 0, move R, go to q4; on ⊔, move L, go to q5
q4: on x, move R, stay in q4; on 0, write x and move R, go to q3; on ⊔, move R, go to qreject
q5: on 0 or x, move L, stay in q5; on ⊔, move R, go to q2]

The transitions are explained by:
1)     the character to the left of the arrow is the character that is currently being scanned by the
read/write head (i.e. before the application of the transition). Note that this can include
anything in the tape alphabet (not just the input alphabet), including spaces.
2)     The RHS of the arrow will always contain either an “R” or an “L”, indicating whether
the read/write head moves right or left after the transition.
3)     If the RHS of the arrow contains some other character, then that character is written to
the tape prior to the move of the read/write head. If there is no other character, then the
scanned character is left unchanged.

Go through the sample run for 0000 (p. 132)

In my view, a better description for M2, one that describes what the machine actually
does, is the following:

1)     Mark the first 0 with a blank (⊔)
2)     Sweep left to right, crossing off every other 0
a)       Note that the first 0 in the sequence is crossed off in state q2. The alternating 0’s
are crossed off in states q3 and q4.
3)     If all 0’s were crossed off, accept
a)       this will be detected when the tape head moves right all the way through
the x’s and encounters a blank (⊔) without having encountered a 0 first. The finite
state control will be in state q2 if this happens.
4)     If the last 0 in the sequence didn’t get x’ed, reject
a)       this will be detected when there is only one extra 0 encountered in state q3. The
machine goes to q4 but never makes it back to q3 before a blank is encountered.
When it finally encounters the blank symbol, it will go to the reject state.
5)     go to 2
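The counting idea behind the description above can be checked directly in Python: each sweep halves the number of surviving 0’s, so the machine accepts exactly the strings whose length is a power of 2. This sketch follows the description, not the tape mechanics (the function name is mine):

```python
def m2_decides(w):
    """Decide A = { 0^(2^n) : n >= 0 } the way M2 does, by repeated halving."""
    if w == "" or set(w) != {"0"}:
        return "reject"          # input must be a nonempty string of 0's
    count = len(w)
    while count > 1:
        if count % 2 == 1:
            return "reject"      # an odd number (> 1) of 0's survives a sweep
        count //= 2              # one sweep crosses off every other 0
    return "accept"              # exactly one 0 left: the length was a power of 2
```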

### Example 3.9, page 145

Let’s construct a TM that decides a language we’ve already looked at, namely
B = {w#w | w ∈ {0,1}*}

We’ll go through the design of this machine in class. Incidentally, note the typo in the
state set, Q. The largest numbered state should be q8 instead of q14.
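Before the in-class design, here is a Python sketch of the standard zig-zag strategy for this machine: compare corresponding symbols on the two sides of the #, crossing off each matched pair. The length check stands in for the machine’s blank detection, and the function name is mine:

```python
def decides_B(s):
    """Decide B = { w#w : w in {0,1}* } by matched crossing-off."""
    if s.count("#") != 1 or any(c not in "01#" for c in s):
        return "reject"
    left, right = s.split("#")
    if len(left) != len(right):
        return "reject"
    tape = list(s)
    sep = len(left)                        # index of the "#"
    for i in range(len(left)):
        # zig: read the i-th unmarked symbol on the left;
        # zag: compare it with the i-th unmarked symbol on the right
        if tape[i] != tape[sep + 1 + i]:
            return "reject"
        tape[i] = tape[sep + 1 + i] = "x"  # cross off the matched pair
    return "accept"
```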

### Example 3.11, page 146
As this example shows, it is not hard to create a TM that does elementary arithmetic.
Let’s create a TM to decide the language C = {a^i b^j c^k | i×j = k and i, j, k ≥ 1}.

Again, let’s do this one in class.
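Though we’ll build the TM in class, the check it performs can be previewed in Python. Sipser’s machine first verifies that the input has the shape a+ b+ c+ and then verifies the multiplication by crossing off one c for each (a, b) pair; the names below are mine:

```python
import re

def decides_C(s):
    """Decide C = { a^i b^j c^k : i*j = k and i, j, k >= 1 }."""
    m = re.fullmatch(r"(a+)(b+)(c+)", s)
    if m is None:
        return "reject"              # wrong shape: not a+ b+ c+
    i, j, k = (len(g) for g in m.groups())
    # the TM crosses off j c's once for each of the i a's, checking i*j = k
    return "accept" if i * j == k else "reject"
```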

### Example 3.12, page 147
Let’s create a TM that solves the element distinctness problem.
Let E = {#x1#x2# ... #xl | each xi ∈ {0,1}* and xi ≠ xj for each i ≠ j}

This machine works by comparing x1 with x2 through xl. If any of these strings are equal,
it rejects. If not, it compares x2 with x3 through xl, etc. Only after all comparisons have been
made and no matches found, does it accept.

Notice the technique used here of marking symbols on the tape.
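The comparison schedule just described translates directly into Python (the helper name is mine; the real machine carries out each comparison by marking the leading # of the two strings currently under comparison):

```python
def decides_E(s):
    """Decide E = { #x1#x2#...#xl : each xi in {0,1}*, all xi distinct }."""
    if not s.startswith("#") or any(c not in "01#" for c in s):
        return "reject"
    xs = s.split("#")[1:]            # x1, ..., xl
    # compare x1 with x2..xl, then x2 with x3..xl, and so on
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return "reject"      # found two equal strings
    return "accept"                  # all comparisons made, no matches
```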

## Variants of Turing Machines (p. 148)

### Multitape Turing Machines
A multitape TM is like an ordinary TM except that it has multiple tapes. Each tape has
its own read/write head. Initially the input appears on tape 1, and the others are blank. (Sipser,
148)
Note that the transition function must be modified to allow for reading, writing, or
moving the heads on some or all of the tapes simultaneously. (Sipser, 148) The new transition
function is:

: Q x k  Q x k x {L,R,S}k

where “k” is the number of tapes.
So, a specific (but general) example of a transition function might look like:

δ(qi, a1, ..., ak) = (qj, b1, ..., bk, L, R, ..., L)

where there are k of the L’s and R’s.
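One step of such a machine is easy to write down. The sketch below stores each tape as a sparse dict and applies a single k-tuple transition; the function and names are illustrative, not from the text:

```python
BLANK = "_"   # stands in for the blank symbol

def multitape_step(delta, state, tapes, heads):
    """Apply one k-tape transition; tapes is a list of dicts (position -> symbol)."""
    read = tuple(t.get(h, BLANK) for t, h in zip(tapes, heads))
    new_state, writes, moves = delta[(state, read)]
    for t, h, b in zip(tapes, heads, writes):
        t[h] = b                                   # write all k tapes at once
    shift = {"L": -1, "R": 1, "S": 0}
    new_heads = [h + shift[m] for h, m in zip(heads, moves)]
    return new_state, new_heads
```

For example, a 2-tape transition δ(q0, 0, ⊔) = (q0, 0, 0, R, R) copies the scanned 0 from tape 1 to tape 2 and moves both heads right.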

Although multitape TM’s appear to be more powerful than single tape TM’s, they are
not, as is shown by the following theorem. In fact, the two types of TM’s are equal in their
computational power. In order to show this, we must show that every single tape machine can be
simulated by a multitape machine and vice versa.
Showing the first implication, i.e. that every single tape machine can be simulated by a
multitape machine, is easy, since every single tape machine is a multitape machine that just
happens to have one tape. If one wanted, one could create some dummy tapes on the simulating
machine that do nothing, and simulate the action of the single tape machine on that multitape
machine by using only one tape. This is clearly trivial.

The most difficult direction, therefore, is the opposite one: showing that every multitape
machine can be simulated by a single tape machine. That is the result of the next theorem.

### Theorem 3.13 (page 149)
Every multitape TM has an equivalent single tape TM.

Proof:
To prove this, we must show that, given a multitape machine M, say with k tapes, we can
construct a single tape machine S that does exactly the same thing. The two main issues are:

1)       how to keep track of the contents of the k tapes of M
2)       how to keep track of the positions of the k read heads for the k tapes of M

We will develop this in class.
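The standard encoding used in this proof lays the contents of all k tapes of M on S’s single tape, separated by #, with a “dotted” symbol remembering where each virtual head sits. A sketch of that packing, assuming letter alphabets and heads within the written region (uppercase stands in for Sipser’s dotted symbols; the function is mine):

```python
def encode(tapes, heads):
    """Pack the k tapes of M, with head positions, into one single-tape string."""
    parts = []
    for tape, head in zip(tapes, heads):
        cells = []
        for pos, sym in enumerate(tape):
            cells.append(sym.upper() if pos == head else sym)  # dot = uppercase
        parts.append("".join(cells))
    return "#" + "#".join(parts) + "#"
```

To simulate one step of M, S scans its whole tape to collect the k dotted symbols, then makes a second pass updating each region, shifting a region right by one cell whenever M writes onto a # boundary.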

### Corollary 3.15 (p. 150)
A language is Turing-Recognizable iff some multitape TM recognizes it.

Proof:
“⇒” If a language is Turing-Recognizable, it is recognized by a single tape TM. Since
this is a special case of a multitape machine, it is also recognized by a multitape machine (that
just happens to have one tape), and this direction is proven.

“⇐” If a language is recognized by a multitape machine, it is recognized by a single
tape machine (Theorem 3.13). But any language recognized by a single tape machine is
Turing-Recognizable by Definition 3.5.

## Non-Deterministic Turing Machines (p. 150)

Non-determinism in TM’s is similar to non-determinism we’ve encountered with FA’s
and PDA’s: at any point in the computation the TM may proceed down one of several
computational pathways. The collection of these computational paths can be viewed as a tree
structure, where each node consists of a configuration that is encountered down a specific path,
and the root node is the starting configuration. The non-deterministic TM will explore each of
these pathways, and if even a single branch of the computation leads to an accept state, the
machine will accept the input.
The transition function for a non-deterministic TM has the form:

δ: Q × Γ → P( Q × Γ × {L, R} )

This accounts for the fact that for any combination of state and tape symbol currently being read,
the machine may have multiple possibilities.

By using a multitape TM, we can show the following theorem:

### Theorem 3.16
Every non-deterministic TM has an equivalent deterministic TM.
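The proof idea can be previewed in code: a deterministic machine searches the computation tree breadth-first and accepts if any branch reaches the accept state. The sketch below makes simplifying assumptions (a one-way-infinite tape, and a step cap standing in for a fair exploration); all names are mine:

```python
from collections import deque

BLANK = "_"

def ntm_accepts(delta, start, accept, w, max_steps=10_000):
    """BFS over configurations: accept iff some branch reaches `accept`.

    delta maps (state, symbol) to a *set* of (new_state, write, move) choices."""
    initial = (start, 0, tuple(w))           # configuration: state, head, tape
    queue, seen = deque([initial]), {initial}
    for _ in range(max_steps):
        if not queue:
            return False                     # every branch halted; none accepted
        state, head, tape = queue.popleft()
        if state == accept:
            return True                      # a single accepting branch suffices
        sym = tape[head] if head < len(tape) else BLANK
        for new_state, write, move in delta.get((state, sym), ()):
            cells = list(tape) + [BLANK] * (head + 1 - len(tape))
            cells[head] = write
            new_head = max(0, head + (1 if move == "R" else -1))
            cfg = (new_state, new_head, tuple(cells))
            if cfg not in seen:              # skip repeated configurations
                seen.add(cfg)
                queue.append(cfg)
    return False                             # step cap hit: treated as reject (demo only)
```

A toy non-deterministic machine over {a, b} can accept strings ending in “a” by guessing, at each “a”, whether it is the last symbol and then checking for the blank.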
