Week 8: Turing Machines

In [1]:
from tock import *
from IPython.display import IFrame, HTML

Tuesday

We have been climbing up the Chomsky hierarchy of language classes (cf. Figure 4.10):

| Unit | Language class | Automaton | Grammar |
|------|----------------|-----------|---------|
| III | Turing-recognizable languages | Turing machines | Unrestricted grammars |
| | *Context-sensitive languages* | *Linear bounded automata* | *Context-sensitive grammars* |
| II | Context-free languages | Pushdown automata | Context-free grammars |
| I | Regular languages | Finite automata | Regular expressions |

(Cells in italics are not core topics in this course, though we will mention them at some point.)

Now we begin Unit III of the course and finally introduce Turing machines (TMs). They sit at the top of the hierarchy because, as we will discuss at the end of today's lecture, we think Turing machines define what it is possible to compute.

Definition of Turing machines

Read both the informal and formal definitions of TMs (165–170). For now, you can skim or skip over the definition of configuration (page 168 at "As a Turing machine computes" through the end of page 169), though we will certainly need it later.

Watch W8E1: Turing Machines.

There are many variations out there in the definition of TMs. Be careful if you refer to other sources.

The parts of a TM:


The start configuration:


A transition:

The formal definition of TM implies that it is deterministic: For any state $q$ and symbol $a$, there must be exactly one transition from $q$ on symbol $a$.

If the machine:

  • is in state $q$
  • the current symbol is $a$

then the machine will:

  • write symbol $b$
  • move left (L)
  • go to state $r$.

More about moves:

  • If the head is already on the leftmost cell and a transition says to move left, the head remains on the leftmost cell.
  • The move can be L for "move left" or R for "move right". We also allow S for "stay". (My notes may also have N for "no move", which means exactly the same thing.)
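For concreteness, the stepping rules above can be sketched as a tiny simulator in plain Python. This is an illustration only, not Tock's API; the function name, the blank symbol, and the step cap are my own choices:

```python
# A minimal deterministic TM simulator (illustration only, not Tock's API).
# delta maps (state, symbol) -> (new_state, written_symbol, move),
# where move is "L", "R", or "S" (stay).

BLANK = "_"

def run_tm(delta, input_string, start="q0", accept="accept", reject="reject",
           max_steps=10_000):
    tape = list(input_string) or [BLANK]
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return "accept"
        if state == reject:
            return "reject"
        state, symbol, move = delta[(state, tape[head])]
        tape[head] = symbol
        if move == "R":
            head += 1
            if head == len(tape):     # extend the tape with blanks on demand
                tape.append(BLANK)
        elif move == "L":
            head = max(0, head - 1)   # moving left at the left edge stays put
        # move == "S": head does not move
    return "no answer (step limit reached; the machine may loop)"

# A toy machine that scans right over a's and accepts at the first blank:
delta = {
    ("q0", "a"): ("q0", "a", "R"),
    ("q0", BLANK): ("accept", BLANK, "S"),
}
print(run_tm(delta, "aaa"))   # accept
```

The step cap is only a practical safeguard for the sketch; a real TM run has no such cap, which is exactly why looping is possible.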

There are three possible outcomes of a TM run:

  • If the machine enters the accept state, it halts and accepts the string.
  • If the machine enters the reject state, it halts and rejects the string.
  • If the machine runs forever, we say that it loops.

Looping is something we've never had to worry about before. Deterministic FAs and PDAs can't loop; nondeterministic FAs and PDAs can loop (because of cycles of $\varepsilon$-transitions), but it's always possible to change such a machine into an equivalent one that does not loop. But with Turing machines, the possibility of looping is real and very important.

Consequently, there is an important distinction between:

  • A language is Turing-recognizable (more commonly called "recursively enumerable" or "computably enumerable") if there is a TM that accepts all strings in the language and rejects or loops on all strings not in the language.
  • A language is (Turing-)decidable (also called "recursive" or "computable") if there is a TM that accepts all strings in the language and rejects all strings not in the language.

People have built various physical Turing machines, including TMs made out of Lego Mindstorms, Lego and air tubes, wood, and Redstone.

In [2]:
IFrame(width=560, height=315, src="https://www.youtube.com/embed/E3keLeMwfHY")
Out[2]:
In [3]:
IFrame(width=560, height=315, src="https://www.youtube.com/embed/vo8izCKHiF0")
Out[3]:
In [4]:
IFrame(width=560, height=315, src="https://www.youtube.com/embed/KrNTmOSVW-U")
Out[4]:

Examples: formal descriptions

Read the book's examples of TMs (170–175).

Watch W8E2: Turing Machine Examples.

Here are the book's two examples of formal descriptions. (Note that if you are using Tock to write Turing machines, Tock's notation is different from Sipser's; see the Tock documentation.)

Machine $M_2$ recognizes the language $\{\texttt{0}^{2^n} \mid n \geq 0\}$.

In [5]:
m2 = read_csv("tm-m2.csv")
to_graph(m2)
[state diagram of $M_2$: states q1–q5 plus accept and reject]

Here's an example run; try a few different input strings to see how the run changes.

In [6]:
run(m2, "0 0").only_path()
Out[6]:
q1[0] 0
q2␣ [0]
q3␣ x [␣]
q5␣ [x] ␣
q5[␣] x ␣
q2␣ [x] ␣
q2␣ x [␣]
accept␣ x ␣ [␣]

accept
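$M_2$'s strategy is to sweep across the tape crossing off every other 0, which halves the number of remaining 0s on each pass; a pass that finds an odd number of 0s greater than one rejects. The same logic can be sketched in plain Python (the function name is mine, and this is not a Tock API):

```python
def recognizes_power_of_two_zeros(w):
    """Sketch of M2's algorithm: accept iff w = 0^(2^n) for some n >= 0."""
    if not w or any(c != "0" for c in w):
        return False
    count = len(w)
    # Each sweep of M2 crosses off every other 0, halving the count;
    # a sweep that finds an odd count greater than 1 rejects.
    while count > 1:
        if count % 2 == 1:
            return False
        count //= 2
    return True
```

Unlike $M_2$, this sketch halts on every input, but it accepts and rejects exactly the same strings.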

Machine $M_1$ recognizes the language $\{w\#w \mid w \in \{\texttt{0}, \texttt{1}\}^\ast\}$.

In [7]:
m1 = read_csv("tm-m1.csv")
m1
[state diagram of $M_1$: states q1–q8 plus accept]

As explained at the bottom of page 173, the reject state is implicit in this example. You may do the same when you write formal descriptions of TMs.
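A formal description with an implicit reject state corresponds to a partial transition table: every missing (state, symbol) pair is understood to go to the reject state. Here is a sketch, in plain Python rather than Tock, of making such a table total (the function name and table encoding are mine):

```python
def complete_delta(delta, states, symbols, reject="reject"):
    """Make a partial transition table total by sending every missing
    (state, symbol) pair to the implicit reject state. The written
    symbol and move are immaterial once the machine rejects."""
    total = dict(delta)
    for q in states:
        for a in symbols:
            total.setdefault((q, a), (reject, a, "R"))
    return total
```

For example, completing $M_1$'s table would add a transition from q1 on ␣ to the reject state, which is why $M_1$ rejects the empty string.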

In [8]:
run(m1, "0 1 1 0 0 0 # 0 1 1 0 0 0").only_path()
Out[8]:
q1[0] 1 1 0 0 0 # 0 1 1 0 0 0
q2x [1] 1 0 0 0 # 0 1 1 0 0 0
q2x 1 [1] 0 0 0 # 0 1 1 0 0 0
q2x 1 1 [0] 0 0 # 0 1 1 0 0 0
q2x 1 1 0 [0] 0 # 0 1 1 0 0 0
q2x 1 1 0 0 [0] # 0 1 1 0 0 0
q2x 1 1 0 0 0 [#] 0 1 1 0 0 0
q4x 1 1 0 0 0 # [0] 1 1 0 0 0
q6x 1 1 0 0 0 [#] x 1 1 0 0 0
q7x 1 1 0 0 [0] # x 1 1 0 0 0
q7x 1 1 0 [0] 0 # x 1 1 0 0 0
q7x 1 1 [0] 0 0 # x 1 1 0 0 0
q7x 1 [1] 0 0 0 # x 1 1 0 0 0
q7x [1] 1 0 0 0 # x 1 1 0 0 0
q7[x] 1 1 0 0 0 # x 1 1 0 0 0
q1x [1] 1 0 0 0 # x 1 1 0 0 0
q3x x [1] 0 0 0 # x 1 1 0 0 0
q3x x 1 [0] 0 0 # x 1 1 0 0 0
q3x x 1 0 [0] 0 # x 1 1 0 0 0
q3x x 1 0 0 [0] # x 1 1 0 0 0
q3x x 1 0 0 0 [#] x 1 1 0 0 0
q5x x 1 0 0 0 # [x] 1 1 0 0 0
q5x x 1 0 0 0 # x [1] 1 0 0 0
q6x x 1 0 0 0 # [x] x 1 0 0 0
q6x x 1 0 0 0 [#] x x 1 0 0 0
q7x x 1 0 0 [0] # x x 1 0 0 0
q7x x 1 0 [0] 0 # x x 1 0 0 0
q7x x 1 [0] 0 0 # x x 1 0 0 0
q7x x [1] 0 0 0 # x x 1 0 0 0
q7x [x] 1 0 0 0 # x x 1 0 0 0
q1x x [1] 0 0 0 # x x 1 0 0 0
q3x x x [0] 0 0 # x x 1 0 0 0
q3x x x 0 [0] 0 # x x 1 0 0 0
q3x x x 0 0 [0] # x x 1 0 0 0
q3x x x 0 0 0 [#] x x 1 0 0 0
q5x x x 0 0 0 # [x] x 1 0 0 0
q5x x x 0 0 0 # x [x] 1 0 0 0
q5x x x 0 0 0 # x x [1] 0 0 0
q6x x x 0 0 0 # x [x] x 0 0 0
q6x x x 0 0 0 # [x] x x 0 0 0
q6x x x 0 0 0 [#] x x x 0 0 0
q7x x x 0 0 [0] # x x x 0 0 0
q7x x x 0 [0] 0 # x x x 0 0 0
q7x x x [0] 0 0 # x x x 0 0 0
q7x x [x] 0 0 0 # x x x 0 0 0
q1x x x [0] 0 0 # x x x 0 0 0
q2x x x x [0] 0 # x x x 0 0 0
q2x x x x 0 [0] # x x x 0 0 0
q2x x x x 0 0 [#] x x x 0 0 0
q4x x x x 0 0 # [x] x x 0 0 0
q4x x x x 0 0 # x [x] x 0 0 0
q4x x x x 0 0 # x x [x] 0 0 0
q4x x x x 0 0 # x x x [0] 0 0
q6x x x x 0 0 # x x [x] x 0 0
q6x x x x 0 0 # x [x] x x 0 0
q6x x x x 0 0 # [x] x x x 0 0
q6x x x x 0 0 [#] x x x x 0 0
q7x x x x 0 [0] # x x x x 0 0
q7x x x x [0] 0 # x x x x 0 0
q7x x x [x] 0 0 # x x x x 0 0
q1x x x x [0] 0 # x x x x 0 0
q2x x x x x [0] # x x x x 0 0
q2x x x x x 0 [#] x x x x 0 0
q4x x x x x 0 # [x] x x x 0 0
q4x x x x x 0 # x [x] x x 0 0
q4x x x x x 0 # x x [x] x 0 0
q4x x x x x 0 # x x x [x] 0 0
q4x x x x x 0 # x x x x [0] 0
q6x x x x x 0 # x x x [x] x 0
q6x x x x x 0 # x x [x] x x 0
q6x x x x x 0 # x [x] x x x 0
q6x x x x x 0 # [x] x x x x 0
q6x x x x x 0 [#] x x x x x 0
q7x x x x x [0] # x x x x x 0
q7x x x x [x] 0 # x x x x x 0
q1x x x x x [0] # x x x x x 0
q2x x x x x x [#] x x x x x 0
q4x x x x x x # [x] x x x x 0
q4x x x x x x # x [x] x x x 0
q4x x x x x x # x x [x] x x 0
q4x x x x x x # x x x [x] x 0
q4x x x x x x # x x x x [x] 0
q4x x x x x x # x x x x x [0]
q6x x x x x x # x x x x [x] x
q6x x x x x x # x x x [x] x x
q6x x x x x x # x x [x] x x x
q6x x x x x x # x [x] x x x x
q6x x x x x x # [x] x x x x x
q6x x x x x x [#] x x x x x x
q7x x x x x [x] # x x x x x x
q1x x x x x x [#] x x x x x x
q8x x x x x x # [x] x x x x x
q8x x x x x x # x [x] x x x x
q8x x x x x x # x x [x] x x x
q8x x x x x x # x x x [x] x x
q8x x x x x x # x x x x [x] x
q8x x x x x x # x x x x x [x]
q8x x x x x x # x x x x x x [␣]
acceptx x x x x x # x x x x x x ␣ [␣]

accept
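$M_1$'s zig-zag strategy marks matching symbols with x on both sides of the # until everything is crossed off. A plain-Python sketch of the language it recognizes (the function name is mine; this is not a Tock API and does not mimic the head movements):

```python
def recognizes_w_hash_w(s):
    """Sketch of M1's language: accept iff s = w#w with w over {0,1}."""
    if s.count("#") != 1:
        return False              # M1 rejects without exactly one #
    left, right = s.split("#")
    if not set(left + right) <= {"0", "1"}:
        return False              # symbols outside {0,1} are rejected
    # M1 compares the two halves position by position, marking with x;
    # here we just compare the halves directly.
    return left == right
```

Note that the string "#" alone is accepted, since $w$ may be the empty string.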

Writing Turing machines

Read the subsection "Terminology for Describing Turing Machines" (184–185).

Writing Turing machines is extremely tedious, even more so than writing assembly language. Although some authors introduce a mechanism for defining "macros", Sipser takes a different approach, defining three levels of description for Turing machines.

  • You've seen formal descriptions. When you are asked for a formal description, either a table or a state diagram is fine.

  • You've also seen pseudocode-like descriptions of Turing machines; Sipser calls these implementation-level descriptions. These abstract away from states and transitions, but they still speak in terms of the head moving on the tape and reading and writing symbols on the tape. They shouldn't, for example, make use of variables or arithmetic. See the examples in the book to get an idea of what is allowed and what isn't.

  • High-level descriptions will come later and abstract even away from the tape.

When we ask you to write a TM, we'll always specify what kind of description we're looking for.

Thursday

The Church-Turing thesis

Read the rest of Section 3.3 (182–184).

Watch W8E3: The Church-Turing Thesis.

The Church-Turing thesis (CTT) is also often simply called "Church's thesis" but the name "Church-Turing thesis" acknowledges Turing's key role in developing the thesis.

Sipser's concise statement of the thesis is "Intuitive notion of algorithms equals Turing machine algorithms." It is a thesis (or possibly, a definition), not a theorem. Rather than prove it, we can only give arguments in favor of it.

The first justification comes from Turing's original paper (1936), where he proposed Turing machines as a model of what humans do when they compute. He imagined a computing person in an idealized scenario:

  • He has an infinitely long paper tape, divided into squares.
  • He can write one symbol in each square, and the number of possible symbols is finite (e.g., 0 to 9).
  • He can only look at a finite number of squares at a time.
  • He can only move a finite distance at a time.
  • He has only a finite number of “states of mind.”

Basically this is an appeal to intuition that when people compute, this is what they do. Furthermore, he proved (quite briefly) that Turing machines can perform any such computation.

In addition to Turing's original justification, another justification for the CTT, the one more commonly cited today, is that many other proposals have been made for models of computability, and for the most part they have all turned out to be equivalent to Turing machines. The end of Section 3.2 alludes to some of these; the best-known is probably Church's λ-calculus.

Accidentally Turing Complete is a collection of other weird things that are equivalent to TMs.

An important subcase of this argument is that no matter how we try to augment Turing machines to make them more powerful, it seems to always turn out to be equivalent to standard Turing machines. This is the topic of the next class.
