Some further examples

Doing reductions is a skill like any other - practice helps. Before you dive into the problems for chapter five you will find it helpful to work through a few more examples. In these notes I will show some additional examples of mapping reductions and then walk you through the solutions to three more problems from the chapter five exercises.

HALTTM is undecidable

Here we show that ATM ≤m HALTTM.

For our first attempt we try the trivial mapping f(<M,w>) = <M,w>. This fails because there may well be a machine M that does not accept some w but still halts on w, by rejecting it. That gives an example where <M,w> is not in ATM but <M,w> is in HALTTM.

A mapping that works is to map <M,w> to <N,w> where N is the following machine:

On input x:

  1. Simulate M running on x.
  2. If M accepts x, stop and accept.
  3. If M rejects x, run forever.

This mapping works: if <M,w> is in ATM then N halts on w, and if <M,w> is not in ATM then N either runs forever in step 1 or runs forever in step 3. In both cases <N,w> is not in HALTTM.
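As an illustration, the mapping can be sketched in Python if we let ordinary callables stand in for Turing machines - a machine "accepts" by returning True, "rejects" by returning False, and "loops" by never returning. The encoding and names here are my own, not part of the formal construction:

```python
def reduce_atm_to_halttm(M, w):
    """Map <M,w> to <N,w>, where N halts on w exactly when M accepts w.

    M is a Python callable standing in for a Turing machine: it returns
    True (accept), returns False (reject), or never returns (loop).
    """
    def N(x):
        if M(x):          # steps 1-2: simulate M on x; accept if M accepts
            return True
        while True:       # step 3: M rejected, so run forever
            pass
    return (N, w)

# A stand-in machine that accepts only the string "aa":
M = lambda x: x == "aa"
N, w = reduce_atm_to_halttm(M, "aa")
```

Note that calling N on an input that M rejects really would hang, which is exactly the point of the construction.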

ETM is undecidable

Here we try to show that ATM ≤m ETM.

For our first attempt we try the trivial mapping f(<M,w>) = <M>. This fails because if <M,w> is in ATM then M accepts at least one input, and thus <M> cannot possibly be in ETM.

This first, failed attempt suggests that what we should actually do is to try to show that ATM ≤m the complement of ETM.

The naive mapping f(<M,w>) = <M> also fails in this case, because if <M,w> is not in ATM there may still be some other input y such that M accepts y, which puts <M> outside of ETM when the reduction needs it to be inside.

A mapping that works is to say that f(<M,w>) = <N>, where N is the following machine:

On input x:

  1. If x ≠ w, stop and reject.
  2. Simulate M running on x and accept if M does.

Since this mapping shows that the complement of ETM is undecidable, it immediately follows that ETM itself is not decidable.
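With ordinary Python callables standing in for Turing machines (my own encoding, used purely for illustration), the machine N can be sketched as:

```python
def reduce_atm_to_co_etm(M, w):
    """Map <M,w> to <N>, where L(N) is nonempty exactly when M accepts w.

    M is a callable standing in for a Turing machine, returning True
    (accept), False (reject), or never returning (loop).
    """
    def N(x):
        if x != w:
            return False    # step 1: reject every input other than w
        return M(x)         # step 2: simulate M running on x, accept if M does
    return N

M = lambda x: x == "ab"              # a stand-in machine accepting only "ab"
N = reduce_atm_to_co_etm(M, "ab")    # L(N) = {"ab"}, so <N> is not in ETM
```

When M does not accept w, the resulting N accepts nothing at all, so <N> lands in ETM, exactly as the reduction requires.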

Problem Four

If A ≤m B and B is a regular language, does it follow that A is regular?

No: Let A be any Turing decidable, non-regular language, let a be any element of B, and let b be any element of the complement of B. (This requires B to be neither empty nor all of Σ*, which is true of any nontrivial regular B.) Here is a computable mapping from A to B:

On input x:

  1. Run the decider for A on x.
  2. If the decider accepts, output a.
  3. If the decider rejects, output b.

An interesting thing to note in this answer is that mappings from A to B don't have to be even remotely onto. We are under no obligation at all to map to every element of B - in this answer I didn't even try to do that.
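Here is a concrete sketch of this answer in Python. The particular choices below are my own: A = { 0^n 1^n } is decidable but not regular, B = strings that start with "0" is regular, a = "0" is in B, and b = "1" is in the complement of B.

```python
def make_reduction(decide_A, a, b):
    """Build f with: x in A  =>  f(x) = a in B;  x not in A  =>  f(x) = b not in B.

    decide_A is a total decider for A; a and b are fixed strings with
    a in B and b in the complement of B.
    """
    def f(x):
        return a if decide_A(x) else b   # steps 1-3 of the mapping
    return f

def decide_A(x):
    # A decider for A = { 0^n 1^n }, a decidable but non-regular language.
    n = len(x) // 2
    return len(x) % 2 == 0 and x == "0" * n + "1" * n

f = make_reduction(decide_A, a="0", b="1")
```

Notice that f hits only two elements of B's world, a and b, no matter how large B is - the mapping is nowhere near onto, and it doesn't need to be.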

Problem Fifteen

Consider the problem of determining whether or not a Turing machine M on an input w ever attempts to move its tape head to the left at any point in its computation on w. Formulate this as a language and show that it is decidable.

L = {<M,w> | M moves its tape head left while operating on the input w }

This is an interesting problem, because chapter five is mostly about proving that everything in sight is undecidable. This is not too far from being true, because just about every problem involving the general behavior of Turing machines is undecidable. To prove that some language involving the behavior of Turing machines is decidable, we have to look for something special or unusual in the problem at hand.

Here we take advantage of the fact that if a Turing machine never moves its tape head to the left, then after at most |w| steps it runs past the end of the input string. This puts the Turing machine in a special situation that we can exploit.

Here is what the decider will do:

On input <M,w>:

  1. Simulate M running on w for |w| steps.
  2. If the simulation moved the tape head to the left at any point, stop and accept.
  3. If the simulation has reached a halting state, stop and reject.
  4. Note the state the machine was in when we stopped the simulation. Construct a new control for M that has the same states as before but only retains the transitions that read a blank. Also, mark every state that has an outgoing transition that reads a blank and moves left.
  5. Run the breadth-first search algorithm on the modified control from the state the machine stopped at. If the search is able to reach one of the marked states, stop and accept. Otherwise, stop and reject.

The key to this solution is that once the tape head runs past the end of the input we are operating in a simplified environment. In this state, the machine will only ever encounter blanks until it decides to move left. This observation allows us to greatly simplify the control and essentially turn the control into a simple directed graph. Doing the search in such a graph for a state that would allow us to move left is simple enough that BFS can handle it.
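Here is a sketch of this decider in Python, with the control given as a transition table; the representation is my own. Left moves during the first |w| steps are caught by the simulation, and the search in steps 4-5 walks the blank-reading transitions looking for a reachable left move.

```python
from collections import deque

def moves_left(delta, start, blank, w):
    """Decide whether the machine ever moves its head left while running on w.

    delta maps (state, symbol) -> (new_state, written_symbol, 'L' or 'R');
    a missing entry means the machine halts.  This encoding is my own.
    """
    tape, state = list(w), start
    for head in range(len(w)):               # step 1: simulate for |w| steps
        key = (state, tape[head])
        if key not in delta:
            return False                     # step 3: halted without moving left
        state, tape[head], d = delta[key]
        if d == 'L':
            return True                      # step 2: the head moved left
    # Steps 4-5: the head now sits on the first blank and will only ever
    # read blanks, so the control is just a directed graph on states.
    seen, queue = {state}, deque([state])
    while queue:
        q = queue.popleft()
        if (q, blank) in delta:
            nq, _, d = delta[(q, blank)]
            if d == 'L':
                return True                  # a reachable state moves left
            if nq not in seen:
                seen.add(nq)
                queue.append(nq)
    return False                             # no left move is reachable
```

Because only blank-reading transitions survive past the end of the input, the search space is finite even though the machine itself might run forever, which is why the decider always halts.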

About recognizers

So far we have dealt mostly with deciders for languages. As you know, we have another alternative available to us, the recognizer. A valid recognizer for a language only has to accept every input that is in the language the machine recognizes. When we feed a recognizer an input that is not in the language, it can either stop and reject or it can run forever. This is a convenient loophole that makes it generally easier to build recognizers than deciders.

ATM is undecidable, which means we can't build a decider for it. We can, however, easily build a recognizer for it:

On input <M,w>:

  1. Simulate M running on w until the simulation stops.
  2. If M accepted w, stop and accept.
  3. If M rejected w, stop and reject.

This is a valid recognizer for ATM, because if we feed it an input in ATM it will accept. If we feed it an input and step 1 runs forever, our machine has implicitly rejected the input, which is what it should do.

Here also is the recognizer version of a key theorem I showed last time:

Theorem If A ≤m B and B is Turing recognizable, then A is Turing recognizable.

Proof Here is a recognizer for A:

On input x, where x is a potential member of A:

  1. Compute f(x).
  2. Run the recognizer for B on f(x).
  3. Since A ≤m B, we have that f(x) ∈ B if and only if x ∈ A. If the recognizer for B accepted f(x), it means that f(x) ∈ B, and thus x ∈ A. We stop and accept.
  4. Likewise, if the recognizer for B rejects f(x) it means that f(x) is not in B and thus x is not in A. We stop and reject.

This machine allows for one other possibility: step 2 could run forever. In that case, the recognizer for B is saying implicitly that f(x) is not in B, which means that x is not in A. Running forever on an input that is not in A is something that a recognizer for A is allowed to do.
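The proof can be sketched as a higher-order function in Python, with callables standing in for machines; the concrete A, B, and f in the example are my own choices, not part of the theorem.

```python
def recognizer_for_A(f, recognize_B):
    """Given a computable f with x in A iff f(x) in B, and a recognizer
    for B, build a recognizer for A."""
    def recognize_A(x):
        y = f(x)               # step 1: compute f(x)
        return recognize_B(y)  # steps 2-4: may accept, reject, or run forever
    return recognize_A

# Example: A = strings starting with "a", B = strings ending with "a",
# and f reverses its input, so x is in A iff f(x) is in B.
rec_A = recognizer_for_A(lambda x: x[::-1], lambda y: y.endswith("a"))
```

If recognize_B hangs on f(x), then recognize_A hangs on x - and since that can only happen when x is not in A, the result is still a valid recognizer.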

Finally, the depressing truth is that there exist languages that are not even Turing recognizable. One example of this is the language EQTM. Neither that language nor its complement is Turing recognizable. The proof that EQTM is not recognizable is to do a reduction from ATM to the complement of EQTM. Taking complements on both sides gives a reduction from the complement of ATM to the complement of the complement of EQTM (EQTM itself). Since the complement of ATM is not Turing recognizable, EQTM is not Turing recognizable.

Here is the machine that implements the computable f(x) that reduces ATM to the complement of EQTM.

On input <M,w>:

  1. Construct the machine M1 that rejects all of its inputs.
  2. Construct a machine M2 that ignores its input, simulates w running on M, and does what M does.
  3. Return the pair <M1 , M2>.
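With callables standing in for machines (my own encoding, for illustration only), the construction of M1 and M2 looks like this:

```python
def reduce_atm_to_co_eqtm(M, w):
    """Map <M,w> to <M1,M2> with L(M1) != L(M2) exactly when M accepts w.

    M is a callable standing in for a Turing machine, returning True
    (accept), False (reject), or never returning (loop).
    """
    def M1(x):
        return False   # step 1: M1 rejects everything, so L(M1) is empty
    def M2(x):
        return M(w)    # step 2: ignore x, simulate M on w, do what M does
    return (M1, M2)

M = lambda x: x == "w"                   # a stand-in machine accepting only "w"
M1, M2 = reduce_atm_to_co_eqtm(M, "w")   # M accepts w, so L(M2) is everything
```

If M accepts w then L(M2) is all strings while L(M1) is empty, so the pair is in the complement of EQTM; if M does not accept w then both languages are empty and the pair is in EQTM.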

Problem Twenty-two

Show that A is Turing-recognizable if and only if A ≤m ATM.

If A ≤m ATM then Theorem 5.28 allows us to conclude that A is Turing recognizable.

If A is a Turing recognizable language we use its recognizer R to construct the following mapping from A to ATM:

On input x:

  1. Return the pair <R,x>.

The mapping is trivial, and clearly computable.

If x is in A, R will accept x and the pair <R,x> lands in ATM.

If x is not in A, R will not accept x and the pair <R,x> lands in the complement of ATM.
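With callables standing in for machines and a Python pair standing in for the encoding <R,x> (my own sketch, not part of the formal construction), the mapping looks like this:

```python
def reduction_to_atm(R):
    """Given a recognizer R for A, map each input x to the pair <R,x>."""
    def f(x):
        return (R, x)   # step 1: just pair the recognizer with the input
    return f

R = lambda x: x.startswith("a")   # a stand-in recognizer for an example A
f = reduction_to_atm(R)
```

If x is in A then R accepts x and the pair lands in ATM; otherwise R does not accept x and the pair lands in the complement.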