Notes written by Ben Wood, Fall 2015

The notion of what can (and cannot!) be computed is studied in depth in CS 235. CS 251 is a great place to see its profound impact on how we do computing in practice and what features are possible (or impossible) in programming languages. While CS 251 will not be heavy on proofs, we will repeatedly return to the practical impact of this idea throughout the course.

Computable and Uncomputable Functions

You are familiar with many problems (or functions) that are decidable (or computable), meaning there exists some algorithm that computes an answer (or output) to any instance of the problem (or for any input to the function) in a finite number of simple steps.

A simple example is the integer increment operation:

f(x) = x + 1

It should be intuitive that, given any integer x, we can compute x + 1 in a finite number of steps. Since x is finite, it can be represented by a finite string of digits. Using the carrying addition algorithm you learned in elementary school, we can compute another string of digits representing the integer x + 1, as in the sketch below.
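
For concreteness, here is a minimal sketch of that carrying algorithm in Python, restricted to non-negative integers written as strings of decimal digits (handling a sign would only add one more case):

```python
def increment(digits):
    """Add 1 to a non-negative integer given as a string of decimal digits."""
    result = []
    carry = 1  # we are adding exactly 1
    for d in reversed(digits):
        total = int(d) + carry
        result.append(str(total % 10))
        carry = total // 10
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(increment("199"))   # prints 200
print(increment("999"))   # prints 1000
```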

Yet there are also problems and functions that are undecidable or uncomputable, meaning that there exists no algorithm that can compute an answer or output for all inputs in a finite number of simple steps. (Undecidable simply means uncomputable in the context of a decision problem, whose answer (or output) is either “yes” or “no”.)

Models for Computability and Turing-Completeness

Alonzo Church and his student Alan Turing proved the equivalence of three fundamental models of computability in the 1930s. These equivalent models are:

  • partial recursive functions, a definition of computable functions using recursion;
  • Turing machines, a simple abstract machine based on a finite state machine combined with an infinite tape and a device for reading and writing symbols on the tape;
  • the lambda calculus, essentially a simple programming language definition.

Church and Turing proved that a function is computable by one of these models if and only if it is computable by all three.

Informally, the widely accepted Church-Turing Thesis states that these models also capture computability exactly – that a function is “computable in principle” or “effectively computable” (discounting limits on physical resources) if and only if it is computable on these models. Since there is no formal definition of “computable in principle,” this thesis cannot be proven formally.

General-purpose programming languages are Turing-complete, meaning that they are also equivalent to these three models. They can be reduced (with great effort) to any of these three models. For example:

  • A program (or an interpreter for a programming language) can be encoded as a Turing Machine.
  • A program can be translated to the lambda calculus, a simple programming language at heart (a small taste of the lambda calculus follows this list).
  • A program can be characterized as a partial recursive function, a function that is not necessarily defined on all inputs and that may use recursion in its definition.
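
For a small taste of the lambda calculus, here is the standard Church-numeral encoding written with Python lambdas; in this encoding, the successor function plays the role of the increment function f(x) = x + 1 from above. (The helper names zero, succ, and to_int are just for this illustration.)

```python
# Church numerals represent the number n as "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))  # one more application of f

def to_int(n):
    """Convert a Church numeral to a Python int for display."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))        # 3
print(to_int(succ(three)))  # 4
```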

More importantly, if a programming language is Turing-complete, every computable function is expressible as a program in that programming language. In other words, there is no computable function you cannot write as a program in a Turing-complete language.

While some of these correspondences may become clearer in this course, especially as we consider functional programming languages and techniques for programming language implementation, they are not a focus of the course. We are less concerned with the nature of these correspondences themselves than with their implications.

Two Implications of Computability for Programming Languages

Expressiveness and Power

Programmers and the programming languages community often use the words “power” and “expressiveness” (sometimes together as “expressive power”) when describing or comparing language features. I will use these words during the semester. It is important to remember what these words do and do not mean.

In the context of Turing-complete programming languages, declaring language or feature A to be “more expressive” than B does not mean that A allows you to express more computable functions than B. Any function that can be computed by a program in one Turing-complete programming language can be computed by a program in every other Turing-complete programming language! So why do we have more than one programming language if they are all equivalently powerful in the computability sense? And what do well-trained computer scientists mean when they discuss “expressive power” of programming languages?

The answer, which we will grow to understand more deeply this semester, is that in the design of programming languages we typically care far more about how a computation is expressed than about whether it can be expressed. Different sorts of programming languages are well suited to different sorts of problems (and poorly suited to others). “Expressiveness” and “power” in this context usually describe the ease or elegance with which a particular programming language expresses a particular computation. After all, would you like to build a web browser or Facebook using machine language (the low-level instructions executed by computer hardware)?

The Halting Problem and (Not) Deciding Properties of Programs

We have claimed that uncomputable functions exist. We will consider the halting problem to demonstrate this claim and to show that, in fact, many interesting questions about programs are undecidable. While we will not refer to the proof of the halting problem’s undecidability in the rest of the course, understanding it is very useful for a computer scientist. We will revisit the implications of the halting problem a number of times later in the course.

Halting Problem

The halting problem asks whether a given program P will halt (i.e., complete execution and return a result after a finite number of steps) when run on a given input, x.

The halting problem is undecidable.

Proof

The proof is by contradiction, using a technique called diagonalization (not discussed in detail here, but hopefully familiar from math). We sketch the reasoning below.

Suppose we have a function Halt(P,x) (written in your favorite programming language) that solves the halting problem. Halt(P,x) itself halts on all inputs and returns true if running P(x) will halt and false if it will not.

Define Sly(P) as the following program:

  • Run Halt(P,P). This will always halt and return a result.

  • If the result is true, loop forever, otherwise halt.

As defined, Sly(P) will run forever if P(P) would halt and Sly(P) will halt if P(P) would run forever. (Note that we are not running P(P), just asking what it would do if run.)
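
The construction can be sketched in a few lines of Python, with programs passed around as ordinary function values for illustration. Here halt is the hypothetical decider assumed above; it cannot actually be written, so it appears only as a stub:

```python
def halt(P, x):
    """Hypothetical decider: True if P(x) would halt, False otherwise.
    No such function can actually exist; it stands in for Halt."""
    raise NotImplementedError

def sly(P):
    if halt(P, P):        # would P(P) halt?
        while True:       # ...then loop forever
            pass
    else:
        return            # ...otherwise, halt immediately

# Running sly(sly) forces the contradiction traced out below.
```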

Now, let’s run Sly(Sly).

  • It first runs Halt(Sly,Sly), which halts and returns a result.

  • If the result is true, it now loops forever, otherwise it halts.

In other words:

  • If Sly(Sly) halts, Halt(Sly,Sly) told us that Sly(Sly) would run forever.

  • If Sly(Sly) runs forever, Halt(Sly,Sly) told us that Sly(Sly) would halt.

This is a contradiction!

Implications and Applications

Most interesting questions about programs are undecidable. For example:

  • Will this Java program ever throw a NullPointerException?
  • Will this program ever access a given object again?
  • Will this program ever send sensitive information over the network?
  • Will this program divide by 0?
  • Will this program ever run out of memory, starting with a given amount available?
  • Will this program ever try to treat an integer as an array?

Proving Undecidability via Reduction

To prove a problem P is undecidable, we reduce a known undecidable problem Q to it. Proof by reduction entails:

  • Assume there is a program or algorithm DecideP that decides the problem P.
  • Next, show that we can use DecideP to decide Q, by translating an instance of the Q problem to an instance of the P problem and then applying DecideP. The translation must itself halt.
  • This yields a contradiction, since Q is known to be undecidable.

Q is typically the halting problem.

Example: HaltAny(Q), which determines whether a program Q halts on at least one input, is undecidable.

  • Suppose that such a program HaltAny exists: given any program Q, HaltAny(Q) always halts and returns whether Q halts on at least one input.

  • Implement Halt(P,x) as a program that uses HaltAny.

    Here is our implementation of Halt(P,x):

    • Given inputs P and x, build a new program R that ignores its input and then runs P(x).

      Think of this as manipulating the string that is the source code of program P and the string representing input x to produce a new string, the source code of program R. You should be convinced that it is possible to make such a transformation in finitely many steps.

    • Now, apply HaltAny(R).

      It will halt and return true if and only if P halts on x. Why?

      • R(w) always does the same thing, regardless of the actual input w. Thus if there exists some w for which R(w) halts, then R halts no matter what input is given. Likewise, if there exists some w for which R(w) does not halt, then R does not halt on any input.
  • Since both parts of our implementation of Halt will halt and together they determine whether P(x) would halt if run, we have a solution to the halting problem. This is a contradiction! Since we know the halting problem to be undecidable, the core assumption that allowed us to implement Halt(P,x), namely that HaltAny exists, must be false.

  • Thus, HaltAny is undecidable.
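
As a sketch, the reduction looks roughly like this in Python, again treating programs as function values. The function halt_any stands in for the assumed decider and cannot actually be written:

```python
def halt_any(Q):
    """Hypothetical decider: True if program Q halts on at least one input.
    No such function can actually exist."""
    raise NotImplementedError

def halt(P, x):
    """If halt_any existed, this would decide the halting problem."""
    def R(w):
        # R ignores its input w and just runs P on the fixed input x.
        return P(x)
    # R halts on some input exactly when P(x) halts, so halt_any(R)
    # answers whether P halts on x.
    return halt_any(R)
```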

In Practice

In practice, programming language tools “solve” a lot of undecidable problems by being conservative (a.k.a. giving up) and answering “yes”, “no”, or “maybe” (i.e., “I don’t know”). Oftentimes, the “maybe” category is lumped together with one of “yes” or “no”. We will return to this idea later in the course when we examine static type systems.
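
As a toy illustration of this conservative style (the checker and its rules are invented for this example, not taken from any real tool), a “might this code divide by zero?” analysis could answer “no” only when every divisor is a nonzero numeric literal, and “maybe” otherwise:

```python
# Toy conservative analysis: reports "no" only when it is sure that no
# division by zero can occur, and "maybe" otherwise. Purely illustrative.
import ast

def may_divide_by_zero(source):
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Div, ast.FloorDiv, ast.Mod)):
            divisor = node.right
            # Only a nonzero numeric literal divisor is definitely safe.
            if not (isinstance(divisor, ast.Constant)
                    and isinstance(divisor.value, (int, float))
                    and divisor.value != 0):
                return "maybe"
    return "no"

print(may_divide_by_zero("y = x / 2"))   # no
print(may_divide_by_zero("y = x / z"))   # maybe (conservative: z might be 0)
```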