This note is based on lecture notes for the Caltech course Math 6c, prepared with A. Kechris and M. Shulman.

**1. Decision problems**

Consider a finite alphabet $\Sigma$, and “words” on that alphabet (the “alphabet” may consist of digits, of abstract symbols, of actual letters, etc.).

We use the notation $\Sigma^*$ to indicate the set of all “words” from the alphabet $\Sigma$. Here, a word is simply a finite sequence of symbols from $\Sigma$. For example, if $\Sigma=\{a,b,\dots,z\}$ is the usual alphabet, then

$$\mathit{mathematics}$$

would be a word.

We are also given a set $A\subseteq\Sigma^*$ of words, and we say that the words in $A$ are *valid*. ($A$ may be infinite.)

In the *decision problem* associated to $A$, we are given as **input** a word $x$ in this alphabet. As **output** we say Yes or No, depending on whether the word is in $A$ or not (i.e., whether it is “valid”).

We are interested in whether there is an *algorithm* that allows us to decide the right answer.

For example:

- $A$ may be the set of *prime numbers*. As input we are given a positive integer $n$. An algorithm to see whether $n$ is prime consists in dividing $n$ by each $m$ with $1<m<n$ and checking whether the answer is ever exact. If it is (i.e., if we find some number $m$ that divides $n$), then $n$ is not prime, and we return No. If we do not find such a number $m$, then $n$ is prime and we return Yes.
- $A$ may be the set of propositional tautologies. As input we are given a propositional formula $\phi$. An algorithm to see if $\phi$ is a tautology consists in writing down the truth table for $\phi$ and seeing if we always get truth value $T$. If so, we return Yes.
- The “alphabet” could consist of the names of American cities. $A$ may be the set of pairs $(c,d)$ of cities for which we can travel by plane from $c$ to $d$ spending under \$300. Websites like Expedia try to implement algorithms that allow us to solve the associated decision problem.
- Using the usual alphabet together with the usual keyboard symbols, let $A$ be the set of computer programs (in some fixed programming language) that eventually stop when executed. In this case there is no algorithm to see whether a given computer program is in $A$. In computer science this is called the *halting problem*.
- If $A$ is the set of mathematical statements $\phi$ that are true, there is no algorithm either. This means that there is no mechanical way of checking, given a mathematical statement $\phi$, whether it holds. ($\phi$ could be “$2+2=4$,” or Fermat’s last theorem, or “there are integers $x$ and $y$ such that $x^2=2y^2+1$,” or …)
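The trial-division algorithm from the first example can be sketched as follows (a minimal Python sketch; note for later that it takes roughly $n$ steps, which is *exponential* in the number of digits of $n$):

```python
# Trial division, a direct transcription of the algorithm described above:
# divide n by every m with 1 < m < n and look for an exact division.
def is_prime(n):
    """Return True iff the positive integer n is prime."""
    if n < 2:
        return False
    for m in range(2, n):
        if n % m == 0:   # found a divisor m: n is not prime, answer "No"
            return False
    return True          # no divisor found: n is prime, answer "Yes"

print(is_prime(13))  # True
print(is_prime(15))  # False: 15 = 3 * 5
```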

Of course, one way of verifying the truth of $\phi$ could be to find a proof of $\phi$ or a proof of $\lnot\phi$. One could simply have a program that writes down all possible proofs, and if it ever writes down a proof of either $\phi$ or $\lnot\phi$, then we stop and have our answer.

However, this method does not work in general. A remarkable result of Gödel from 1931 says that, *no matter what (consistent, mechanically listable) axioms we begin with*, as long as they are strong enough to express basic arithmetic, there are mathematical truths that can neither be proved nor disproved from them. This is his famous *Incompleteness theorem*.

If an algorithm exists for the decision problem corresponding to a set $A$, we say that $A$ is *decidable*. Otherwise we call it *undecidable*.

Thus, modulo our rather vague definition of “algorithm,” we have the first main classification of (decision) problems as decidable or undecidable. Our next goal will be to classify further the decidable problems according to the (time) complexity of their algorithms and distinguish between those for which an efficient (feasible) algorithm is possible, *the tractable problems*, and those for which it is not, the *intractable* problems.

There is a mathematical way of making precise the notion of algorithm, using “Turing machines.” We will not need the precise definition in what follows. For information on Alan Turing, please visit http://www.mathcomp.leeds.ac.uk/turing2012/

Other approaches are possible, and go by the common name of “models of computation.” It is a remarkable fact that the very different models that have been considered are all actually equivalent.

**2. The Class $P$**

The study of efficiency of algorithms is also called *computational complexity*. We will concentrate here on time complexity (how long it takes to solve the problem), but one can also discuss, for example, space complexity (how much “memory” storage is used in solving it). A natural measure of time complexity for us is the number of steps an algorithm requires to solve a given problem.

Since different problems require different amounts of input, in order to compare the complexities of different algorithms and problems, we must measure complexity as a function of the input size. In addition, with computer speeds increasing rapidly, small instances of a problem can usually be solved no matter how difficult the problem is. For this reason, and also to eliminate the effect of “overhead” time, we will consider only the *asymptotic* complexity as the size of the input increases. First we recall the “big-$O$ notation,” which gives a rigorous way of comparing asymptotic growth rates.

Definition 1 Given two functions $f,g:\mathbb{N}\to\mathbb{N}$, we define

$$f=O(g) \iff \text{there are constants } C, N \text{ such that } f(n)\le C\,g(n) \text{ for all } n\ge N.$$

For example, if $p$ is a polynomial, then $p(n)=O(n^k)$ for large enough $k$.
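For instance (a worked example, with the polynomial chosen just for illustration): to see that $p(n)=3n^2+5n+7$ is $O(n^2)$, note that for all $n\ge 1$,

$$3n^2+5n+7\le 3n^2+5n^2+7n^2=15n^2,$$

so the constants $C=15$ and $N=1$ witness $p=O(n^2)$.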

Definition 2 Given any $f:\mathbb{N}\to\mathbb{N}$, we let $\mathrm{TIME}(f)$ consist of all decision problems which can be decided by an algorithm in time $Cf$ for some constant $C$.

More precisely, a decision problem $A$ is in $\mathrm{TIME}(f)$ if there is a constant $C$ and an algorithm $\mathcal{A}$ on some alphabet $\Sigma$ such that for $x\in\Sigma^*$:

- on input $x$ with $x\in A$, $\mathcal{A}$ halts in at most $Cf(|x|)$ steps with output Yes;
- on input $x$ with $x\notin A$, $\mathcal{A}$ halts in at most $Cf(|x|)$ steps with output No

(here $|x|$ = length of the word $x$).

It is important to remark here that, even though it is standard convention, the notation $f=O(g)$ is somewhat imprecise: there are many different functions $f$, all of which are $O(g)$ for the same $g$, so saying $f_1=O(g)$ and $f_2=O(g)$ does not mean that $f_1=f_2$. The symbol $=$ in “$f=O(g)$” is just a convention, and should not be confused with any kind of actual equality.

What sort of growth rate should we require of an algorithm to consider it manageable? After all, we must expect the time to increase somewhat with the input size. It turns out that the major distinctions in complexity are between “polynomial time” algorithms and faster-growing ones, such as exponentials.

Definition 3 A decision problem is in the class $P$ (or it is *polynomially decidable*) if it is in $\mathrm{TIME}(n^k)$ for some $k$, i.e., it can be decided in polynomial time.

Problems in the class $P$ are considered *tractable* (efficiently decidable) and the others *intractable*. It is clear that the class $P$ provides an upper limit for problems that can be algorithmically solved in realistic terms. If a problem is in $P$, however, it does not necessarily mean that an algorithm for it can be practically implemented; for example, it could have time complexity of the order of $n^{100}$, or of the order of $n^2$ but with enormous coefficients. However, most natural problems that have been shown to be polynomially decidable have been found to have efficient (e.g., very low degree) algorithms. Moreover, the class of problems in $P$ behaves well mathematically, and is independent of the model of computation, since any two formal models of computation can be mutually simulated within polynomial time.

It is important to remark that time complexity, as explained here, is a worst case analysis. If a problem $A$ is intractable, then there is no polynomial time algorithm which for all $n$ and all inputs of length $n$ will decide membership in $A$. But one can still search for approximate algorithms that work well on the average or for most practical (e.g., small) instances of the problem or give the correct answer with high probability, etc.

Example 1 LINEAR PROGRAMMING is the decision problem given by: An input is an integer matrix $(a_{ij})$ for $1\le i\le m$ and $1\le j\le n$, along with integer vectors $b=(b_1,\dots,b_m)$ and $c=(c_1,\dots,c_n)$, and an integer $K$.

The question we want to decide is whether there is a rational vector $x=(x_1,\dots,x_n)$ such that $\sum_{j=1}^n a_{ij}x_j\le b_i$ for $i=1,\dots,m$, and $\sum_{j=1}^n c_jx_j\ge K$.

LINEAR PROGRAMMING turns out to be in $P$. This is a difficult result. It was proved by Khachian in 1979.
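Finding such a vector $x$ in polynomial time is the hard part; merely *checking* a given candidate $x$ is a matter of arithmetic. A minimal Python sketch (on an invented toy instance, using exact rational arithmetic to avoid rounding issues):

```python
from fractions import Fraction as F

# Check a candidate solution to a LINEAR PROGRAMMING instance:
# does the rational vector x satisfy sum_j a[i][j]*x[j] <= b[i] for all i,
# and sum_j c[j]*x[j] >= K?
def feasible(a, b, c, K, x):
    rows_ok = all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
                  for row, bi in zip(a, b))
    return rows_ok and sum(cj * xj for cj, xj in zip(c, x)) >= K

a = [[1, 1], [2, 1]]   # constraints: x1 + x2 <= 4 and 2*x1 + x2 <= 5
b = [4, 5]
c = [3, 2]             # question: is 3*x1 + 2*x2 >= K attainable?
print(feasible(a, b, c, 9, [F(1), F(3)]))   # True: 1+3<=4, 2+3<=5, 3+6>=9
print(feasible(a, b, c, 12, [F(1), F(3)]))  # False: 3+6 = 9 < 12
```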

**3. The Class $NP$ and the $P=NP$ Problem**

Although a large class of problems have been classified as tractable or intractable, there is a vast collection of decision problems, many of them of great practical importance, that are widely assumed to be intractable but no one until now has been able to demonstrate it. These are the so-called *$NP$-complete problems*. In order to introduce these problems, we must first define the class $NP$ of *non-deterministic polynomial* decision problems.

Definition 4 Let $A$ be a decision problem. We say that $A$ is in the class $NP$ if there is an algorithm $\mathcal{A}$ on an alphabet $\Sigma$ and a polynomial $p$ such that for any $x\in\Sigma^*$: $x\in A$ iff there is $y\in\Sigma^*$ such that $|y|\le p(|x|)$ and $\mathcal{A}$ on input $(x,y)$ stops with output Yes after at most $p(|x|)$ many steps.

In other words, $A\in NP$ iff there is a “guess” $y$, of length bounded by a polynomial in the length of $x$, such that $x$ together with $y$ pass a polynomial-time acceptance test. (This can also be viewed as a non-deterministic polynomial time algorithm.)

Example 2 SATISFIABILITY, the decision problem of whether a propositional formula (given, say, as a set of clauses) is satisfiable, is in $NP$, since if we guess a truth assignment, we can verify that it satisfies the given set of clauses in polynomial time.
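The acceptance test can be sketched as follows (a minimal Python sketch; the encoding of literals by signed integers is just a convenient convention assumed here, not part of the definition):

```python
# "Guess and check" for SATISFIABILITY: a formula in conjunctive normal
# form is a list of clauses; each clause is a list of nonzero integers,
# where k stands for variable k and -k for its negation.

def check(clauses, assignment):
    """Polynomial-time acceptance test: does the guessed truth
    assignment (a dict variable -> bool) satisfy every clause?"""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or x2) and ((not x1) or x2)
clauses = [[1, 2], [-1, 2]]
print(check(clauses, {1: True, 2: True}))   # True: this guess works
print(check(clauses, {1: True, 2: False}))  # False: second clause fails
```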

Example 3 TRAVELING SALESMAN is the following decision problem: We are given a finite collection of cities $c_1,\dots,c_n$, together with information about the distance between them, $d(c_i,c_j)$ being the distance between $c_i$ and $c_j$. We are also given a bound $B$.

We want to decide whether there is a way to travel through all the cities, returning to the original point, without actually traveling more than distance $B$, i.e., whether there is a way to order the cities, say $c_{i_1},c_{i_2},\dots,c_{i_n}$, so that we have

$$d(c_{i_1},c_{i_2})+d(c_{i_2},c_{i_3})+\dots+d(c_{i_{n-1}},c_{i_n})+d(c_{i_n},c_{i_1})\le B.$$
It turns out that TRAVELING SALESMAN is also in $NP$, since if we guess the order of the cities, we can calculate in polynomial time whether the length of the resulting tour is at most $B$.
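The corresponding acceptance test, sketched in Python (on an invented four-city toy instance):

```python
# Polynomial-time check for TRAVELING SALESMAN: given a guessed ordering
# of the cities, add up the length of the closed round trip and compare
# it with the bound B.

def tour_length(d, order):
    """Total length of the closed tour visiting cities in `order`,
    where d[i][j] is the distance between cities i and j."""
    n = len(order)
    return sum(d[order[i]][order[(i + 1) % n]] for i in range(n))

def check(d, order, B):
    return tour_length(d, order) <= B

# Four cities with symmetric distances (an invented instance).
d = [[0, 1, 4, 2],
     [1, 0, 1, 5],
     [4, 1, 0, 1],
     [2, 5, 1, 0]]
print(check(d, [0, 1, 2, 3], 6))  # True: tour length 1+1+1+2 = 5 <= 6
print(check(d, [0, 2, 1, 3], 6))  # False: tour length 4+1+5+2 = 12 > 6
```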

In fact a vast number of problems like these, which involve some kind of search, are in $NP$. Clearly $P\subseteq NP$. The most famous problem in theoretical computer science is whether

$$P=NP,$$

that is, whether any problem with an efficient nondeterministic algorithm also has an efficient deterministic algorithm. This is known, unsurprisingly, as the *$P$ vs. $NP$ Problem*. In 2000 the Clay Mathematics Institute included the $P$ vs. $NP$ Problem as one of its seven Millennium Prize Problems, offering \$1,000,000 for its solution. The prevailing assumption today is that $P\ne NP$.

There are many excellent resources on the web with detailed information on this problem. I highly recommend Scott Aaronson’s blog, Shtetl-Optimized, http://www.scottaaronson.com/blog/ See for example his lectures on “Great Ideas in Theoretical Computer Science.”

**4. $NP$-Complete Problems**

We can get a better understanding of the $P=NP$ problem by discussing the notion of an $NP$-complete problem. The $NP$-complete problems form sort of a “core” of the “most difficult” problems in $NP$. The key notion is the following:

Definition 5 A (total) function $f:\Sigma_1^*\to\Sigma_2^*$, where $\Sigma_1,\Sigma_2$ are finite alphabets, is *polynomial-time computable* if there is an algorithm $\mathcal{A}$ on a finite alphabet $\Sigma$, and a polynomial $p$, such that for every input $x\in\Sigma_1^*$, $\mathcal{A}$ terminates in at most $p(|x|)$ steps with output $f(x)$.

Unsurprisingly, a *polynomial-time reduction* is a computable reduction which is in addition polynomial-time computable.

Definition 6 A decision problem $A$ in some alphabet $\Sigma_1$ is *$NP$-complete* if it is in $NP$, and for every problem $B$ in $NP$ (in some alphabet $\Sigma_2$) there is a polynomial-time reduction of $B$ to $A$. (That is, there is a polynomial-time computable function $f:\Sigma_2^*\to\Sigma_1^*$ such that $x\in B$ if and only if $f(x)\in A$.)

An $NP$-complete problem is in some sense a hardest possible problem in $NP$. For example, it is clear that if $A$ is in $P$ and $B$ can be polynomial-time reduced to $A$, then $B$ also is in $P$. It therefore follows that if *any* $NP$-complete problem is in $P$, then $P=NP$. Thus the $P=NP$ question is equivalent to the question of whether any given $NP$-complete problem is in $P$.

It is clear from the definition that all $NP$-complete problems are “equivalent” in some sense, since any one of them can be reduced to any other by a polynomial-time computable function. The first example of an $NP$-complete problem is due to Cook (1971) and, independently, Levin (1973).

Theorem 7 (Cook, Levin) SATISFIABILITY is $NP$-complete.

In a sense, this suggests that there may not be any way of solving SATISFIABILITY essentially better than the very slow algorithm of writing down truth tables. However, there are polynomial-time algorithms that solve SATISFIABILITY in many (but not all) situations. This is a problem with many practical applications, and therefore “SAT-solving engines” that are “in general” fast are of great interest.

Karp showed in 1972 that TRAVELING SALESMAN and many other combinatorial problems are $NP$-complete, and since that time hundreds of others have been discovered in many areas of mathematics and computer science.

*Typeset using LaTeX2WP.*

[**Edit (Feb. 28, 2012):** See also *Search and Destroy* (in Spanish), by Javier Moreno.]