This note is based on lecture notes for the Caltech course Math 6c, prepared with A. Kechris and M. Shulman.
We would like to have a mechanical procedure (algorithm) for checking whether a given set of formulas logically implies another, that is, given $\phi_1,\dots,\phi_n,\psi$, whether
$(\phi_1\wedge\dots\wedge\phi_n)\rightarrow\psi$
is a tautology (i.e., it is true under all truth-value assignments). This happens iff
$\phi_1\wedge\dots\wedge\phi_n\wedge\neg\psi$
is unsatisfiable.
So it suffices to have an algorithm to check the (un)satisfiability of a single propositional formula. The method of truth tables gives one such algorithm. We will now develop another method which is often (with various improvements) more efficient in practice.
It will also be an example of a formal calculus. By that we mean a set of rules for generating a sequence of strings in a language. Formal calculi usually start with a certain string or strings as given, and then allow the application of one or more “rules of production” to generate other strings.
A formula $\phi$ is in conjunctive normal form iff it has the form
$\phi = C_1\wedge\dots\wedge C_n,$
where each $C_i$ has the form
$C_i = l_{i,1}\vee\dots\vee l_{i,m_i},$
and each $l_{i,j}$ is either a propositional variable, or the negation of a propositional variable. So $\phi$ is in conjunctive normal form iff it is a conjunction of disjunctions of variables and negated variables. The common terminology is to call a propositional variable or its negation a literal.
Suppose $\phi$ is a propositional statement which we want to test for satisfiability. First we note (without proof) that although there is no known efficient algorithm for finding a formula $\phi^*$ in cnf (conjunctive normal form) equivalent to $\phi$, it is not hard to show that there is an efficient algorithm for finding a formula $\phi'$ in cnf such that:
$\phi$ is satisfiable iff $\phi'$ is satisfiable.
(But, in general, $\phi'$ has more variables than $\phi$.)
So from now on we will only consider formulas $\phi$ in cnf, and the Resolution Method applies to such formulas only. Say
$\phi = \bigwedge_{i=1}^{n} (l_{i,1}\vee\dots\vee l_{i,m_i}),$
with the $l_{i,j}$ literals. Since order and repetition in each conjunct
$l_{i,1}\vee\dots\vee l_{i,m_i} \qquad (1)$
are irrelevant, we can replace each conjunct by the corresponding set of literals
$C_i = \{l_{i,1},\dots,l_{i,m_i}\}.$
Such a set of literals is called a clause. It corresponds to the formula (1). So the formula $\phi$ above can be simply written as a set of clauses (again since the order of the conjunctions is irrelevant):
$S = \{C_1,\dots,C_n\}.$
Satisfiability of $\phi$ means then simultaneous satisfiability of all of its clauses $C_1,\dots,C_n$, i.e., finding a valuation $v$ which makes $C_i$ true for each $i$, i.e., which for each $i$ makes some literal $l_{i,j}$ true.
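To make this concrete, here is a small Python sketch of the clause representation. The encoding by integers (a positive integer $k$ for the variable $p_k$, and $-k$ for its negation) is our own convention, not part of the notes.

```python
# A clause is a frozenset of literals; a cnf formula is a set of clauses.
# Literal k stands for the variable p_k, and -k for its negation.

def satisfies(valuation, clauses):
    """valuation: dict mapping variable index -> bool.
    A valuation satisfies a set of clauses iff it makes
    some literal true in every clause."""
    return all(any(valuation[abs(l)] == (l > 0) for l in clause)
               for clause in clauses)

# (p1 or not p2) and p2, written as a set of clauses:
S = {frozenset({1, -2}), frozenset({2})}
print(satisfies({1: True, 2: True}, S))   # True
print(satisfies({1: False, 2: True}, S))  # False: the first clause fails
```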
From now on we will deal only with a set of clauses $S$. It is possible to consider also infinite sets $S$, but we will not do that here.
Satisfying $S$ means (again) that there is a valuation $v$ which satisfies all $C\in S$, i.e., for each clause $C\in S$ there is some literal $l\in C$ which $v$ makes true.
Notice that if the set of clauses $S$ is associated as above to $\phi$ (in cnf) and $S'$ to $\phi'$, then $S\cup S'$ is associated to $\phi\wedge\phi'$.
By convention we also have the empty clause $\emptyset$, which contains no literals. The empty clause is (by definition) unsatisfiable, since for a clause to be satisfied by a valuation, there has to be some literal in the clause which it makes true, but this is impossible for the empty clause, which has no literals. For a literal $l$, let $\bar{l}$ denote its “conjugate”, i.e., $\bar{l}=\neg p$ if $l=p$, and $\bar{l}=p$ if $l=\neg p$.
Definition 1 Suppose now $C_1, C_2, C$ are three clauses. We say that $C$ is a resolvent of $C_1, C_2$ if there is a literal $l$ such that $l\in C_1$, $\bar{l}\in C_2$, and
$C = (C_1\setminus\{l\})\cup(C_2\setminus\{\bar{l}\}).$
We allow here the case $C=\emptyset$, i.e., $C_1=\{l\}$, $C_2=\{\bar{l}\}$.
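Definition 1 translates directly into code. In the sketch below literals are nonzero integers ($k$ for the variable $p_k$, $-k$ for its conjugate); this encoding is our own convention.

```python
def resolve(c1, c2, l):
    """Resolvent of clauses c1 and c2 on the literal l: requires l in c1 and
    its conjugate -l in c2; returns (c1 minus {l}) union (c2 minus {-l})."""
    assert l in c1 and -l in c2, "l must occur in c1 and its conjugate in c2"
    return (c1 - {l}) | (c2 - {-l})

# Resolving {p1, p2} and {not p1, p3} on p1 gives {p2, p3}:
print(resolve(frozenset({1, 2}), frozenset({-1, 3}), 1) == frozenset({2, 3}))  # True
# The empty clause arises from resolving {p1} and {not p1}:
print(resolve(frozenset({1}), frozenset({-1}), 1) == frozenset())              # True
```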
Proposition 2 If $C$ is a resolvent of $C_1, C_2$, then any assignment of truth values that makes both $C_1$ and $C_2$ true also makes $C$ true. (We view here the clauses as formulas.)
Proof: Suppose a valuation $v$ satisfies both $C_1, C_2$, and let $l$ be the literal used in the resolution. If $v$ makes $l$ false, then, since $v$ satisfies $C_1$, we clearly have that $v$ makes some literal of $C_1\setminus\{l\}$ true, and so it makes $C$ true. If $v$ makes $l$ true, then $v$ makes $\bar{l}$ false, so, since $v$ satisfies $C_2$, it makes some literal of $C_2\setminus\{\bar{l}\}$ true, and hence $C$ true. $\Box$
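For small clauses, Proposition 2 can also be checked mechanically by running over all valuations. The sketch below does this exhaustively; literals are encoded as nonzero integers ($k$ for $p_k$, $-k$ for its negation), a convention of ours.

```python
from itertools import product

def entails(premises, clause, variables):
    """Return True iff every valuation of `variables` that makes all
    premise clauses true also makes `clause` true."""
    def true_under(v, c):
        return any(v[abs(l)] == (l > 0) for l in c)
    for bits in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, bits))
        if all(true_under(v, c) for c in premises) and not true_under(v, clause):
            return False
    return True

# The resolvent {p2, p3} of {p1, p2} and {not p1, p3} is entailed by them:
print(entails([{1, 2}, {-1, 3}], {2, 3}, [1, 2, 3]))  # True
```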
Definition 3 Let now $S$ be a set of clauses. A proof by resolution from $S$ is a sequence of clauses $C_1,\dots,C_n$ such that each $C_i$ is either in $S$ or else it is a resolvent of some $C_j, C_k$ with $j,k<i$. We call $C_n$ the goal or conclusion of the proof. If $C_n=\emptyset$, we call this a proof by resolution of a contradiction from $S$, or simply a refutation of $S$.
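Definition 3 also suggests a simple mechanical proof checker, sketched below. Clauses are frozensets of integer literals ($-k$ being the conjugate of $k$, our own encoding), and the example set of clauses is hypothetical, chosen only for illustration.

```python
def is_resolution_proof(proof, S):
    """Check Definition 3: each clause of `proof` (a list of frozensets of
    integer literals) is in S or is a resolvent of two earlier clauses."""
    seen = []
    for c in proof:
        ok = c in S or any(
            -l in b and c == (a - {l}) | (b - {-l})
            for a in seen for b in seen for l in a
        )
        if not ok:
            return False
        seen.append(c)
    return True

def is_refutation(proof, S):
    """A refutation is a proof by resolution whose conclusion is empty."""
    return bool(proof) and proof[-1] == frozenset() and is_resolution_proof(proof, S)

# A hypothetical unsatisfiable set (p1, p1 -> p2, not p2) and a refutation:
S = {frozenset({1}), frozenset({-1, 2}), frozenset({-2})}
proof = [frozenset({1}), frozenset({-1, 2}), frozenset({2}),
         frozenset({-2}), frozenset()]
print(is_refutation(proof, S))  # True
```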
Example 2 Let . Then the following is a refutation of :
(resolvent of (by ))
(resolvent of (by ))
(resolvent of (by )).
Example 3 Let . The following is a refutation of :
(This proof is not unique. For example, we could move before and get another proof.)
The goal of proofs by resolution is to prove unsatisfiability of a set of clauses. The following theorem tells us that they achieve their goal.
Theorem 4 A set of clauses $S$ is unsatisfiable iff there is a refutation of $S$.
The argument below uses the technique of mathematical induction, that we will study later. You do not need to read this proof now, but I am including it here so we can use it when the time comes. Feel free to stop by my office if you read the argument and have questions about it.
Proof: ($\Leftarrow$): This is usually called “Soundness of the proof system”. (“Soundness” is another word for “correctness”.)
Let $C_1,\dots,C_n$ be a proof by resolution from $S$ with $C_n=\emptyset$. Then we can easily prove, by induction on $i$ and using Proposition 2, that any assignment making all the clauses of $S$ true must also make each $C_i$ true. But $C_n=\emptyset$ is unsatisfiable, and therefore $S$ must also be unsatisfiable.
($\Rightarrow$): This is usually called “Completeness of the proof system”.
First we can assume that $S$ has no clause which contains, for some literal $l$, both $l$ and $\bar{l}$ (since such a clause can be dropped from $S$ without affecting its satisfiability).
Notation. If $l$ is a literal, let $S_l$ be the set of clauses resulting from $S$ by canceling every occurrence of $\bar{l}$ within a clause of $S$ and eliminating all clauses of $S$ containing $l$ (this effectively amounts to setting $l$ to true).
Example. Let . Then
Note that $l$ and $\bar{l}$ do not occur in $S_l$ or $S_{\bar{l}}$. Note also that if $S$ is unsatisfiable, so are $S_l$ and $S_{\bar{l}}$. Because if $v$ is a valuation satisfying $S_l$, then, since $S_l$ does not contain $l$ or $\bar{l}$, we can assume that $v$ does not assign a value to the variable of $l$. Then the valuation $v'$ which agrees on all other variables with $v$ and makes $l$ true satisfies $S$. Similarly for $S_{\bar{l}}$.
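With clauses encoded as frozensets of nonzero integers ($k$ for $p_k$, $-k$ for its negation; a convention of ours), the passage from $S$ to $S_l$ is a one-line operation:

```python
def set_true(S, l):
    """The set S_l of the notes: eliminate every clause containing l, and
    cancel every occurrence of the conjugate -l in the remaining clauses
    (this amounts to making the literal l true)."""
    return {c - {-l} for c in S if l not in c}

# A hypothetical example:
S = {frozenset({1, 2}), frozenset({-1, 3}), frozenset({-2})}
print(set_true(S, 1) == {frozenset({3}), frozenset({-2})})  # True
```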
So assume $S$ is unsatisfiable, in order to construct a refutation of $S$. Say that all the propositional variables occurring in clauses in $S$ are among $p_1,\dots,p_n$. We prove then the result by induction on $n$. In other words, we show that for each $n$, if $S$ is a finite set of clauses containing variables among $p_1,\dots,p_n$ and $S$ is unsatisfiable, then there is a refutation of $S$.
Base case $n=0$: In this case, the only possible clause is $\emptyset$, so we must have $S=\{\emptyset\}$, and hence we have the refutation $\emptyset$.
Assume this has been proved for sets of clauses with variables among $p_1,\dots,p_n$, and consider a set $S$ of clauses with variables among $p_1,\dots,p_{n+1}$. Let $l=p_{n+1}$.
Then $S_l$ and $S_{\bar{l}}$ are also unsatisfiable and do not contain $p_{n+1}$, so by the induction hypothesis there is a refutation $C_1,\dots,C_k$ for $S_l$ and a refutation $D_1,\dots,D_m$ for $S_{\bar{l}}$.
Consider first $C_1,\dots,C_k$. Each clause $C_i$ is in $S_l$ or comes as a resolvent of two previous clauses. Define then recursively clauses $C_1',\dots,C_k'$, so that either $C_i'=C_i$ or $C_i'=C_i\cup\{\bar{l}\}$.
If $C_i\in S_l$, then it is either in $S$, and then we put $C_i'=C_i$, or else $C_i$ is obtained from some $C\in S$ by dropping $\bar{l}$, i.e., $C=C_i\cup\{\bar{l}\}$. Then put $C_i'=C$.
The other case is where, for some $j_1,j_2<i$, we have that $C_i$ is a resolvent of $C_{j_1}, C_{j_2}$, and thus by induction $C_{j_1}', C_{j_2}'$ are already defined. The variable used in this resolution is among $p_1,\dots,p_n$, so we can use this variable to resolve from $C_{j_1}', C_{j_2}'$ to get a clause $C_i'$ with $C_i'=C_i$ or $C_i'=C_i\cup\{\bar{l}\}$.
Thus $C_k'=C_k=\emptyset$ or $C_k'=\{\bar{l}\}$, and $C_1',\dots,C_k'$ is a proof by resolution from $S$. If $C_k'=\emptyset$ we are done, so we can assume that $C_k'=\{\bar{l}\}$, i.e., $C_1',\dots,C_k'$ is a proof by resolution of $\{\bar{l}\}$ from $S$. Similarly, working with $D_1,\dots,D_m$, we can define $D_1',\dots,D_m'$, a proof by resolution from $S$ with $D_m'=\emptyset$ or $D_m'=\{l\}$. If $D_m'=\emptyset$ we are done; otherwise $D_1',\dots,D_m'$ is a proof by resolution of $\{l\}$ from $S$. Then
$C_1',\dots,C_k',D_1',\dots,D_m',\emptyset$
is a refutation from $S$, the final $\emptyset$ being a resolvent of $C_k'=\{\bar{l}\}$ and $D_m'=\{l\}$. This completes the proof. $\Box$
The following is a refutation from :
It leads to the following “proof” by resolution from :
Similarly, the refutation from :
leads to the following “proof” by resolution from :
Finally, putting the two “proofs” together, we obtain a refutation of $S$ by adding at the end, as a last line, the empty clause $\emptyset$ (a resolvent of the two goals $\{\bar{l}\}$ and $\{l\}$).
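The recursive procedure underlying the completeness argument can be sketched as a satisfiability test. The integer encoding of literals ($k$ for $p_k$, $-k$ for its conjugate) and the example clauses are our own, chosen only for illustration.

```python
def unsat(S, variables):
    """Recursive test following the completeness argument: assuming no
    clause of S contains a literal together with its conjugate, S is
    unsatisfiable iff both S_l and S_{conjugate of l} are, where l is the
    last listed variable. Base case: no variables left, so S is
    unsatisfiable iff it contains the empty clause."""
    if not variables:
        return frozenset() in S
    l, rest = variables[-1], variables[:-1]

    def set_true(S, l):
        # The S_l of the notes: drop clauses with l, cancel the conjugate -l.
        return {c - {-l} for c in S if l not in c}

    return unsat(set_true(S, l), rest) and unsat(set_true(S, -l), rest)

# Hypothetical example: p1, p1 -> p2, and not p2 cannot all hold.
S = {frozenset({1}), frozenset({-1, 2}), frozenset({-2})}
print(unsat(S, [1, 2]))  # True
```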
Remark. Notice that going from $n$ to $n+1$ variables “doubles” the length of the proof, so this gives an exponential bound on the length of the refutation produced by this method.
Remark. The method of refutation by resolution is non-deterministic: there is no unique way to arrive at a refutation. Various strategies have been devised for implementing it.
One is by following the recursive procedure used in the proof of Theorem 4. Another is by brute force. Start with a finite set of clauses $S$, with variables among $p_1,\dots,p_n$. Let $S_0=S$. Let $S_1$ be $S_0$ together with all clauses obtained by resolving all possible pairs in $S_0$, $S_2$ be $S_1$ together with all clauses obtained by resolving all possible pairs from $S_1$, etc. Since any set of clauses whose variables are among $p_1,\dots,p_n$ cannot have more than $4^n$ elements (each variable occurs in a given clause positively, negatively, both ways, or not at all), this process stabilizes in at most $4^n$ many steps. Put $S^*=\bigcup_k S_k$. If $\emptyset\in S^*$, then we can produce a refutation proof of about that size (i.e., $|S^*|$). Otherwise, $\emptyset\notin S^*$ and $S$ is satisfiable.
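The brute-force strategy can be sketched as follows, again with clauses as frozensets of nonzero integers ($k$ for $p_k$, $-k$ for its conjugate; our own encoding). This is only a naive illustration; the saturation loop is exponential in the worst case, as the remark above indicates.

```python
def saturate(S):
    """Close a finite set of clauses under resolution: the chain
    S_0, S_1, ... of the text, stopping when no new resolvent appears."""
    clauses = set(S)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                for l in a:
                    if -l in b:
                        new.add((a - {l}) | (b - {-l}))
        if new <= clauses:
            return clauses
        clauses |= new

def satisfiable(S):
    """By Theorem 4, S is satisfiable iff the empty clause never appears."""
    return frozenset() not in saturate(S)

print(satisfiable({frozenset({1}), frozenset({-1})}))        # False
print(satisfiable({frozenset({1, 2}), frozenset({-1, 2})}))  # True
```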
Other strategies are more efficient in special cases, but none are known to work in general.