For the first lecture, see here.

**II**

In this lecture, we prove:

Universality Theorem. If $N$ is a weak extender model for `$\delta$ is supercompact', and $j:N\cap V_{\gamma+1}\to N\cap V_{j(\gamma)+1}$ is elementary with $\mathrm{cp}(j)\ge\delta$, then $j\in N$.

As mentioned before, this gives us that $N$ absorbs a significant amount of strength from $V$. For example:

Lemma. Suppose that $\kappa$ is 2-huge. Then, for each $A\subseteq V_\kappa$, $V_\kappa\models$ `There is a proper class of huge cardinals witnessed by embeddings that cohere $A$'.

Hence, if $\delta<\kappa$ and $N$ is a weak extender model for `$\delta$ is supercompact', then

$N\cap V_\kappa\models$ `There is a proper class of huge cardinals.'

Here, coherence means the following: $j:V_\alpha\to V_\beta$ coheres a set $A$ iff, letting $\kappa=\mathrm{cp}(j)$, we have $j(\kappa)\le\alpha$ and $j(A\cap V_\kappa)=A\cap V_{j(\kappa)}$. Actually, we need much less. We need something like $j(A\cap V_\kappa)\cap V_{\kappa+1}=A\cap V_{\kappa+1}$, and for hugeness, this much agreement already suffices.

This methodology breaks down past $\omega$-hugeness. Then we need to change the notion of coherence, since (for example, beginning with an elementary $j:V_\lambda\to V_\lambda$) to have $j(A\cap V_\kappa)=A\cap V_{j(\kappa)}$ for $\kappa=\mathrm{cp}(j)$ is no longer a reasonable condition. But suitable modifications still work at this very high level.

The proof of the universality theorem builds on a reformulation of supercompactness in terms of extenders, due to Magidor:

Theorem (Magidor). The following are equivalent:

1. $\delta$ is supercompact.
2. For all $\lambda>\delta$ and all $a\in V_\lambda$, there are $\bar\delta,\bar\lambda<\delta$ and $\bar a\in V_{\bar\lambda}$, and an elementary $j:V_{\bar\lambda+1}\to V_{\lambda+1}$ such that:

- $\mathrm{cp}(j)=\bar\delta$ and $j(\bar\delta)=\delta$.
- $\bar\delta<\bar\lambda$ and $j(\bar a)=a$.

The proof is actually a straightforward reflection argument.

**Proof.** Suppose that item 2. fails, as witnessed by $\lambda$ and $a$. Pick a normal fine $U$ on $P_\delta(\lambda')$, where $\lambda'=|V_{\lambda+1}|$, and consider

$j:V\to M\cong\mathrm{Ult}(V,U)$.

Then $\mathrm{cp}(j)=\delta$, $\lambda'<j(\delta)$, and $(V_{\lambda+1})^M=V_{\lambda+1}$. But then $\lambda,a\in V_{j(\delta)}^M$, and, by elementarity, $j(\lambda),j(a)$ are counterexamples to item 2. in $M$ with respect to $j(\delta)$. However, $j\restriction V_{\lambda+1}\in M$ (since $M$ is closed under $\lambda'$-sequences), and it witnesses item 2. in $M$ for $j(\lambda),j(a)$ with respect to $j(\delta)$, taking $\bar\delta=\delta$, $\bar\lambda=\lambda$, and $\bar a=a$. Contradiction.
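Schematically (a recapitulation in the notation of the proof, not an additional claim), the restriction of the ultrapower embedding is itself a witness inside $M$:

$$j\restriction V_{\lambda+1}\;\colon\; V_{\lambda+1}\longrightarrow (V_{j(\lambda)+1})^M,\qquad \mathrm{cp}\bigl(j\restriction V_{\lambda+1}\bigr)=\delta,\quad \delta\mapsto j(\delta),\quad a\mapsto j(a),$$

and $j\restriction V_{\lambda+1}\in M$ precisely because $M$, being an ultrapower by a normal fine measure on $P_\delta(\lambda')$, is closed under $\lambda'$-sequences.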

Assume item 2. For any $\lambda\ge\delta$ we need to find a normal fine measure on $P_\delta(\lambda)$. Fix $\lambda$, and let $a=\lambda$ and $\lambda'=\lambda+\omega$. Let $j:V_{\bar\lambda'+1}\to V_{\lambda'+1}$ be an embedding as in item 2. for $\lambda',a$, say with $j(\bar\lambda)=\lambda$. Use $j$ to define a normal fine $U$ on $P_{\bar\delta}(\bar\lambda)$ by

$X\in U$ iff $j[\bar\lambda]\in j(X)$.

Note that any $X\subseteq P_{\bar\delta}(\bar\lambda)$ is in $V_{\bar\lambda+2}\subseteq V_{\bar\lambda'+1}$, so this definition makes sense. Further, $\bar\lambda'=\bar\lambda+\omega$, so $U\in V_{\bar\lambda+3}\subseteq V_{\bar\lambda'+1}$. Hence, $U$ is in the domain of $j$, and $j(U)$ is as wanted: by elementarity, it is a normal fine measure on $P_\delta(\lambda)$.
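As a quick sanity check (a standard computation for derived measures, in the notation above, not part of the lecture): fineness holds because, for each $\alpha<\bar\lambda$,

$$j\bigl(\{\sigma\in P_{\bar\delta}(\bar\lambda):\alpha\in\sigma\}\bigr)=\{\sigma\in P_{j(\bar\delta)}(j(\bar\lambda)):j(\alpha)\in\sigma\}\ni j[\bar\lambda],$$

and normality holds because, if $f$ is regressive ($f(\sigma)\in\sigma$) on a $U$-large set, then $j(f)(j[\bar\lambda])\in j[\bar\lambda]$, so $j(f)(j[\bar\lambda])=j(\beta)$ for some $\beta<\bar\lambda$, and hence $\{\sigma:f(\sigma)=\beta\}\in U$.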

As mentioned in the previous lecture, it was expected for a while that Magidor’s reformulation would be the key to the construction of inner models for supercompactness, since it suggests which extenders need to be put in their sequence. Recent results now indicate that the construction should instead proceed directly with extenders derived from the normal fine measures. However, Magidor’s reformulation is very useful for the theory of weak extender models, thanks to the following fact, which can be seen as a strengthening of this reformulation:

Lemma. Suppose $N$ is a weak extender model for `$\delta$ is supercompact'. Suppose $\lambda>\delta$ and $a\in V_\lambda$. Then there are $\bar\delta<\bar\lambda<\delta$, $\bar a$ in $V_{\bar\lambda}$, and an elementary $j:V_{\bar\lambda+1}\to V_{\lambda+1}$ such that:

1. $\mathrm{cp}(j)=\bar\delta$, $j(\bar\delta)=\delta$, and $j(\bar a)=a$;
2. $j(N\cap V_{\bar\lambda})=N\cap V_\lambda$; and
3. $j\restriction(N\cap V_{\bar\lambda})\in N$.

Again, the proof is a reflection argument as in Magidor’s theorem, but we need to work harder to ensure items 2. and 3. The key is:

Claim. Suppose $\lambda>\delta$. Then there is a normal fine $U$ on $P_\delta(V_\lambda)$ such that, for $U$-almost every $\sigma$:

The transitive collapse of $\sigma\cap N$ is $N\cap V_{\bar\lambda}$, where $V_{\bar\lambda}$ is the transitive collapse of $\sigma$.

**Proof.** We may assume that $|V_\lambda|=\lambda$ and that this also holds in $N$. In $N$, pick a bijection $b$ between $\lambda$ and $N\cap V_\lambda$, and find $U_0$ on $P_\delta(\lambda)$ with $P_\delta(\lambda)\cap N\in U_0$ and $U_0\cap N\in N$ (such a $U_0$ exists precisely because $N$ is a weak extender model for `$\delta$ is supercompact').

It is enough to check:

$(*)$ The transitive collapse of $b[\sigma]$ is a rank initial segment of $N$, for $U_0$-almost every $\sigma$.

Once we have $(*)$, it is easy to use the bijection $b$ between $\lambda$ and $N\cap V_\lambda$ to obtain the desired measure $U$.

To prove $(*)$, work in $N$, and note that the result is now trivial since, letting $k$ be the ultrapower embedding induced by the restriction of $U_0$ to $N$, we have that $k[N\cap V_\lambda]$ collapses to $N\cap V_\lambda$, which is an initial segment of the ultrapower.

**Proof of the lemma.** The argument is now a straightforward elaboration of the proof of Magidor’s theorem, using the claim just established. Namely, in the proof of 1.$\Rightarrow$2. of the theorem, use an ultrafilter as in the claim. We need to see that (the restriction to $V_{\lambda+1}$ of) the ultrapower embedding satisfies the analogues of items 2. and 3. We begin with $\lambda'$ much larger than $\lambda$ such that $|V_{\lambda'}|=\lambda'$, and fix sets in the ultrafilter witnessing the conclusion of the claim, and a bijection $b$ between $\lambda'$ and $V_{\lambda'}$ such that $b\restriction\lambda$ is a bijection between $\lambda$ and $V_\lambda$ and $b$ maps $N\cap\lambda'$ onto $N\cap V_{\lambda'}$.

We use $b$ to transfer the ultrafilter from the claim to a measure $U$ on $P_\delta(V_{\lambda'})$ concentrating on the sets $\sigma$ described in the claim. Now let $j$ be the ultrapower embedding. We need to check that $j(N\cap V_{\bar\lambda})=N\cap V_\lambda$. The issue is that, in principle, $j(N\cap V_{\bar\lambda})$ could overspill and be larger. However, since $U$ concentrates on $\sigma$ for which the transitive collapse of $\sigma\cap N$ is $N\cap V_{\bar\lambda}$, this is not possible, because transitive collapses are computed the same way in $V$, $N$, and the ultrapower, even though $P_\delta(V_{\lambda'})\cap N$ may differ from $(P_\delta(V_{\lambda'}))^N$.

We are ready for the main result of this lecture.

**Proof of the Universality Theorem.** We will actually prove that for all cardinals $\gamma>\delta$, if

$j:H(\gamma^+)^N\to H(j(\gamma)^+)^N$ is elementary, and $\mathrm{cp}(j)\ge\delta$, then $j\in N$.

This gives the result as stated, through some coding.

Choose $\lambda$ much larger than $\gamma$, and let $a=(\gamma,j)$. Apply the strengthened Magidor reformulation, to obtain $\bar\delta<\bar\lambda<\delta$ and $\bar a=(\bar\gamma,\bar j)$, and an embedding

$\pi:V_{\bar\lambda+1}\to V_{\lambda+1}$ with $\mathrm{cp}(\pi)=\bar\delta$, $\pi(\bar\delta)=\delta$, $\pi(\bar a)=a$, $\pi(N\cap V_{\bar\lambda})=N\cap V_\lambda$, and $\pi\restriction(N\cap V_{\bar\lambda})\in N$.

Note that $\bar j:H(\bar\gamma^+)^N\to H(\bar j(\bar\gamma)^+)^N$ is elementary, with $\mathrm{cp}(\bar j)\ge\bar\delta$ and $\pi(\bar j)=j$.

It is enough to show that $\bar j\in N$, since $j=\pi(\bar j)=\bigl(\pi\restriction(N\cap V_{\bar\lambda})\bigr)(\bar j)$, and so $j\in N$ as well.

For this, we actually only need to show that $\bar j\restriction(P(\bar\gamma)\cap N)\in N$, since this fragment of $\bar j$ determines $\bar j$ completely. The advantage, of course, is that it is easier to analyze sets of ordinals.
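The coding behind this reduction is standard (a sketch; the notation $e$, $E_x$ is not from the lecture): every $x\in H(\bar\gamma^+)^N$ admits in $N$ a surjection $e:\bar\gamma\to\mathrm{TC}(\{x\})$, and is then coded by the set of ordinals

$$E_x=\{\langle\alpha,\beta\rangle : e(\alpha)\in e(\beta)\}\subseteq\bar\gamma\times\bar\gamma$$

(identified with a subset of $\bar\gamma$ via Gödel pairing). By elementarity, $\bar j(x)$ is the set coded by $\bar j(E_x)$, and decoding (forming the transitive collapse of a well-founded extensional relation) is absolute between $N$ and $V$. So knowing $\bar j$ on sets of ordinals determines all of $\bar j$.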

Let $A\in N$ with $A\subseteq\bar\gamma$, and let $\eta<\bar j(\bar\gamma)$. We need to compute in $N$ whether $\eta\in\bar j(A)$. For this, note that

$\eta\in\bar j(A)$ iff $\pi(\eta)\in\pi(\bar j(A))$.

Now, $\pi(\bar j(A))=\pi(\bar j)(\pi(A))=j(\pi(A))$, so this reduces to deciding whether $\pi(\eta)\in j(\pi(A))$, i.e., to compute $\bar j\restriction(P(\bar\gamma)\cap N)$, it suffices to know $j\restriction\{\pi(A):A\in P(\bar\gamma)\cap N\}$.

Recall that $\pi\restriction(N\cap V_{\bar\lambda})\in N$, and consider $s=\pi\restriction(P(\bar\gamma)\cap N)$. Note that $s\in N$, and $s\in H(\gamma^+)^N$. Applying $j$ to $s$, and using elementarity, we have

$j(s)(A)=j(s)(j(A))=j(s(A))=j(\pi(A))$.

But $j(A)=A$ because $A\subseteq\bar\gamma<\delta\le\mathrm{cp}(j)$ and $|A|\le\bar\gamma<\mathrm{cp}(j)$, while $s(A)=\pi(A)$ by the definition of $s$.

It follows that $j(s)(A)=j(\pi(A))$ for every $A\in P(\bar\gamma)\cap N$. Since $s\in H(\gamma^+)^N$, we have $j(s)\in N$ (simply note the range of $j$), and we are done, because we have reduced the question of whether $\eta\in\bar j(A)$ to the question of whether $\pi(\eta)\in j(s)(A)$, which $N$ can determine.
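In summary (a recapitulation of the reduction just carried out, with $s=\pi\restriction(P(\bar\gamma)\cap N)$ as above):

$$\eta\in\bar j(A)\iff\pi(\eta)\in\pi(\bar j(A))=\pi(\bar j)(\pi(A))=j(\pi(A))=j(s)(A),$$

and the parameters needed to evaluate the right-hand side, namely $\pi\restriction(N\cap V_{\bar\lambda})$, $s$, and $j(s)$, all belong to $N$.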

Note how the Universality Theorem suggests that the construction of models for supercompactness using Magidor’s reformulation runs into difficulties; namely, if $\delta$ is supercompact, we have many extenders with critical point $\bar\delta<\delta$ and sending $\bar\delta$ to $\delta$, and we are now producing new extenders above $\delta$, that should somehow also be accounted for in the model.

A nice application of universality is the dichotomy theorem for $\mathrm{HOD}$ mentioned at the end of last lecture. If $\mathrm{HOD}$ is a weak extender model for `$\delta$ is supercompact', we obtain the following:

Corollary. There is no sequence of (non-trivial) elementary embeddings $j_n:\mathrm{HOD}\to\mathrm{HOD}$ with well-founded limit.

It follows that there is a $\Sigma_2$-definable ordinal such that any embedding fixing this ordinal is the identity! This is because $\mathrm{HOD}=\mathrm{HOD}_T$, where $T$ is the $\Sigma_2$-theory in $V$ of the ordinals.

In particular, there is no (non-trivial) elementary $j:\mathrm{HOD}\to\mathrm{HOD}$ with $\mathrm{cp}(j)\ge\delta$. Note that the corollary and this fact fail if $\mathrm{HOD}$ is replaced by an arbitrary weak extender model.

The question of whether there can actually be embeddings $j:\mathrm{HOD}\to\mathrm{HOD}$ is, in a sense, still open: their consistency has currently only been established from the assumption in ${\sf ZF}$ that there are very strong versions of Reinhardt cardinals, i.e., strong versions of embeddings $j:V\to V$, the consistency of which is in itself problematic.

(On the other hand, Hugh has shown that there are no non-trivial elementary embeddings $j:V\to\mathrm{HOD}$, and this can be established by an easy variant of Hugh’s proof of Kunen’s theorem as presented, for example, in Kanamori’s book (second proof of Theorem 23.12).)

Next lecture.
