Set theory seminar – Marion Scheepers: Coding strategies (IV)

October 12, 2010

For the third talk (and a link to the second one), see here. The fourth talk took place on October 12.

We want to show the following version of Theorem 2:

Theorem. Suppose \kappa is a singular strong limit cardinal of uncountable cofinality. Then the following are equivalent:

  1. For each ideal J on \kappa, player II has a winning coding strategy in RG(J).
  2. 2^\kappa<\kappa^{+\omega}.

Since 2^\kappa has uncountable cofinality, option 2 above is equivalent to saying that the instance of {\sf wSCH} corresponding to \kappa holds.

Before we begin the proof, we need to single out some elementary cardinal arithmetic consequences of the assumptions on \kappa. First of all, since \kappa is a singular strong limit cardinal, for any cardinal \lambda<\kappa we have that

\kappa^\lambda=\left\{\begin{array}{cl}\kappa&\mbox{\ if }\lambda<{\rm cf}(\kappa),\\ 2^\kappa&\mbox{\ otherwise.}\end{array}\right.

Also, since the cofinality of \kappa is uncountable, we have Hausdorff’s result that if n<\omega, then (\kappa^{+n})^{\aleph_0}=\kappa^{+n}. I have addressed both these computations in my lecture notes for Topics in Set Theory, see here and here.
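
For instance, here is a quick sketch of the second computation, using Hausdorff’s formula (\mu^+)^\lambda=\mu^+\cdot\mu^\lambda: arguing by induction on n, and starting from \kappa^{\aleph_0}=\kappa (which holds by the display above, since \aleph_0<{\rm cf}(\kappa)), we have

(\kappa^{+(n+1)})^{\aleph_0}=\kappa^{+(n+1)}\cdot(\kappa^{+n})^{\aleph_0}=\kappa^{+(n+1)}\cdot\kappa^{+n}=\kappa^{+(n+1)}.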

We are ready to address the Theorem.

Proof. (2.\Rightarrow 1.) We use Theorem 1. If option 1. fails, then there is an ideal J on \kappa with {\rm cf}(\left< J\right>,{\subset})>|J|.

Note that {\rm cf}(\left< J\right>,{\subset})\le({\rm cf}(J,{\subset}))^{\aleph_0} (any element of \left< J\right> is covered by the union of countably many members of a fixed cofinal subfamily of J), and \kappa\le|J|. Moreover, if \lambda<\kappa, then 2^\lambda<{\rm cf}(J,{\subset}), since otherwise

{\rm cf}(\left< J\right>,{\subset})\le({\rm cf}(J,{\subset}))^{\aleph_0}\le 2^{\lambda\cdot\aleph_0}=2^\lambda<\kappa\le|J|,

contradicting our choice of J.

Since \kappa is strong limit, it follows that {\rm cf}(J,{\subset})\ge\kappa. In fact {\rm cf}(J,{\subset})\ge\kappa^{+\omega}: if we had {\rm cf}(J,{\subset})=\kappa^{+n} for some n<\omega, then by Hausdorff {\rm cf}(\left< J\right>,{\subset})\le(\kappa^{+n})^{\aleph_0}=\kappa^{+n}\le|J|, again a contradiction. Since \kappa^{+\omega}\le{\rm cf}(J,{\subset})\le|J|\le 2^\kappa, option 2. fails.

(1.\Rightarrow 2.) Suppose option 2. fails and let \lambda=\kappa^{+\omega}, so \kappa<\lambda\le 2^\kappa and {\rm cf}(\lambda)=\omega; since 2^\kappa has uncountable cofinality, in fact \lambda<2^\kappa. We use \lambda to build an ideal J on \kappa with {\rm cf}(\left< J\right>,{\subset})>|J|, which by Theorem 1 means that option 1. fails.

For this, we use that there is a large almost disjoint family of functions from {\rm cf}(\kappa) into \kappa. Specifically:

Lemma. If \kappa is singular strong limit, there is a family {\mathcal F}\subseteq{}^{{\rm cf}(\kappa)}\kappa with {}|{\mathcal F}|=2^\kappa and such that for all distinct f,g\in{\mathcal F}, we have that {}|\{\alpha<{\rm cf}(\kappa)\mid f(\alpha)=g(\alpha)\}|<{\rm cf}(\kappa).

In my notes, I have a proof of a general version of this result, due to Galvin and Hajnal; see Lemma 12 here. Essentially, we list all functions f:{\rm cf}(\kappa)\to\kappa, and then replace them with (appropriate codes for) the branches they determine through the tree {}^{<{\rm cf}(\kappa)}\kappa. Distinct branches eventually diverge, and this translates into the corresponding functions being almost disjoint.
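
Here is a quick sketch of the construction in the case at hand (the notation e_\alpha, F_f is mine, not from the talk): since \kappa is strong limit, for each \alpha<{\rm cf}(\kappa) we have {}|{}^{\alpha}\kappa|=\kappa^{|\alpha|}\le\kappa, so we may fix injections e_\alpha:{}^{\alpha}\kappa\to\kappa. For f\in{}^{{\rm cf}(\kappa)}\kappa, define F_f\in{}^{{\rm cf}(\kappa)}\kappa by

F_f(\alpha)=e_\alpha(f\upharpoonright\alpha).

If f\ne g and \alpha_0 is least with f(\alpha_0)\ne g(\alpha_0), then f\upharpoonright\alpha\ne g\upharpoonright\alpha for all \alpha>\alpha_0, so F_f and F_g agree only on a subset of \alpha_0+1, a set of size less than {\rm cf}(\kappa). Since f\mapsto F_f is injective, {\mathcal F}=\{F_f\mid f\in{}^{{\rm cf}(\kappa)}\kappa\} has size \kappa^{{\rm cf}(\kappa)}=2^\kappa, as required.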

Pick a family {\mathcal F} as in the lemma, and let {\mathcal G} be a subfamily of size \lambda. Let S=\bigcup{\mathcal G}\subseteq{\rm cf}(\kappa)\times\kappa. We proceed to show that |S|=\kappa and use {\mathcal G} to define an ideal J on S as required.

First, obviously |S|\le\kappa, since S\subseteq{\rm cf}(\kappa)\times\kappa. Also, {}|S|\ge\kappa: otherwise, since \kappa is strong limit, we would have {}|{\mathcal P}(S)|=2^{|S|}<\kappa, contradicting the fact that {\mathcal G}\subseteq{\mathcal P}(S) has size \lambda>\kappa.

Now define

J=\{X\subseteq S\mid\exists {\mathcal H}\subseteq{\mathcal G}\,(|{\mathcal H}|<\omega,\bigcup{\mathcal H}\supseteq X)\}.

Clearly, J is an ideal. We claim that |J|=\lambda. First, each f\in{\mathcal G} is (as a subset of S) an element of J, so {}|J|\ge\lambda. Define \Phi:[{\mathcal G}]^{<\aleph_0}\to J by \Phi({\mathcal H})=\bigcup{\mathcal H}. Since the functions in {\mathcal G} are almost disjoint and each has size {\rm cf}(\kappa), which is regular, a function f\in{\mathcal G} satisfies f\subseteq\bigcup{\mathcal H} exactly when f\in{\mathcal H}, and it follows that \Phi is 1-1. Let G be the image of \Phi. By construction, G is cofinal in J. But then

{}|J|\le|[{\mathcal G}]^{<\aleph_0}|\cdot 2^{{\rm cf}(\kappa)}=\lambda\cdot 2^{{\rm cf}(\kappa)}=\lambda,

where the first inequality follows from noticing that any X\in J is a subset of \bigcup{\mathcal H} for some finite {\mathcal H}\subseteq{\mathcal G}, and any such union has size at most {\rm cf}(\kappa); the last equality holds because 2^{{\rm cf}(\kappa)}<\kappa<\lambda. It follows that |J|=\lambda, as claimed.

Finally, we argue that {\rm cf}(\left< J\right>,{\subset})>\lambda, which completes the proof. For this, consider a cofinal {\mathcal A}\subseteq\left< J\right>, and a map h:{\mathcal A}\to[{\mathcal G}]^{\le\aleph_0} such that for all A\in{\mathcal A}, we have A\subseteq\bigcup h(A); such a map exists because any element of \left< J\right> is a countable union of members of J, and any member of J is covered by finitely many elements of {\mathcal G}.

Since {\mathcal A} is cofinal in \left< J\right>, it follows that h[{\mathcal A}] is cofinal in {}[{\mathcal G}]^{\le\aleph_0}: given a countable {\mathcal H}\subseteq{\mathcal G}, pick A\in{\mathcal A} with \bigcup{\mathcal H}\subseteq A\subseteq\bigcup h(A); each g\in{\mathcal H} has size {\rm cf}(\kappa), which is regular and uncountable, so g cannot be covered by the countably many almost disjoint functions in h(A) unless g\in h(A), and therefore {\mathcal H}\subseteq h(A). But this gives the result, because

{}|{\mathcal A}|\ge{\rm cf}([{\mathcal G}]^{\le \aleph_0},{\subset})={\rm cf}([\lambda]^{\le \aleph_0},{\subset})>\lambda,

and we are done. \Box
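
For completeness, here is the standard diagonal argument behind the last inequality, which is where {\rm cf}(\lambda)=\omega gets used: suppose {\mathcal B}\subseteq[\lambda]^{\le\aleph_0} has size at most \lambda. Fix an increasing sequence of infinite cardinals \lambda_n with supremum \lambda, and write {\mathcal B}=\bigcup_n{\mathcal B}_n with {}|{\mathcal B}_n|\le\lambda_n for each n. Then {}|\bigcup{\mathcal B}_n|\le\lambda_n\cdot\aleph_0<\lambda, so we may pick \beta_n\in\lambda\setminus\bigcup{\mathcal B}_n. Now \{\beta_n\mid n<\omega\}\in[\lambda]^{\le\aleph_0} is not contained in any member of {\mathcal B}, so {\mathcal B} is not cofinal, and therefore {\rm cf}([\lambda]^{\le\aleph_0},{\subset})>\lambda.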


Set theory seminar – Marion Scheepers: Coding strategies (III)

September 28, 2010

For the second talk (and a link to the first one), see here. The third talk took place on September 28.

In the second case, we fix an X\in J with {}|J(X)|<|J|. We can clearly assume that S is infinite, and it easily follows that {}|{\mathcal P}(X)|=|J|: any Y\in J is coded by the pair (Y\cap X,X\cup(Y\setminus X)), there are only {}|J(X)| many possible values for the second coordinate, and {\mathcal P}(X)\subseteq J.

In particular, X is infinite, and we can fix a partition X=\bigsqcup_n X_n of X into countably many pieces, each of size {}|X|. Recall that we are assuming that {\rm cf}(\left< J\right>,{\subset})\le|J| and have fixed a set H cofinal in \left< J\right> of smallest possible size. We have also fixed a perfect information winning strategy \Psi for II, and an f:\left< J\right>\to H with A\subseteq f(A) for all A.

For each n, fix a surjection f_n:{\mathcal P}(X_n)\setminus\{\emptyset,X_n\}\to{}^{<\omega}H; such surjections exist since {}|{\mathcal P}(X_n)|=|{\mathcal P}(X)|=|J|\ge|{}^{<\omega}H|.

We define F:J\times\left< J\right>\to J as follows:

  1. Given O\in \left< J\right>, let

    A=\Psi(\left<f(O)\right>)\setminus X,

    and

    B\in{\mathcal P}(X_0)\setminus\{\emptyset, X_0\} such that f_0(B)=\left< f(O)\right>,

    and set F(\emptyset,O)=A\cup B.

  2. Suppose now that (T,O)\in J\times\left< J\right>, that T\ne\emptyset , and that there is an n such that \bigcup_{j<n} X_j\subseteq T, T\cap X_n\ne\emptyset,X_n, and T\cap\bigcup_{k>n}X_k=\emptyset. Let

    B\in{\mathcal P}(X_{n+1})\setminus\{\emptyset,X_{n+1}\} be such that f_{n+1}(B)=f_n(T\cap X_n){}^\frown\left< f(O)\right>,

    and

    A=\Psi(f_{n+1}(B))\setminus X

    and set F(T,O)=A\cup\bigcup_{j\le n}X_j\cup B.

  3. Define F(T,O)=\emptyset in other cases.

A straightforward induction shows that F is winning. The point is that in a run of the game where player II follows F:

  • Player II’s moves contain, outside of X, player II’s responses (following \Psi) in a simulated run {\mathcal A} of the game in which player I plays sets covering the sets played in the original run, namely the sets f(O_i). To see that II can keep track of this simulated run, note that at any inning there is a unique index n such that player II’s latest move covers \bigcup_{j<n}X_j, is disjoint from \bigcup_{j>n}X_j, and meets X_n in a set that is neither empty nor all of X_n; this n codes the inning of the game, and the piece of player II’s move inside X_n codes the history of the run {\mathcal A} played so far.
  • X is eventually covered completely, so in particular the parts inside X of player II’s responses in the run {\mathcal A} are covered as well.
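
Explicitly (with B_k and T_k being my notation for player II’s moves and their pieces, not notation from the talk): in a run where player I plays O_0,O_1,\dots and player II follows F, an induction on k shows that II’s move in inning k has the form

T_k=\left(\Psi(\left< f(O_0),\dots,f(O_k)\right>)\setminus X\right)\cup\bigcup_{j<k}X_j\cup B_k,

where B_k\subseteq X_k is neither empty nor all of X_k and f_k(B_k)=\left< f(O_0),\dots,f(O_k)\right>. In particular, \bigcup_k T_k covers X and covers each \Psi(\left< f(O_0),\dots,f(O_k)\right>)\setminus X; since \Psi is winning and f(O_k)\supseteq O_k for all k, it follows that \bigcup_k T_k\supseteq\bigcup_k O_k.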

This completes the proof of Theorem 1. \Box

By way of illustration, consider the case where J is the ideal of finite subsets of some infinite set S, so that \left< J\right>={\mathcal P}_{\aleph_1}(S) and |J|=|S|. Then whether II has a winning coding strategy in RG(J) turns into the question of when it is that {\rm cf}({\mathcal P}_{\aleph_1}(S),{\subset})\le|S|. This certainly holds if {}|S|={\mathfrak c} or if {}|S|<\aleph_\omega. However, it fails if {}|S|=\aleph_\omega.
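
A word on these claims (standard computations, not part of the talk): if {}|S|={\mathfrak c}, then {\rm cf}({\mathcal P}_{\aleph_1}(S),{\subset})\le|[S]^{\le\aleph_0}|={\mathfrak c}^{\aleph_0}={\mathfrak c}=|S|. If {}|S|=\aleph_n with 1\le n<\omega, one argues by induction on n that {\rm cf}([\aleph_n]^{\le\aleph_0},{\subset})\le\aleph_n: since {\rm cf}(\aleph_n)>\omega, every countable subset of \omega_n is bounded, so

{}[\aleph_n]^{\le\aleph_0}=\bigcup_{\alpha<\omega_n}[\alpha]^{\le\aleph_0},

and each {}[\alpha]^{\le\aleph_0} has a cofinal subfamily of size at most \aleph_{n-1} (by the induction hypothesis, or trivially if \alpha is countable); the union of these subfamilies is cofinal and has size \aleph_n. Finally, the same diagonal argument that shows {\rm cf}([\lambda]^{\le\aleph_0},{\subset})>\lambda whenever {\rm cf}(\lambda)=\omega shows that the inequality fails for {}|S|=\aleph_\omega.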

This example illustrates how player II really obtains an additional advantage when playing in WMG(J) rather than just in RG(J). To see that this is the case, consider the same J as above with {}|S|=\aleph_\omega; this is an instance of the countable-finite game. We claim that II has a winning coding strategy in WMG(J) in this case. To see this, consider a partition of S into countably many sets S_n with {}|S_n|=\aleph_n. For each n, pick a winning coding strategy \sigma_n for the countable-finite game on S_n, and define a strategy for II in WMG(J) that, for each n, simulates a run of WMG(J\cap{\mathcal P}(S_n)) in which II follows \sigma_n, as follows: In inning n, II plays on the “boards” S_i for i\le n; player I’s moves on the i-th board are the intersections with S_i of I’s moves in WMG(J), with the run on the i-th board starting at inning i. (II can keep track of n in several ways; for example, the proof of Theorem 1 produces coding strategies that never play the empty set, so II can recover the current inning from its own most recent move.)

Note that this strategy is not winning in RG(J), the difference being that there is no guarantee that (for any i) the intersections with S_i of I’s first i moves, which are never passed to \sigma_i, are covered by II’s responses. On the other hand, the strategy is winning in WMG(J), since, no matter how late one starts to play on the i-th board, player I’s first move there covers I’s prior moves on that board (and so II, following a winning coding strategy in the game that starts with this move, will also cover those prior moves).

The first place where this argument cannot be continued is when |S|=\aleph_{\omega_1}. However, {\sf GCH} suffices to see that player II has a winning coding strategy in RG(J) in this case, and so we can continue. This illustrates the corollary stated in the first talk, that {\sf GCH} suffices to guarantee that II always has a winning coding strategy in WMG(J).

The natural question is therefore how much one can weaken the {\sf GCH} assumption, and trying to address it leads to Theorem 2, which will be the subject of the next (and last) talk.


Set theory seminar – Marion Scheepers: Coding strategies (II)

September 27, 2010

For the first talk, see here. The second talk took place on September 21.

We want to prove (2.\Rightarrow1.) of Theorem 1, that if {\rm cf}(\left< J\right>,\subset)\le|J|, then II has a winning coding strategy in RG(J).

The argument makes essential use of the following:

Coding Lemma. Let ({\mathbb P},<) be a poset such that for all p\in{\mathbb P},

{}|\{q\in{\mathbb P}\mid q>p\}|=|{\mathbb P}|.

Suppose that {}|H|\le|{\mathbb P}|. Then there is a map \Phi:{\mathbb P}\to{}^{<\omega}H such that

\forall p\in{\mathbb P}\,\forall\sigma\in{}^{<\omega}H\,\exists q\in{\mathbb P}\,(q>p\mbox{ and }\Phi(q)=\sigma).

Proof. Note that {\mathbb P} is infinite. We may then identify it with some infinite cardinal \kappa. It suffices to show that for any partial ordering \prec on \kappa as in the hypothesis, there is a map \Phi:\kappa\to\kappa such that for any \alpha,\beta, there is a \gamma with \alpha\prec\gamma such that \Phi(\gamma)=\beta.

Well-order \kappa\times\kappa in type \kappa, and call R this ordering. We define \Phi by transfinite recursion through R. Given (\alpha,\beta), let A be the set of its R-predecessors,

A=\{(\mu,\rho)\mid(\mu,\rho) R(\alpha,\beta)\}.

Our inductive assumption is that for any pair (\mu,\rho)\in A, we have chosen some \tau with \mu\prec\tau, and defined \Phi(\tau)=\rho. Let us denote by D_A the domain of the partial function we have defined so far. Note that {}|D_A|<\kappa. Since \{\gamma\mid\alpha\prec\gamma\} has size \kappa, it must meet \kappa\setminus D_A. Take \nu to be least in this intersection, and set \Phi(\nu)=\beta, thus completing stage (\alpha,\beta) of the recursion.

At the end, the resulting map can be extended to a map \Phi with domain \kappa in an arbitrary way, and this function clearly is as required. \Box
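
For a concrete (if simple-minded) illustration of what the lemma provides, not taken from the talk: let {\mathbb P}=(\omega,<) with its usual order, so that every element has infinitely many elements above it, and let H be countable. Fix a bijection \pi:\omega\to\omega\times{}^{<\omega}H and let \Phi(n) be the second coordinate of \pi(n). Then for each \sigma\in{}^{<\omega}H the set \{n\mid\Phi(n)=\sigma\} is infinite, so it meets \{q\mid q>p\} for every p.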

Back to the proof of (2.\Rightarrow1.). Fix a perfect information winning strategy \Psi for II in RG(J), and a set H cofinal in \left< J\right> of least possible size. Pick an f:\left< J\right>\to H such that for all A\in\left< J\right> we have A\subseteq f(A).

Given X\in J, let J(X)=\{Y\in J\mid X\subseteq Y\}. Now we consider two cases, depending on whether for some X we have {}|J(X)|<|J| or not.

Suppose first that |J(X)|=|J| for all X\in J. Then the Coding Lemma applies with (J,{\subset}) in the role of {\mathbb P} (for any X\in J, the set \{Y\in J\mid X\subsetneq Y\}=J(X)\setminus\{X\} has size |J|), and with H as chosen, since {}|H|={\rm cf}(\left< J\right>,{\subset})\le|J|. Let \Phi be as in the lemma.

We define F:J\times\left< J\right>\to J as follows:

  1. Given O\in\left< J\right>, let Y\supseteq\Psi(\left< f(O)\right>) be such that \Phi(Y)=\left< f(O)\right>, and set F(\emptyset,O)=Y.
  2. Given (T,O)\in J\times\left< J\right> with T\ne\emptyset, let Y\supseteq \Psi(\Phi(T){}^\frown\left<f(O)\right>) be such that \Phi(Y)=\Phi(T){}^\frown\left< f(O)\right>, and set F(T,O)=Y.

Clearly, F is winning: In any run of the game with II following F, player II’s moves cover their responses following \Psi, and we are done since \Psi is winning.
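
In slightly more detail (with Y_k being my notation for II’s move in inning k of a run where I plays O_0,O_1,\dots and II follows F): an easy induction on k gives

Y_k\supseteq\Psi(\left< f(O_0),\dots,f(O_k)\right>)\mbox{ and }\Phi(Y_k)=\left< f(O_0),\dots,f(O_k)\right>,

so the sets Y_k contain II’s responses, following \Psi, in the run where I plays f(O_0),f(O_1),\dots. Since \Psi is winning and f(O_k)\supseteq O_k for all k, we get \bigcup_k Y_k\supseteq\bigcup_k O_k.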

The second case, when there is some X\in J with |J(X)|<|J|, will be dealt with in the next talk.


Set theory seminar – Marion Scheepers: Coding strategies (I)

September 25, 2010

This semester, the seminar started with a series of talks by Marion. The first talk happened on September 14.

We consider two games relative to a (proper) ideal J\subset{\mathcal P}(S) for some set S. The ideal J is not assumed to be \sigma-complete; we denote by \left< J\right> its \sigma-closure, i.e., the collection of countable unions of elements of J. Note that \left< J\right> is a \sigma-ideal iff \left< J\right> is an ideal iff S\notin\left< J\right>.

The two games we concentrate on are the Random Game on J, RG(J), and the Weakly Monotonic game on J, WMG(J).

In both games, players I and II alternate for \omega many innings, with I moving first, as follows:

\begin{array}{cccccc} I&O_0\in\left< J\right>&&O_1\in\left< J\right>&&\cdots\\ II&&T_0\in J&&T_1\in J&\cdots \end{array}

In RG(J) we do not require that the O_i relate to one another in any particular manner (thus “random”), while in WMG(J) we require that O_0\subseteq O_1\subseteq\cdots (thus “weakly”, since we allow equality to occur).

In both games, player II wins iff \bigcup_n T_n\supseteq\bigcup_n O_n. Obviously, II has a (perfect information) winning strategy, in fact one achieving = rather than the weaker \supseteq.
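
For instance (a quick sketch of one such strategy; the decompositions O_{i,k} are my notation): with full information available, II may fix for each set O_i played by I a decomposition O_i=\bigcup_k O_{i,k} with each O_{i,k}\in J, and respond in inning n with

T_n=\bigcup_{i\le n}\bigcup_{k\le n}O_{i,k}\in J.

Then \bigcup_n T_n=\bigcup_n O_n.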

However, we are interested in an apparently very restrictive kind of strategy, and so we will give player II some leeway by allowing its moves to over-spill if needed. The strategies for II that we want to consider are called coding strategies. Following such a strategy, II only has access to player I’s latest move and to its own most recent move. So, if F is a coding strategy, and II follows it in a run of the game, then we have that for every n,

T_n=F(T_{n-1},O_n),

with T_{-1}=\emptyset.

The underlying goal is to understand under which circumstances player II has a winning coding strategy in WMG(J). Obviously, this is the case if II has a winning coding strategy in RG(J).

Theorem 1. For an ideal J\subset{\mathcal P}(S), the following are equivalent:

  1. II has a winning coding strategy in RG(J).
  2. {\rm cf}(\left< J\right>,{\subset})\le|J|.

Corollary. {\sf GCH} implies that for any ideal J\subset{\mathcal P}(S), II has a winning coding strategy in WMG(J).

We can reformulate our goal as asking how much one can weaken {\sf GCH} in the corollary.

Let’s denote by {\sf wSCH}, the weak singular cardinals hypothesis, the statement that if \kappa is a singular strong limit cardinal of uncountable cofinality, then there is no cardinal \lambda of countable cofinality with \kappa<\lambda<2^\kappa.

By work of Gitik and Mitchell, we know that the negation of {\sf wSCH} is equiconsistent with the existence of a \kappa of Mitchell order o(\kappa)=\kappa^{+\omega}+\omega_1.

Theorem 2. The following are equivalent:

  1. {\sf wSCH}.
  2. For each ideal J on a singular strong limit cardinal \kappa of uncountable cofinality, II has a winning coding strategy in RG(J).

We now begin the proof of Theorem 1.

(1.\Rightarrow2.) Suppose II has a winning coding strategy F in RG(J). We want to show that {\rm cf}(\left< J\right>,{\subset})\le|J|. For this, we will define a map f:J\to\left< J\right> with \subset-cofinal range, as follows: Given X\in J, let T_0=X and T_{n+1}=F(T_n,\emptyset) for all n. Now set

f(X)=\bigcup_n T_n.

To see that the range of f is cofinal, given O\in\left< J\right>, let X=F(\emptyset,O), so that the T_n are II’s responses using F in a run of the game where player I first plays O and then plays \emptyset in all subsequent innings. Since F is winning, we must have f(X)\supseteq O.
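
Displayed as in the diagram above, the run in question is

\begin{array}{cccccc} I&O&&\emptyset&&\cdots\\ II&&T_0=F(\emptyset,O)&&T_1=F(T_0,\emptyset)&\cdots \end{array}

and since F is winning, O\subseteq\bigcup_n T_n=f(X).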

Next talk.