Joyal’s Proof of Cayley’s Tree Formula

I wanted to quickly write this proof up, complete with pictures, so that I won’t forget it again. In this post I’ll give a combinatorial proof (due to Joyal) of the following:

Theorem 1 (Cayley’s Formula)

The number of trees on {n} labelled vertices is {n^{n-2}}.

Proof: We are going to construct a bijection between

  • Functions {\{1, 2, \dots, n\} \rightarrow \{1, 2, \dots, n\}} (of which there are {n^n}) and
  • Trees on {\{1, 2, \dots, n\}} with two distinguished nodes {A} and {B} (possibly {A=B}).

This will imply the answer.

Let’s look at the first piece of data. We can visualize it as {n} points floating around, each with an arrow going out of it pointing to some point (possibly itself), but possibly with many other arrows coming into it. Such a structure is apparently called a directed pseudoforest. Here is an example when {n = 9}.

cayley-pseudoforest

You’ll notice that in each component, some of the points lie in a cycle and others do not. I’ve colored the former type of points blue, and the corresponding arrows magenta.

Thus a directed pseudoforest can also be specified by

  • a choice of some vertices to be in cycles (blue vertices),
  • a permutation on the blue vertices (magenta arrows), and
  • attachments of trees to the blue vertices (grey vertices and arrows).

Now suppose we take the same information, but replace the permutation on the blue vertices with a total ordering instead (of course there are an equal number of these). Then we can string the blue vertices together as shown below, where the green arrows denote the selected total ordering (in this case {1 < 9 < 2 < 4 < 8 < 5}):

cayley-tree

This is exactly the data of a tree on the {n} vertices with two distinguished vertices, the first and last in the chain of green (which could possibly coincide). \Box
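The count itself is easy to sanity-check by brute force. Here is a quick Python sketch: the tree counter simply tests every edge subset of size {n-1} for connectivity, so it is only feasible for tiny {n}.

```python
from itertools import combinations

def count_labeled_trees(n):
    """Count trees on {0, ..., n-1} by brute force: a tree on n vertices
    is a connected graph with exactly n - 1 edges."""
    all_edges = list(combinations(range(n), 2))
    count = 0
    for edges in combinations(all_edges, n - 1):
        parent = list(range(n))  # union-find to test connectivity

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        components = n
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        if components == 1:
            count += 1
    return count

for n in range(2, 6):
    assert count_labeled_trees(n) == n ** (n - 2)  # 1, 3, 16, 125
```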


Positive Definite Quadratic Forms

I’m reading through Primes of the Form {x^2+ny^2}, by David Cox (link; it’s good!). Here are the high-level notes I took on the first chapter, which is about the theory of quadratic forms.

(Meta point re blog: I’m probably going to start posting more and more of these more high-level notes/sketches on this blog on topics that I’ve been just learning. Up til now I’ve been mostly only posting things that I understand well and for which I have a very polished exposition. But the perfect is the enemy of the good here; given that I’m taking these notes for my own sake, I may as well share them to help others.)

1. Overview

Definition 1

For us a quadratic form is a polynomial {Q = Q(x,y) = ax^2 + bxy + cy^2}, where {a}, {b}, {c} are some integers. We say that it is primitive if {\gcd(a,b,c) = 1}.

For example, we have the famous quadratic form

\displaystyle  Q_{\text{Fermat}}(x,y) = x^2+y^2.

As readers are probably aware, we can say a lot about exactly which integers can be represented by {Q_{\text{Fermat}}}: by Fermat’s Christmas theorem, the primes {p \equiv 1 \pmod 4} (and {p=2}) can all be written as the sum of two squares, while the primes {p \equiv 3 \pmod 4} cannot. For convenience, let us say that:

Definition 2

Let {Q} be a quadratic form. We say it represents the integer {m} if there exists {x,y \in \mathbb Z} with {m = Q(x,y)}. Moreover, {Q} properly represents {m} if one can find such {x} and {y} which are also relatively prime.

The basic question is: what can we say about which primes/integers are properly represented by a quadratic form? In fact, we will soon restrict our attention to “positive definite” forms (described below).

For example, Fermat’s Christmas theorem now rewrites as:

Theorem 3 (Fermat’s Christmas theorem for primes)

An odd prime {p} is (properly) represented by {Q_{\text{Fermat}}} if and only if {p \equiv 1 \pmod 4}.

The proof of this is classical; see for example my olympiad handout. We also have the formulation for odd integers:

Theorem 4 (Fermat’s Christmas theorem for odd integers)

An odd integer {m} is properly represented by {Q_{\text{Fermat}}} if and only if all prime factors of {m} are {1 \pmod 4}.

Proof: For the “if” direction, we use the fact that {Q_{\text{Fermat}}} is multiplicative in the sense that

\displaystyle  (x^2+y^2)(u^2+v^2) = (xu \pm yv)^2 + (xv \mp yu)^2.

For the “only if” part we use the fact that if a multiple of a prime {p} is properly represented by {Q_{\text{Fermat}}}, then so is {p}. This follows by noticing that if {x^2+y^2 \equiv 0 \pmod p} (and {xy \not\equiv 0 \pmod p}) then {(x/y)^2 \equiv -1 \pmod p}. \Box
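Theorem 4 is easy to verify numerically; here is a rough Python sketch that checks it for odd {m < 2000} (the search bounds are crude but correct):

```python
from math import gcd, isqrt

def properly_represented(m):
    """Does x^2 + y^2 = m have a solution with gcd(x, y) = 1?"""
    return any(x * x + y * y == m and gcd(x, y) == 1
               for x in range(isqrt(m) + 1)
               for y in range(isqrt(m) + 1))

def prime_factors(m):
    out, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            out.add(d)
            m //= d
        d += 1
    if m > 1:
        out.add(m)
    return out

for m in range(1, 2000, 2):  # odd m only
    expected = all(p % 4 == 1 for p in prime_factors(m))
    assert properly_represented(m) == expected
```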
Tangential remark: the two ideas in the proof will grow up in the following way.

  • The fact that {Q_{\text{Fermat}}} “multiplies nicely” will grow up to become the so-called composition of quadratic forms.
  • The second fact will not generalize for an arbitrary form {Q}. Instead, we will see that if a multiple of {p} is represented by a form {Q} then some form of the same “discriminant” will represent the prime {p}, but this form need not be the same as {Q} itself.

2. Equivalence of forms, and the discriminant

The first thing we should do is figure out when two forms are essentially the same: for example, {x^2+5y^2} and {5x^2+y^2} should clearly be considered the same. More generally, if we think of {Q} as acting on {\mathbb Z^{\oplus 2}} and {T} is any automorphism of {\mathbb Z^{\oplus 2}}, then {Q \circ T} should be considered the same as {Q}. Specifically,

Definition 5

Two forms {Q_1} and {Q_2} are said to be equivalent if there exists

\displaystyle  T = \begin{pmatrix} p & q \\ r & s \end{pmatrix} \in \text{GL}(2,\mathbb Z)

such that {Q_2(x,y) = Q_1(px+ry, qx+sy)}. We have {\det T = ps-qr = \pm 1} and so we say the equivalence is

  • a proper equivalence if {\det T = +1}, and
  • an improper equivalence if {\det T = -1}.

So we generally will only care about forms up to proper equivalence. (It will be useful to distinguish between proper/improper equivalence later.)

Naturally we seek some invariants under this operation. By far the most important is:

Definition 6

The discriminant of a quadratic form {Q = ax^2 + bxy + cy^2} is defined as

\displaystyle  D = b^2-4ac.

The discriminant is invariant under equivalence (check this). Note also that {D \equiv 0, 1 \pmod 4}, since {D \equiv b^2 \pmod 4}.

Observe that we have

\displaystyle  4a \cdot (ax^2+bxy+cy^2) = (2ax + by)^2 - Dy^2.

So if {D < 0} and {a > 0} (thus {c > 0} too) then {ax^2+bxy+cy^2 > 0} for all {(x,y) \neq (0,0)}. Such quadratic forms are called positive definite, and we will restrict our attention to these forms.

Now that we have this invariant, we may as well classify equivalence classes of quadratic forms for a fixed discriminant. It turns out this can be done explicitly.

Definition 7

A quadratic form {Q = ax^2 + bxy + cy^2} is reduced if

  • it is primitive and positive definite,
  • {|b| \le a \le c}, and
  • {b \ge 0} if either {|b| = a} or {a = c}.

Exercise 8

Check that there are only finitely many reduced forms of a fixed discriminant.

Then the big huge theorem is:

Theorem 9 (Reduced forms give a set of representatives)

Every primitive positive definite form {Q} of discriminant {D} is properly equivalent to a unique reduced form. We call this the reduction of {Q}.

Proof: Omitted due to length, but completely elementary. It is a reduction argument with some number of cases. \Box

Thus, for any discriminant {D} we can consider the set

\displaystyle  \text{Cl}(D) = \left\{ \text{reduced forms of discriminant } D \right\}

which will be the equivalence classes of positive definite forms of discriminant {D}. By abuse of notation we will also consider it as the set of equivalence classes of primitive positive definite forms of discriminant {D}.

We also define {h(D) = \left\lvert \text{Cl}(D) \right\rvert}; by the exercise, {h(D) < \infty}. This is called the class number.

Moreover, we have {h(D) \ge 1}, because we can take {x^2 - D/4 y^2} for {D \equiv 0 \pmod 4} and {x^2 + xy + (1-D)/4 y^2} for {D \equiv 1 \pmod 4}. We call this form the principal form.
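The reduction conditions force {a \le \sqrt{|D|/3}} (from {|b| \le a \le c} and {4ac = b^2 - D}), so all of this is easy to compute with. Here is a short Python sketch that enumerates reduced forms and hence computes {h(D)}; it reproduces the tables in the next section.

```python
from math import gcd, isqrt

def reduced_forms(D):
    """All reduced primitive positive definite forms (a, b, c) of
    discriminant D = b^2 - 4ac < 0."""
    forms = []
    for a in range(1, isqrt(-D // 3) + 1):  # |b| <= a <= c forces 3a^2 <= |D|
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if a > c:
                continue
            if b < 0 and (-b == a or a == c):
                continue  # the boundary cases require b >= 0
            if gcd(gcd(a, b), c) == 1:
                forms.append((a, b, c))
    return forms

def h(D):
    return len(reduced_forms(D))

print(reduced_forms(-20))  # [(1, 0, 5), (2, 2, 3)], so h(-20) = 2
print(h(-163))             # 1
```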

3. Tables of quadratic forms

Example 10 (Examples of quadratic forms with {h(D) = 1}, {D \equiv 0 \pmod 4})

The following discriminants have class number {h(D) = 1}, hence having only the principal form:

  • {D = -4}, with form {x^2 + y^2}.
  • {D = -8}, with form {x^2 + 2y^2}.
  • {D = -12}, with form {x^2+3y^2}.
  • {D = -16}, with form {x^2 + 4y^2}.
  • {D = -28}, with form {x^2 + 7y^2}.

This is in fact the complete list when {D \equiv 0 \pmod 4}.

Example 11 (Examples of quadratic forms with {h(D) = 1}, {D \equiv 1 \pmod 4})

The following discriminants have class number {h(D) = 1}, hence having only the principal form:

  • {D = -3}, with form {x^2 + xy + y^2}.
  • {D = -7}, with form {x^2 + xy + 2y^2}.
  • {D = -11}, with form {x^2 + xy + 3y^2}.
  • {D = -19}, with form {x^2 + xy + 5y^2}.
  • {D = -27}, with form {x^2 + xy + 7y^2}.
  • {D = -43}, with form {x^2 + xy + 11y^2}.
  • {D = -67}, with form {x^2 + xy + 17y^2}.
  • {D = -163}, with form {x^2 + xy + 41y^2}.

This is in fact the complete list when {D \equiv 1 \pmod 4}.

Example 12 (More examples of quadratic forms)

Here are tables for small discriminants with {h(D) > 1}. When {D \equiv 0 \pmod 4} we have

  • {D = -20}, with {h(D) = 2} forms {2x^2 + 2xy + 3y^2} and {x^2 + 5y^2}.
  • {D = -24}, with {h(D) = 2} forms {2x^2 + 3y^2} and {x^2 + 6y^2}.
  • {D = -32}, with {h(D) = 2} forms {3x^2 + 2xy + 3y^2} and {x^2 + 8y^2}.
  • {D = -36}, with {h(D) = 2} forms {2x^2 + 2xy + 5y^2} and {x^2 + 9y^2}.
  • {D = -40}, with {h(D) = 2} forms {2x^2 + 5y^2} and {x^2 + 10y^2}.
  • {D = -44}, with {h(D) = 3} forms {3x^2 \pm 2xy + 4y^2} and {x^2 + 11y^2}.

As for {D \equiv 1 \pmod 4} we have

  • {D = -15}, with {h(D) = 2} forms {2x^2 + xy + 2y^2} and {x^2 + xy + 4y^2}.
  • {D = -23}, with {h(D) = 3} forms {2x^2 \pm xy + 3y^2} and {x^2+ xy + 6y^2}.
  • {D = -31}, with {h(D) = 3} forms {2x^2 \pm xy + 4y^2} and {x^2 + xy + 8y^2}.
  • {D = -39}, with {h(D) = 4} forms {3x^2 + 3xy + 4y^2}, {2x^2 \pm xy + 5y^2} and {x^2 + xy + 10y^2}.

Example 13 (Even More Examples of quadratic forms)

Here are some more selected examples:

  • {D = -56} has {h(D) = 4} forms {x^2+14y^2}, {2x^2+7y^2} and {3x^2 \pm 2xy + 5y^2}.
  • {D = -108} has {h(D) = 3} forms {x^2+27y^2} and {4x^2 \pm 2xy + 7y^2}.
  • {D = -256} has {h(D) = 4} forms {x^2+64y^2}, {4x^2+4xy+17y^2} and {5x^2\pm2xy+13y^2}.

4. The Character {\chi_D}

We can now connect this to primes {p} as follows. Earlier we played with {Q_{\text{Fermat}} = x^2+y^2}, and observed that for odd primes {p}, {p \equiv 1 \pmod 4} if and only if some multiple of {p} is properly represented by {Q_{\text{Fermat}}}.

Our generalization is as follows:

Theorem 14 (Primes represented by some quadratic form)

Let {D < 0} be a discriminant, and let {p \nmid D} be an odd prime. Then the following are equivalent:

  • {\left( \frac Dp \right) = 1}, i.e. {D} is a quadratic residue modulo {p}.
  • The prime {p} is (properly) represented by some reduced quadratic form in {\text{Cl}(D)}.

This generalizes our result for {Q_{\text{Fermat}}}, but note that it uses {h(-4) = 1} in an essential way! That is: if {(-1/p) = 1}, we know {p} is represented by some quadratic form of discriminant {D = -4}\dots but only since {h(-4) = 1} do we know that this form reduces to {Q_{\text{Fermat}} = x^2+y^2}.

Proof: First, suppose {Q(x,y) \equiv 0 \pmod p} for some coprime {x} and {y}; assume WLOG that {p \nmid 4a}. Then {p \nmid y}, since otherwise {p \mid ax^2} would force {x \equiv y \equiv 0 \pmod p}. Then

\displaystyle  0 \equiv 4a \cdot Q(x,y) \equiv (2ax + by)^2 - Dy^2 \pmod p

hence {D \equiv \left( 2axy^{-1} + b \right)^2 \pmod p}.

The converse direction is amusing: let {m^2 = D + pk} for integers {m}, {k}. Consider the quadratic form

\displaystyle  Q(x,y) = px^2 + mxy + ky^2.

It is primitive of discriminant {D} and {Q(1,0) = p}. Now {Q} may not be reduced, but that’s fine: just take the reduction of {Q}, which must also properly represent {p}. \Box

Thus to every discriminant {D < 0} we can attach a character {\chi_D} (essentially the Kronecker symbol), which is a homomorphism

\displaystyle  \chi_D = \left( \tfrac{D}{\bullet} \right) : \left( \mathbb Z / D\mathbb Z \right)^\times \rightarrow \{ \pm 1 \}

with the property that if {p} is a rational prime not dividing {D}, then {\chi_D(p) = \left( \frac{D}{p} \right)}. This is abuse of notation since I should technically write {\chi_D(p \pmod D)}, but there is no harm done: one can check by quadratic reciprocity that if {p \equiv q \pmod D} then {\chi_D(p) = \chi_D(q)}. Thus our previous result becomes:

Theorem 15 ({\ker(\chi_D)} consists of representable primes)

Let {p \nmid D} be prime. Then {p \in \ker(\chi_D)} if and only if some quadratic form in {\text{Cl}(D)} represents {p}.
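The well-definedness of {\chi_D} is easy to check numerically. Here is a small Python sketch (using Euler's criterion to evaluate {(D/p)}) that also computes {\ker(\chi_{-20})}, which we will use shortly:

```python
from math import isqrt

def chi(D, p):
    """chi_D(p) = (D/p) for an odd prime p not dividing D, computed by
    Euler's criterion: D^((p-1)/2) mod p is 1 or p - 1."""
    return 1 if pow(D % p, (p - 1) // 2, p) == 1 else -1

D = -20
primes = [p for p in range(3, 1000)
          if all(p % q for q in range(2, isqrt(p) + 1)) and p % 5]

# chi_D(p) should depend only on p mod |D|:
classes = {}
for p in primes:
    classes.setdefault(p % -D, set()).add(chi(D, p))
assert all(len(s) == 1 for s in classes.values())

print(sorted(r for r, s in classes.items() if s == {1}))  # [1, 3, 7, 9]
```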

As a corollary of this, using the fact that {h(-8) = h(-12) = h(-28) = 1} one can prove that

Corollary 16 (Fermat-type results for {h(-4n) = 1})

Let {p > 7} be a prime. Then {p} is

  • of the form {x^2 + 2y^2} if and only if {p \equiv 1, 3 \pmod 8}.
  • of the form {x^2 + 3y^2} if and only if {p \equiv 1 \pmod 3}.
  • of the form {x^2 + 7y^2} if and only if {p \equiv 1, 2, 4 \pmod 7}.

Proof: The congruence conditions are equivalent to {(-4n/p) = 1}, and as before the only point is that the only reduced quadratic form for these {D = -4n} is the principal one. \Box

5. Genus theory

What if {h(D) > 1}? Sometimes, we can still figure out which primes go where just by taking mods.

Let {Q \in \text{Cl}(D)}. Then it represents some residue classes of {(\mathbb Z/D\mathbb Z)^\times}; we call the set of residue classes represented the genus of the quadratic form {Q}.

Example 17 (Genus theory of {D = -20})

Consider {D = -20}, with

\displaystyle  \ker(\chi_D) = \left\{ 1, 3, 7, 9 \right\} \subseteq (\mathbb Z/D\mathbb Z)^\times.

We consider the two elements of {\text{Cl}(D)}:

  • {x^2 + 5y^2} represents {1, 9 \in (\mathbb Z/20\mathbb Z)^\times}.
  • {2x^2+2xy+3y^2} represents {3, 7 \in (\mathbb Z/20\mathbb Z)^\times}.

Now suppose for example that {p \equiv 9 \pmod{20}}. It must be represented by one of these two quadratic forms, but the latter form is never {9 \pmod{20}} and so it must be the first one. Thus we conclude that

  • {p = x^2+5y^2} if and only if {p \equiv 1, 9 \pmod{20}}.
  • {p = 2x^2 + 2xy + 3y^2} if and only if {p \equiv 3, 7 \pmod{20}}.
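Both conclusions are easy to confirm by brute force, reusing chi and primes from the earlier sketch (the representation search below is rough but adequate for {p < 1000}):

```python
def represents(a, b, c, p, bound=40):
    """Rough search for a x^2 + b x y + c y^2 = p."""
    return any(a * x * x + b * x * y + c * y * y == p
               for x in range(-bound, bound + 1)
               for y in range(-bound, bound + 1))

for p in primes:          # odd primes p < 1000 not dividing -20
    if chi(-20, p) == 1:  # p is in ker(chi_D)
        assert represents(1, 0, 5, p) == (p % 20 in (1, 9))
        assert represents(2, 2, 3, p) == (p % 20 in (3, 7))
```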

The thing that makes this work is that each genus appears exactly once. We are not always so lucky: for example, consider {D = -108}.

Example 18 (Genus theory of {D = -108})

The three elements of {\text{Cl}(-108)} are:

  • {x^2+27y^2}, which represents exactly the {1 \pmod 3} elements of {(\mathbb Z/D\mathbb Z)^\times}.
  • {4x^2 \pm 2xy + 7y^2}, which also represents exactly the {1 \pmod 3} elements of {(\mathbb Z/D\mathbb Z)^\times}.

So the best we can conclude is that {p = x^2+27y^2} OR {p = 4x^2\pm2xy+7y^2} if and only if {p \equiv 1 \pmod 3}. This is because the distinct quadratic forms of discriminant {-108} all happen to have the same genus.

We now prove that:

Theorem 19 (Genera are cosets of {\ker(\chi_D)})

Let {D} be a discriminant and consider the character {\chi_D}.

  • The genus of the principal form of discriminant {D} constitutes a subgroup {H} of {\ker(\chi_D)}, which we call the principal genus.
  • Any genus of a quadratic form in {\text{Cl}(D)} is a coset of the principal genus {H} in {\ker(\chi_D)}.

Proof: For the first part, we aim to show {H} is multiplicatively closed. For {D \equiv 0 \pmod 4}, {D = -4n} we use the fact that

\displaystyle  (x^2+ny^2)(u^2+nv^2) = (xu \pm nyv)^2 + n(xv \mp yu)^2.

For {D \equiv 1 \pmod 4}, we instead appeal to another “magic” identity

\displaystyle  4\left( x^2+xy+\frac{1-D}{4}y^2 \right) \equiv (2x+y)^2 \pmod D

and it follows from here that {H} is actually the set of squares in {(\mathbb Z/D\mathbb Z)^\times}, which is obviously a subgroup.

Now we show that other quadratic forms have genus equal to a coset of the principal genus. For {D \equiv 0 \pmod 4} with {D = -4n}, we can write (noting that {b} is even here)

\displaystyle  a(ax^2+bxy+cy^2) = (ax+b/2 y)^2 + ny^2

and thus the desired coset is shown to be {a^{-1} H}. As for {D \equiv 1 \pmod 4}, we have

\displaystyle  4a \cdot (ax^2+bxy+cy^2) = (2ax + by)^2 - Dy^2 \equiv (2ax+by)^2 \pmod D

so the desired coset is also {a^{-1} H}, since {H} was the set of squares. \Box

Thus every genus is a coset of {H} in {\ker(\chi_D)}, and so we may define:

Definition 20

We define the quotient group

\displaystyle  \text{Gen}(D) = \ker(\chi_D) / H

which is the set of all genera in discriminant {D}. One can view this as an abelian group by coset multiplication.

Thus there is a natural map

\displaystyle  \Phi_D : \text{Cl}(D) \twoheadrightarrow \text{Gen}(D).

(The map is surjective by Theorem 14.) We also remark that {\text{Gen}(D)} is quite well-behaved:

Proposition 21 (Structure of {\text{Gen}(D)})

The group {\text{Gen}(D)} is isomorphic to {(\mathbb Z/2\mathbb Z)^{\oplus m}} for some integer {m}.

Proof: Observe that {H} contains all the squares in {(\mathbb Z/D\mathbb Z)^\times}: if {f} is the principal form then {f(t,0) = t^2}. Thus each element of {\text{Gen}(D)} has order at most {2}, which implies the result since {\text{Gen}(D)} is a finite abelian group. \Box

In fact, one can compute the order of {\text{Gen}(D)} exactly, but for this post I will just state the result.

Theorem 22 (Order of {\text{Gen}(D)})

Let {D < 0} be a discriminant, and let {r} be the number of distinct odd primes which divide {D}. Define {\mu} by:

  • {\mu = r} if {D \equiv 1 \pmod 4}.
  • {\mu = r} if {D = -4n} and {n \equiv 3 \pmod 4}.
  • {\mu = r+1} if {D = -4n} and {n \equiv 1,2 \pmod 4}.
  • {\mu = r+1} if {D = -4n} and {n \equiv 4 \pmod 8}.
  • {\mu = r+2} if {D = -4n} and {n \equiv 0 \pmod 8}.

Then {\left\lvert \text{Gen}(D) \right\rvert = 2^{\mu-1}}.
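This is another statement that is pleasant to verify by machine: compute each genus directly as a set of residues, and compare the count against {2^{\mu-1}}. A sketch, reusing reduced_forms from the earlier snippet:

```python
from math import gcd

def genus(a, b, c, D):
    """The residues in (Z/DZ)^x represented by a x^2 + b x y + c y^2."""
    M = -D
    vals = {(a * x * x + b * x * y + c * y * y) % M
            for x in range(M) for y in range(M)}
    return frozenset(v for v in vals if gcd(v, M) == 1)

def mu(D):
    odd_primes = [q for q in range(3, -D + 1, 2)
                  if (-D) % q == 0 and all(q % t for t in range(2, q))]
    r = len(odd_primes)
    if D % 4 == 1:  # Python: -163 % 4 == 1, so this catches odd D
        return r
    n = -D // 4
    if n % 4 == 3:
        return r
    if n % 4 in (1, 2) or n % 8 == 4:
        return r + 1
    return r + 2  # n % 8 == 0

for D in (-20, -56, -108, -163):
    genera = {genus(a, b, c, D) for (a, b, c) in reduced_forms(D)}
    assert len(genera) == 2 ** (mu(D) - 1)
```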

6. Composition

We have already used once the nice identity

\displaystyle  (x^2+ny^2)(u^2+nv^2) = (xu \pm nyv)^2 + n(xv \mp yu)^2.

We are going to try and generalize this for any two quadratic forms in {\text{Cl}(D)}. Specifically,

Proposition 23 (Composition defines a group operation)

Let {f,g \in \text{Cl}(D)}. Then there is a unique {h \in \text{Cl}(D)} and bilinear forms {B_i(x,y,z,w) = a_ixz + b_ixw + c_iyz + d_iyw} for {i=1,2} such that

  • {f(x,y) g(z,w) = h(B_1(x,y,z,w), B_2(x,y,z,w))}.
  • {a_1b_2 - a_2b_1 = +f(1,0)}.
  • {a_1c_2 - a_2c_1 = +g(1,0)}.

In fact, without the latter two constraints we would instead have {a_1b_2 - a_2b_1 = \pm f(1,0)} and {a_1c_2 - a_2c_1 = \pm g(1,0)}, and each choice of signs would yield one of four (possibly different) forms. So requiring both signs to be positive makes this operation well-defined. (This is why we like proper equivalence; it gives us a well-defined group structure, whereas with improper equivalence it would be impossible to put a group structure on the forms above.)

Taking this for granted, we then have that

Theorem 24 (Form class group)

Let {D \equiv 0, 1 \pmod 4}, {D < 0} be a discriminant. Then {\text{Cl}(D)} becomes an abelian group under composition, where

  • The identity of {\text{Cl}(D)} is the principal form, and
  • The inverse of the form {ax^2+bxy+cy^2} is {ax^2-bxy+cy^2}.

This group is called the form class group.

We then have a group homomorphism

\displaystyle  \Phi_D : \text{Cl}(D) \twoheadrightarrow \text{Gen}(D).

Observe that {ax^2 + bxy + cy^2} and {ax^2 - bxy + cy^2} are inverses and that their {\Phi_D} images coincide (being improperly equivalent); this is consistent with the fact that every element of {\text{Gen}(D)} has order at most {2}. As another corollary, every genus contains the same number of classes, namely {h(D) / 2^{\mu-1}} (the fibers of {\Phi_D} are cosets of its kernel).

We now define:

Definition 25

An integer {n \ge 1} is convenient if the following equivalent conditions hold for {D = -4n}:

  • The principal form {x^2+ny^2} is the only reduced form in the principal genus.
  • {\Phi_D} is injective (hence an isomorphism).
  • {h(D) = 2^{\mu-1}}.

Thus we arrive at the following corollary:

Corollary 26 (Convenient numbers have nice representations)

Let {n \ge 1} be convenient, and let {p \nmid 4n} be an odd prime. Then {p} is of the form {x^2+ny^2} if and only if {p} lies in the principal genus.

Hence the representability depends only on {p \pmod{4n}}.

OEIS A000926 lists 65 convenient numbers. This sequence is known to be complete except for possibly one more number; moreover, the list is complete assuming the generalized Riemann Hypothesis.
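With the sketches above (reduced_forms and genus), testing whether a given {n} is convenient amounts to checking that each genus contains exactly one reduced form. This recovers the start of A000926:

```python
def is_convenient(n):
    D = -4 * n
    forms = reduced_forms(D)
    genera = {genus(a, b, c, D) for (a, b, c) in forms}
    return len(forms) == len(genera)  # one class per genus

print([n for n in range(1, 31) if is_convenient(n)])
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 16, 18, 21, 22, 24, 25, 28, 30]
```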

7. Cubic and quartic reciprocity

To treat the cases where {n} is not convenient, the correct thing to do is develop class field theory. However, we can still make a little bit more progress if we bring higher reciprocity theorems to bear: we’ll handle the cases {n=27} and {n=64}, two examples of numbers which are not convenient.

7.1. Cubic reciprocity

First, we prove that

Theorem 27 (On {p = x^2+27y^2})

A prime {p > 3} is of the form {x^2+27y^2} if and only if {p \equiv 1 \pmod 3} and {2} is a cubic residue modulo {p}.

To do this we use cubic reciprocity, which requires working in the Eisenstein integers {\mathbb Z[\omega]} where {\omega} is a cube root of unity. There are six units in {\mathbb Z[\omega]} (the sixth roots of unity), hence each nonzero number has six associates (differing by a unit), and the ring is in fact a PID.

Now if we let {\pi} be a prime not dividing {3}, and {\alpha} is coprime to {\pi}, then we can define the cubic Legendre symbol by setting

\displaystyle  \left( \frac{\alpha}{\pi} \right)_3 \equiv \alpha^{\frac13(N\pi-1)} \pmod \pi \in \left\{ 1, \omega, \omega^2 \right\}.

Moreover, we can define a primary prime {\pi \nmid 3} to be one such that {\pi \equiv -1 \pmod 3}; given any prime exactly one of the six associates is primary. We then have the following reciprocity theorem:

Theorem 28 (Cubic reciprocity)

If {\pi} and {\theta} are distinct (non-associate) primary primes in {\mathbb Z[\omega]} then

\displaystyle  \left( \frac{\pi}{\theta} \right)_3 = \left( \frac{\theta}{\pi} \right)_3.

We also have the following supplementary laws: if {\pi = (3m-1) + 3n\omega}, then

\displaystyle  \left( \frac{\omega}{\pi} \right)_3 = \omega^{m+n} \qquad\text{and}\qquad \left( \frac{1-\omega}{\pi} \right)_3 = \omega^{2m}.

The first supplementary law is for the unit (analogous to {(-1/p)}), while the second handles the prime divisors of {3 = -\omega^2(1-\omega)^2} (analogous to {(2/p)}).

We can tie this back into {\mathbb Z} as follows. If {p \equiv 1 \pmod 3} is a rational prime then it is represented by {x^2+xy+y^2}, and thus we can put {p = \pi \overline{\pi}} for some prime {\pi}, {N(\pi) = p}. Consequently, we have a natural isomorphism

\displaystyle  \mathbb Z[\omega] / \pi \mathbb Z[\omega] \cong \mathbb Z / p \mathbb Z.

Therefore, we see that a given {a \in (\mathbb Z/p\mathbb Z)^\times} is a cubic residue if and only if {(a/\pi)_3 = 1}.

In particular, we have the following corollary, which is all we will need:

Corollary 29 (When {2} is a cubic residue)

Let {p \equiv 1 \pmod 3} be a rational prime, {p > 3}. Write {p = \pi \overline{\pi}} with {\pi} primary. Then {2} is a cubic residue modulo {p} if and only if {\pi \equiv 1 \pmod 2}.

Proof: By cubic reciprocity:

\displaystyle  \left( \frac{2}{\pi} \right)_3 = \left( \frac{\pi}{2} \right)_3 \equiv \pi^{\frac13(N2-1)} \equiv \pi \pmod 2.

\Box

Now we give the proof of Theorem 27. Proof: First assume

\displaystyle  p = x^2+27y^2 = \left( x+3\sqrt{-3} y \right)\left( x-3\sqrt{-3} y \right).

Let {\pi = x + 3 \sqrt{-3} y = (x+3y) + 6y\omega}, which we may assume is primary (negating both {x} and {y} if necessary); note that {\pi \equiv 1 \pmod 2}. Now clearly {p \equiv 1 \pmod 3}, so we are done by Corollary 29.

For the converse, assume {p \equiv 1 \pmod 3}, {p = \pi \overline{\pi}} with {\pi} primary and {\pi \equiv 1 \pmod 2}. If we set {\pi = a + b\omega} for integers {a} and {b}, then the fact that {\pi \equiv 1 \pmod 2} and {\pi \equiv -1 \pmod 3} is enough to imply that {6 \mid b} (check it!). Moreover,

\displaystyle  p = a^2-ab+b^2 = \left( a - \frac{1}{2} b \right)^2 + 27 \left( \frac16b \right)^2

as desired. \Box
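Theorem 27 is also satisfying to check numerically, since both sides are easy to compute: for {p \equiv 1 \pmod 3}, {2} is a cubic residue modulo {p} if and only if {2^{(p-1)/3} \equiv 1 \pmod p}. A quick self-contained sketch:

```python
from math import isqrt

def is_x2_27y2(p):
    """Rough search for p = x^2 + 27 y^2."""
    for y in range(isqrt(p // 27) + 1):
        r = p - 27 * y * y
        if isqrt(r) ** 2 == r:
            return True
    return False

primes = [p for p in range(5, 5000)
          if all(p % q for q in range(2, isqrt(p) + 1))]
for p in primes:
    cubic = p % 3 == 1 and pow(2, (p - 1) // 3, p) == 1
    assert is_x2_27y2(p) == cubic
```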

7.2. Quartic reciprocity

This time we work in {\mathbb Z[i]}, for which there are four units {\pm 1}, {\pm i}. A prime {\pi} is primary if {\pi \equiv 1 \pmod{2+2i}}; every prime not dividing {2 = -i(1+i)^2} has a unique associate which is primary. Then we can as before define

\displaystyle  \alpha^{\frac14(N\pi-1)} \equiv \left( \frac{\alpha}{\pi} \right)_4 \pmod{\pi} \in \left\{ \pm 1, \pm i \right\}

where {\pi} is primary, and {\alpha} is nonzero mod {\pi}. As before, if {p \equiv 1 \pmod 4} and {p = \pi\overline{\pi}}, then {a} is a quartic residue modulo {p} if and only if {\left( a/\pi \right)_4 = 1}, thanks to the isomorphism

\displaystyle  \mathbb Z[i] / \pi \mathbb Z[i] \cong \mathbb Z / p \mathbb Z.

Now we have

Theorem 30 (Quartic reciprocity)

If {\pi} and {\theta} are distinct primary primes in {\mathbb Z[i]} then

\displaystyle  \left( \frac{\theta}{\pi} \right)_4 = \left( \frac{\pi}{\theta} \right)_4 (-1)^{\frac{1}{16}(N\theta-1)(N\pi-1)}.

We also have supplementary laws that state that if {\pi = a+bi} is primary, then

\displaystyle  \left( \frac{i}{\pi} \right)_4 = i^{-\frac{1}{2}(a-1)} \qquad\text{and}\qquad \left( \frac{1+i}{\pi} \right)_4 = i^{\frac14(a-b-b^2-1)}.

Again, the first law handles units, and the second law handles the prime divisors of {2}. The corollary we care about this time in fact uses only the supplementary laws:

Corollary 31 (When {2} is a quartic residue)

Let {p \equiv 1 \pmod 4} be a prime, and put {p = \pi\overline{\pi}} with {\pi = a+bi} primary. Then

\displaystyle  \left( \frac{2}{\pi} \right)_4 = i^{-b/2}

and in particular {2} is a quartic residue modulo {p} if and only if {b \equiv 0 \pmod 8}.

Proof: Note that {2 = i^3(1+i)^2}, so applying the above laws gives

\displaystyle  \left( \frac{2}{\pi} \right)_4 = \left( \frac{i}{\pi} \right)_4^3 \left( \frac{1+i}{\pi} \right)_4^2 = i^{-\frac32(a-1)} \cdot i^{\frac12(a-b-b^2-1)} = i^{-(a-1) - \frac{1}{2} b(b+1)}.

Now we assumed {a+bi} is primary. We claim that

\displaystyle  a - 1 + \frac{1}{2} b^2 \equiv 0 \pmod 4.

Since {(a+bi)-1} is divisible by {2+2i}, taking norms shows that {N(2+2i)=8} divides {(a-1)^2+b^2}. Thus

\displaystyle  2(a-1) + b^2 \equiv 2(a-1) + (a-1)^2 \equiv (a-1)(a-3) \equiv 0 \pmod 8

since {a} is odd and {b} is even. Finally,

\displaystyle  \left( \frac{2}{\pi} \right)_4 = i^{-(a-1) - \frac{1}{2} b(b+1)} = i^{-\frac{1}{2} b - \left( a-1+\frac{1}{2} b^2 \right)} = i^{-\frac{1}{2} b}.

\Box

From here we quickly deduce

Theorem 32 (On {p = x^2+64y^2})

If {p > 2} is prime, then {p = x^2+64y^2} if and only if {p \equiv 1 \pmod 4} and {2} is a quartic residue modulo {p}.
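As with Theorem 27, this can be spot-checked by machine: for {p \equiv 1 \pmod 4}, {2} is a quartic residue modulo {p} if and only if {2^{(p-1)/4} \equiv 1 \pmod p}. A sketch:

```python
from math import isqrt

def is_x2_64y2(p):
    """Rough search for p = x^2 + 64 y^2."""
    for y in range(isqrt(p // 64) + 1):
        r = p - 64 * y * y
        if isqrt(r) ** 2 == r:
            return True
    return False

primes = [p for p in range(3, 5000)
          if all(p % q for q in range(2, isqrt(p) + 1))]
for p in primes:
    quartic = p % 4 == 1 and pow(2, (p - 1) // 4, p) == 1
    assert is_x2_64y2(p) == quartic
```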

Some Thoughts on Olympiad Material Design

(This is a bit of a follow-up to the solution reading post last month. Spoiler warnings: USAMO 2014/6, USAMO 2012/2, TSTST 2016/4, and hints for ELMO 2013/1, IMO 2016/2.)

I want to say a little about the process which I use to design my olympiad handouts and classes these days (and thus by extension the way I personally think about problems). The short summary is that my teaching style is centered around showing connections and recurring themes between problems.

Now let me explain this in more detail.

1. Main ideas

Solutions to olympiad problems can look quite different from one another at a surface level, but typically they center around one or two main ideas, as I describe in my post on reading solutions. Because details are easy to work out once you have the main idea, as far as learning is concerned you can more or less throw away the details and pay most of your attention to main ideas.

Thus whenever I solve an olympiad problem, I make a deliberate effort to summarize the solution in a few sentences, such that I basically know how to do it from there. I also make a deliberate effort, whenever I write up a solution in my notes, to structure it so that my future self can see all the key ideas at a glance and thus be able to understand the general path of the solution immediately.

The example I’ve previously mentioned is USAMO 2014/6.

Example 1 (USAMO 2014, Gabriel Dospinescu)

Prove that there is a constant {c>0} with the following property: If {a, b, n} are positive integers such that {\gcd(a+i, b+j)>1} for all {i, j \in \{0, 1, \dots, n\}}, then

\displaystyle  \min\{a, b\}> (cn)^n.

If you look at any complete solution to the problem, you will see a lot of technical estimates involving {\zeta(2)} and the like. But the main idea is very simple: “consider an {N \times N} table of primes and note the small primes cannot adequately cover the board, since {\sum p^{-2} < \frac{1}{2}}”. Once you have this main idea the technical estimates are just the grunt work that you force yourself to do if you’re a contestant (and don’t do if you’re retired like me).

Thus the study of olympiad problems is reduced to the study of main ideas behind these problems.

2. Taxonomy

So how do we come up with the main ideas? Of course I won’t be able to answer this question completely, because therein lies most of the difficulty of olympiads.

But I have made some progress in this direction. It comes down to seeing how main ideas are similar to each other. I spend a lot of time trying to classify the main ideas into categories or themes, based on how similar they feel to one another. If I see one theme pop up over and over, then I can make it into a class.

I think olympiad taxonomy is severely underrated, and generally not done correctly. The status quo is that people do bucket sorts based on the particular technical details which are present in the problem. This is correlated with the main ideas, but the two do not always coincide.

An example where technical sort works okay is Euclidean geometry. Here is a simple example: harmonic bundles in projective geometry. As I explain in my book, there are a few “basic” configurations involved:

  • Midpoints and parallel lines
  • The Ceva / Menelaus configuration
  • Harmonic quadrilateral / symmedian configuration
  • Apollonian circle (right angle and bisectors)

(For a reference, see Lemmas 2, 4, 5 and Exercise 0 here.) Thus from experience, any time I see one of these pictures inside the current diagram, I think to myself that “this problem feels projective”; and if there is a way to do so I try to use harmonic bundles on it.

An example where technical sort fails is the “pigeonhole principle”. A typical problem in such a class looks something like USAMO 2012/2.

Example 2 (USAMO 2012, Gregory Galperin)

A circle is divided into congruent arcs by {432} points. The points are colored in four colors such that some {108} points are colored Red, some {108} points are colored Green, some {108} points are colored Blue, and the remaining {108} points are colored Yellow. Prove that one can choose three points of each color in such a way that the four triangles formed by the chosen points of the same color are congruent.

It’s true that the official solution uses the words “pigeonhole principle” but that is not really the heart of the matter; the key idea is that you consider all possible rotations and count the number of incidences. (In any case, such calculations are better done using expected value anyways.)

Now why is taxonomy a good thing for learning and teaching? The reason is that building connections and seeing similarities is most easily done by simultaneously presenting several related problems. I’ve actually mentioned this already in a different blog post, but let me give the demonstration again.

Suppose I wrote down the following:

\displaystyle  \begin{array}{lll} A1 & B11 & C8 \\ A9 & B44 & C27 \\ A49 & B33 & C343 \\ A16 & B99 & C1 \\ A25 & B22 & C125 \end{array}

You can tell what each of the {A}‘s, {B}‘s, {C}‘s have in common by looking for a few moments. But what happens if I intertwine them?

\displaystyle  \begin{array}{lllll} B11 & C27 & C343 & A1 & A9 \\ C125 & B33 & A49 & B44 & A25 \\ A16 & B99 & B22 & C8 & C1 \end{array}

This is the same information, but now you have to work much harder to notice the association between the letters and the numbers they’re next to.

This is why, if you are an olympiad student, I strongly encourage you to keep a journal or blog of the problems you’ve done. Solving olympiad problems takes lots of time and so it’s worth it to spend at least a few minutes jotting down the main ideas. And once you have enough of these, you can start to see new connections between problems you haven’t seen before, rather than being confined to thinking about individual problems in isolation. (Additionally, it means you will never have to redo problems whose solution you forgot — learn from my mistake here.)

3. Ten buckets of geometry

I want to elaborate more on geometry in general. These days, if I see a solution to a Euclidean geometry problem, then I mentally store the problem and solution into one (or more) buckets. I can even tell you what my buckets are:

  1. Direct angle chasing
  2. Power of a point / radical axis
  3. Homothety, similar triangles, ratios
  4. Recognizing some standard configuration (see Yufei for a list)
  5. Doing some length calculations
  6. Complex numbers
  7. Barycentric coordinates
  8. Inversion
  9. Harmonic bundles or pole/polar and homography
  10. Spiral similarity, Miquel points

which my dedicated fans probably recognize as the ten chapters of my textbook. (Problems may also fall in more than one bucket if for example they are difficult and require multiple key ideas, or if there are multiple solutions.)

Now whenever I see a new geometry problem, the diagram will often “feel” similar to problems in a certain bucket. Exactly what I mean by “feel” is hard to formalize — it’s a certain gut feeling that you pick up by doing enough examples. There are some things you can say, such as “problems which feature a central circle and feet of altitudes tend to fall in bucket 6”, or “problems which only involve incidence always fall in bucket 9”. But it seems hard to come up with an exhaustive list of hard rules that will do better than human intuition.

4. How do problems feel?

But as I said in my post on reading solutions, there are deeper lessons to teach than just technical details.

For examples of themes on opposite ends of the spectrum, let’s move on to combinatorics. Geometry is quite structured and so the themes in the main ideas tend to translate to specific theorems used in the solution. Combinatorics is much less structured and many of the themes I use in combinatorics cannot really be formalized. (Consequently, since everyone else seems to mostly teach technical themes, several of the combinatorics themes I teach are idiosyncratic, and to my knowledge are not taught by anyone else.)

For example, one of the unusual themes I teach is called Global. It’s about the idea that to solve a problem, you can just kind of “add up everything at once”, for example using linearity of expectation, or by double-counting, or whatever. In particular these kinds of approach ignore the “local” details of the problem. It’s hard to make this precise, so I’ll just give two recent examples.

Example 3 (ELMO 2013, Ray Li)

Let {a_1,a_2,\dots,a_9} be nine real numbers, not necessarily distinct, with average {m}. Let {A} denote the number of triples {1 \le i < j < k \le 9} for which {a_i + a_j + a_k \ge 3m}. What is the minimum possible value of {A}?

Example 4 (IMO 2016)

Find all integers {n} for which each cell of an {n \times n} table can be filled with one of the letters {I}, {M} and {O} in such a way that:

  • In each row and column, one third of the entries are {I}, one third are {M} and one third are {O}; and
  • in any diagonal, if the number of entries on the diagonal is a multiple of three, then one third of the entries are {I}, one third are {M} and one third are {O}.

If you look at the solutions to these problems, they have the same “feeling” of adding everything up, even though the specific techniques are somewhat different (double-counting for the former, diagonals modulo {3} for the latter). Nonetheless, my experience with problems similar to the former was immensely helpful for the latter, and it’s why I was able to solve the IMO problem.

5. Gaps

This perspective also explains why I’m relatively bad at functional equations. There are some things I can say that may be useful (see my handouts), but much of the time these are just technical tricks. (When sorting functional equations in my head, I have a bucket called “standard fare” meaning that you “just do work”; as far as I can tell this bucket is pretty useless.) I always feel stupid teaching functional equations, because I never have many good insights to say.

Part of the reason is that functional equations often don’t have a main idea at all. Consequently it’s hard for me to do useful taxonomy on them.

Then sometimes you run into something like the windmill problem, the solution of which is fairly “novel”, not being similar to problems that come up in training. I have yet to figure out a good way to train students to be able to solve windmill-like problems.

6. Surprise

I’ll close by mentioning one common way I come up with a theme.

Sometimes I will run across an olympiad problem {P} which I solve quickly, and think should be very easy, and yet once I start grading {P} I find that the scores are much lower than I expected. Since the way I solve problems is by drawing experience from similar previous problems, this must mean that I’ve subconsciously found a general framework to solve problems like {P}, which is not obvious to my students yet. So if I can put my finger on what that framework is, then I have something new to say.

The most recent example I can think of when this happened was TSTST 2016/4 which was given last June (and was also a very elegant problem, at least in my opinion).

Example 5 (TSTST 2016, Linus Hamilton)

Let {n > 1} be a positive integer. Prove that we must apply the Euler {\varphi} function at least {\log_3 n} times before reaching {1}.

I solved this problem very quickly when we were drafting the TSTST exam, figuring out the solution while walking to dinner. So I was quite surprised when I looked at the scores for the problem and found out that empirically it was not that easy.

After thinking about this, I came up with a tentative new idea. You see, when doing this problem I really was thinking about “what does this {\varphi} operation do?”. You can think of {n} as an infinite tuple

\displaystyle  \left(\nu_2(n), \nu_3(n), \nu_5(n), \nu_7(n), \dots \right)

of prime exponents. Then the {\varphi} can be thought of as an operation which takes each nonzero component, decreases it by one, and then adds some particular vector back. For example, if {\nu_7(n) > 0} then {\nu_7} is decreased by one and each of {\nu_2(n)} and {\nu_3(n)} are increased by one. In any case, if you look at this behavior for long enough you will see that the {\nu_2} coordinate is a natural way to “track time” in successive {\varphi} operations; once you figure this out, getting the bound of {\log_3 n} is quite natural. (Details left as exercise to reader.)
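If you want to experiment with this structure yourself, here is a minimal Python sketch: phi is a straightforward trial-division totient, and the final loop verifies the claimed {\log_3 n} bound for small {n}.

```python
import math

def phi(n):
    """Euler's totient, by trial-division factorization."""
    result, d = n, 2
    while d * d <= n:
        if n % d == 0:
            result -= result // d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        result -= result // n
    return result

def iterations_to_one(n):
    """How many times must phi be applied before n becomes 1?"""
    k = 0
    while n > 1:
        n, k = phi(n), k + 1
    return k

for n in range(2, 3000):
    assert iterations_to_one(n) >= math.log(n, 3)
```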

Now when I read through the solutions, I found that many of them had not really tried to think of the problem in such a “structured” way, and had tried to directly solve it by for example trying to prove {\varphi(n) \ge n/3} (which is false) or something similar to this. I realized that had the students just ignored the task “prove {n \le 3^k}” and spent some time getting a better understanding of the {\varphi} structure, they would have had a much better chance at solving the problem. Why had I known that structural thinking would be helpful? I couldn’t quite explain it, but it had something to do with the fact that the “main object” of the question was “set in stone”; there was no “degrees of freedom” in it, and it was concrete enough that I felt like I could understand it. Once I understood how multiple {\varphi} operations behaved, the bit about {\log_3 n} almost served as an “answer extraction” mechanism.

These thoughts led to the recent development of a class which I named Rigid, which is all about problems where the point is not to immediately try to prove what the question asks for, but to first step back and understand completely how a particular rigid structure (like the {\varphi} in this problem) behaves, and to then solve the problem using this understanding.

On Reading Solutions

(Ed Note: This was earlier posted under the incorrect title “On Designing Olympiad Training”. How I managed to mess that up is a long story involving some incompetence with Python scripts, but this is fixed now.)

Spoiler warnings: USAMO 2014/1, and hints for Putnam 2014 A4 and B2. You may want to work on these problems yourself before reading this post.

1. An Apology

At last year’s USA IMO training camp, I prepared a handout on writing/style for the students at MOP. One of the things I talked about was the “ocean-crossing point”, which for our purposes you can think of as the discrete jump from a problem being “essentially not solved” ({0+}) to “essentially solved” ({7-}). The name comes from a Scott Aaronson post:

Suppose your friend in Boston blindfolded you, drove you around for twenty minutes, then took the blindfold off and claimed you were now in Beijing. Yes, you do see Chinese signs and pagoda roofs, and no, you can’t immediately disprove him — but based on your knowledge of both cars and geography, isn’t it more likely you’re just in Chinatown? . . . We start in Boston, we end up in Beijing, and at no point is anything resembling an ocean ever crossed.

I then gave two examples of how to write a solution to the following example problem.

Problem 1 (USAMO 2014)

Let {a}, {b}, {c}, {d} be real numbers such that {b-d \ge 5} and all zeros {x_1}, {x_2}, {x_3}, and {x_4} of the polynomial {P(x)=x^4+ax^3+bx^2+cx+d} are real. Find the smallest value the product

\displaystyle  (x_1^2+1)(x_2^2+1)(x_3^2+1)(x_4^2+1)

can take.

Proof: (Not-so-good write-up) Since {x_j^2+1 = (x_j+i)(x_j-i)} for every {j=1,2,3,4} (where {i=\sqrt{-1}}), we get {\prod_{j=1}^4 (x_j^2+1) = \prod_{j=1}^4 (x_j+i)(x_j-i) = P(i)P(-i)} which equals to {|P(i)|^2 = (b-d-1)^2 + (a-c)^2}. If {x_1 = x_2 = x_3 = x_4 = 1} this is {16} and {b-d = 5}. Also, {b-d \ge 5}, this is {\ge 16}. \Box

Proof: (Better write-up) The answer is {16}. This can be achieved by taking {x_1 = x_2 = x_3 = x_4 = 1}, whence the product is {2^4 = 16}, and {b-d = 5}.

Now, we prove this is a lower bound. Let {i = \sqrt{-1}}. The key observation is that

\displaystyle  \prod_{j=1}^4 \left( x_j^2 + 1 \right) 		= \prod_{j=1}^4 (x_j - i)(x_j + i) 		= P(i)P(-i).

Consequently, we have

\displaystyle  \begin{aligned} \left( x_1^2 + 1 \right) \left( x_2^2 + 1 \right) \left( x_3^2 + 1 \right) \left( x_4^2 + 1 \right) &= (b-d-1)^2 + (a-c)^2 \\ &\ge (5-1)^2 + 0^2 = 16. \end{aligned}

This proves the lower bound. \Box

You’ll notice that it’s much easier to see the key idea in the second solution: namely,

\displaystyle  \prod_j (x_j^2+1) = P(i)P(-i) = (b-d-1)^2 + (a-c)^2

which allows you to use the enigmatic condition {b-d \ge 5}.
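If you want to convince yourself of the key identity numerically, here is a quick spot-check sketch (the coefficients {a}, {b}, {c}, {d} come from Vieta's formulas):

```python
import random

for _ in range(1000):
    xs = [random.uniform(-5, 5) for _ in range(4)]
    # Vieta for P(x) = x^4 + a x^3 + b x^2 + c x + d with roots xs:
    a = -sum(xs)
    b = sum(xs[i] * xs[j] for i in range(4) for j in range(i + 1, 4))
    c = -sum(xs[i] * xs[j] * xs[k] for i in range(4)
             for j in range(i + 1, 4) for k in range(j + 1, 4))
    d = xs[0] * xs[1] * xs[2] * xs[3]

    lhs = 1.0
    for x in xs:
        lhs *= x * x + 1
    rhs = (b - d - 1) ** 2 + (a - c) ** 2
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
```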

Unfortunately I have the following confession to make:

In practice, most solutions are written more like the first one than the second one.

The truth is that writing up solutions is sort of a chore that people never really want to do but have to — much like washing dishes. So most solutions won’t be written in a way that helps you learn from them. This means that when you read solutions, you should assume that the thing you really want (i.e., the ocean-crossing point) is buried somewhere amidst a haystack of other unimportant details.

2. Diff

But in practice even the “better write-up” I mentioned above still has too much information in it.

Suppose you were explaining how to solve this problem to a friend. You would probably not start your explanation by saying that the minimum is {16}, achieved by {x_1 = x_2 = x_3 = x_4 = 1} — even though this is indeed a logically necessary part of the solution. Instead, the first thing you would probably tell them is to notice that

\displaystyle  \prod_{j=1}^4 \left( x_j^2 + 1 \right) = P(i)P(-i) 	= (b-d-1)^2 + (a-c)^2 \ge 4^2 = 16.

In fact, if your friend has been working on the problem for more than ten minutes, this is probably the only thing you need to tell them. They probably already figured out by themselves that there was a good chance the answer would be {2^4 = 16}, just based on the condition {b-d \ge 5}. This “one-liner” is all that they need to finish the problem. You don’t need to spell out to them the rest of the details.

When you explain a problem to a friend in this way, you’re communicating just the difference: the one or two sentences such that your friend could work out the rest of the details themselves with these directions. When reading the solution yourself, you should try to extract the main idea in the same way. Olympiad problems generally have only a few main ideas in them, from which the rest of the details can be derived. So reading the solution should feel much like searching for a needle in a haystack.

3. Don’t Read Line by Line

In particular: you should rarely read most of the words in the solution, and you should almost never read every word of the solution.

Whenever I read solutions to problems I didn’t solve, I often read less than 10% of the words in the solution. Instead I search aggressively for the one or two sentences which tell me the key step that I couldn’t find myself. (Functional equations are the glaring exception to this rule, since in these problems there sometimes isn’t any main idea other than “stumble around randomly”, and the steps really are all about equally important. But this is rarer than you might guess.)

I think a common mistake students make is to treat the solution as a sequence of logical steps: that is, reading the solution line by line, and then verifying that each line follows from the previous ones. This seems to entirely miss the point, because not all lines are created equal, and most lines can be easily derived once you figure out the main idea.

If you find that the only way that you can understand the solution is reading it step by step, then the problem may simply be too hard for you. This is because what counts as “details” and “main ideas” are relative to the absolute difficulty of the problem. Here’s an example of what I mean: the solution to a USAMO 3/6 level geometry problem, call it {P}, might look as follows.

Proof: First, we prove lemma {L_1}. (Proof of {L_1}, which is USAMO 1/4 level.)

Then, we prove lemma {L_2}. (Proof of {L_2}, which is USAMO 1/4 level.)

Finally, we remark that putting together {L_1} and {L_2} solves the problem. \Box

Likely the main difficulty of {P} is actually finding {L_1} and {L_2}. So a very experienced student might think of the sub-proofs {L_i} as “easy details”. But younger students might find {L_i} challenging in their own right, and be unable to solve the problem even after being told what the lemmas are: which is why it is hard for them to tell that {\{L_1, L_2\}} were the main ideas to begin with. In that case, the problem {P} is probably way over their head.

This is also why it doesn’t make sense to read solutions to problems which you have not worked on at all — there are often details, natural steps and notation, et cetera which are obvious to you if and only if you have actually tried the problem for a little while yourself.

4. Reflection

The earlier sections describe how to extract the main idea of an olympiad solution. This is neat because instead of having to remember an entire solution, you only need to remember a few sentences now, and it gives you a good understanding of the solution at hand.

But this still isn’t achieving your ultimate goal in learning: you are trying to maximize your scores on future problems. Unless you are extremely fortunate, you will probably never see the exact same problem on an exam again.

So one question you should often ask is:

“How could I have thought of that?”

(Or in my case, “how could I train a student to think of this?”.)

There are probably some surface-level skills that you can pick out of this. The lowest hanging fruit is things that are technical. A small number of examples, with varying amounts of depth:

  • This problem is “purely projective”, so we can take a projective transformation!
  • This problem had a segment {AB} with midpoint {M}, and a line {\ell} parallel to {AB}, so I should consider projecting {(AB;M\infty)} through a point on {\ell}.
  • Drawing a grid of primes is the only real idea in this problem, and the rest of it is just calculations.
  • This main claim is easy to guess since in some small cases, the frogs have “violating points” in a large circle.
  • In this problem there are {n} numbers on a circle, {n} odd. The counterexamples for {n} even alternate up and down, which motivates proving that no three consecutive numbers are in sorted order.
  • This is a juggling problem!

(Brownie points if any contest enthusiasts can figure out which problems I’m talking about in this list!)

5. Learn Philosophy, not Formalism

But now I want to point out that the best answers to the above question are often not formalizable. Lists of triggers and actions are “cheap forms of understanding”, because going through a list of methods will only get so far.

On the other hand, the un-formalizable philosophy that you can extract from reading a question is part of that legendary “intuition” that people are always talking about: you can’t describe it in words, but it’s certainly there. Maybe it would even be better if I reframed the question as:

“What does this problem feel like?”

So let’s talk about our feelings. Here is David Yang’s take on it:

Whenever you see a problem you really like, store it (and the solution) in your mind like a cherished memory . . . The point of this is that you will see problems which will remind you of that problem despite having no obvious relation. You will not be able to say concretely what the relation is, but think a lot about it and give a name to the common aspect of the two problems. Eventually, you will see new problems for which you feel like could also be described by that name.

Do this enough, and you will have a very powerful intuition that cannot be described easily concretely (and in particular, that nobody else will have).

This itself doesn’t make sense without an example, so here is an example of one philosophy I’ve developed. Here are two problems on Putnam 2014:

Problem 2 (Putnam 2014 A4)

Suppose {X} is a random variable that takes on only nonnegative integer values, with {\mathbb E[X] = 1}, {\mathbb E[X^2] = 2}, and {\mathbb E[X^3] = 5}. Determine the smallest possible value of the probability of the event {X=0}.

Problem 3 (Putnam 2014 B2)

Suppose that {f} is a function on the interval {[1,3]} such that {-1\le f(x)\le 1} for all {x} and

\displaystyle  \int_1^3 f(x) \; dx=0.

How large can {\int_1^3 \frac{f(x)}{x} \; dx} be?

At a glance there seems to be nearly no connection between these problems. One of them is a combinatorics/algebra question, and the other is an integral. Moreover, if you read the official solutions or even my own write-ups, you will find very little in common joining them.

Yet it turns out that these two problems do have something in common to me, which I’ll try to describe below. My thought process in solving either question went as follows:

In both problems, I was able to quickly make a good guess as to what the optimal {X}/{f} was, and then come up with a heuristic explanation (not a proof) why that guess had to be correct, namely, “by smoothing, you should put all the weight on the left”. Let me call this optimal choice {A}.

That conjectured {A} gave a numerical answer to the actual problem: but for both of these problems, it turns out that numerical answer is completely uninteresting, as are the exact details of {A}. It should philosophically be interpreted as “this is the number that happens to pop out when you plug in the optimal choice”. And indeed that’s what both solutions feel like. These solutions don’t actually care what the exact values of {A} are; they only care about the properties that made me think they were optimal in the first place.

I gave this philosophy the name Equality, with poster description “problems where looking at the equality case is important”. This text description feels more or less useless to me; I suppose it’s the thought that counts. But ever since I came up with this name, it has helped me solve new problems that come up, because they would give me the same feeling that these two problems did.

Two more examples of these themes that I’ve come up with are Global and Rigid, which will be described in a future post on how I design training materials.

Holomorphic Logarithms and Roots

In this post we’ll make sense of a holomorphic square root and logarithm. Wrote this up because I was surprised how hard it was to find a decent complete explanation.

Let {f : U \rightarrow \mathbb C} be a holomorphic function. A holomorphic {n}th root of {f} is a function {g : U \rightarrow \mathbb C} such that {f(z) = g(z)^n} for all {z \in U}. A logarithm of {f} is a function {g : U \rightarrow \mathbb C} such that {f(z) = e^{g(z)}} for all {z \in U}. The main question we’ll try to figure out is: when do these exist? In particular, what if {f = \mathrm{id}}?

1. Motivation: Square Root of a Complex Number

To start us off, can we define {\sqrt z} for any complex number {z}?

The first obvious problem that comes up is that for any {z}, there are two numbers {w} such that {w^2 = z}. How can we pick one to use? For our ordinary square root function, we had a notion of “positive”, and so we simply took the positive root.

Let’s expand on this: given { z = r \left( \cos\theta + i \sin\theta \right) } (here {r \ge 0}) we should take the root to be

\displaystyle w = \sqrt{r} \left( \cos \alpha + i \sin \alpha \right)

such that {2\alpha \equiv \theta \pmod{2\pi}}; there are two choices for {\alpha \pmod{2\pi}}, differing by {\pi}.

For complex numbers, we don’t have an obvious way to pick {\alpha}. Nonetheless, perhaps we can also get away with an arbitrary distinction: let’s see what happens if we just choose the {\alpha} with {-\frac{1}{2}\pi \le \alpha < \frac{1}{2}\pi}.

Pictured below are some points (in red) and their images (in blue) under this “upper-half” square root. The condition on {\alpha} means we are forcing the blue points to lie on the right-half plane.

holomorphic-log-1

Here, {w_i^2 = z_i} for each {i}, and we are constraining the {w_i} to lie in the right half of the complex plane. We see there is an obvious issue: there is a big discontinuity near the points {z_5} and {z_7}! The nearby point {w_6} has been mapped very far away. This discontinuity occurs since the points on the negative real axis are at the “boundary”. For example, given {-4}, we send it to {-2i}, but we have hit the boundary: in our interval {-\frac{1}{2}\pi \le \alpha < \frac{1}{2}\pi}, we are at the very left edge.

The negative real axis that we must not touch is what we will later call a branch cut, but for now I call it a ray of death. It is a warning to the red points: if you cross this line, you will die! However, if we move the red circle just a little upwards (so that it misses the negative real axis) this issue is avoided entirely, and we get what seems to be a “nice” square root.

holomorphic-log-2

In fact, the ray of death is fairly arbitrary: it is the set of “boundary issues” that arose when we picked {-\frac{1}{2}\pi < \alpha \le \frac{1}{2}\pi}. Suppose we instead insisted on the interval {0 \le \alpha < \pi}; then the ray of death would be the positive real axis instead. The earlier circle we had now works just fine.

holomorphic-log-3

What we see is that picking a particular {\alpha}-interval leads to a different set of edge cases, and hence a different ray of death. The only thing these rays have in common is their starting point of zero. In other words, given a red circle and a restriction of {\alpha}, I can make a nice “square rooted” blue circle as long as the ray of death misses it.
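To make this concrete, here is a small Python sketch; branch_sqrt is a made-up helper that returns the root whose angle lies in a chosen half-turn interval, so that changing the interval moves the ray of death.

```python
import cmath

def branch_sqrt(z, alpha_min):
    """The square root of z whose angle lies in [alpha_min, alpha_min + pi).
    alpha_min = -pi/2 puts the ray of death on the negative real axis;
    alpha_min = 0 moves it to the positive real axis."""
    r, theta = cmath.polar(z)      # theta in (-pi, pi]
    alpha = theta / 2              # one root; the other is alpha + pi
    while alpha < alpha_min:
        alpha += cmath.pi
    while alpha >= alpha_min + cmath.pi:
        alpha -= cmath.pi
    return cmath.rect(r ** 0.5, alpha)

# Crossing the negative real axis jumps between (roughly) 2i and -2i:
print(branch_sqrt(-4 + 0.01j, -cmath.pi / 2))  # ~ 2i
print(branch_sqrt(-4 - 0.01j, -cmath.pi / 2))  # ~ -2i
# With the ray of death on the positive axis, the same crossing is continuous:
print(branch_sqrt(-4 + 0.01j, 0))  # ~ 2i
print(branch_sqrt(-4 - 0.01j, 0))  # ~ 2i
```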

So, what exactly is going on?

2. Square Roots of Holomorphic Functions

To get a picture of what’s happening, we would like to consider a more general problem: let {f: U \rightarrow \mathbb C} be holomorphic. Then we want to decide whether there is a {g : U \rightarrow \mathbb C} such that

\displaystyle f(z) = g(z)^2.

Our previous discussion when {f = \mathrm{id}} tells us we cannot hope to achieve this for {U = \mathbb C}; there is a “half-ray” which causes problems. However, there are certainly functions {f : \mathbb C \rightarrow \mathbb C} such that a {g} exists. As a simplest example, {f(z) = z^2} should definitely have a square root!

Now let’s see if we can fudge together a square root. Earlier, what we did was try to specify a rule to force one of the two choices at each point. This is unnecessarily strict. Perhaps we can do something like the following: start at a point {z_0 \in U}, pick a square root {w_0} of {f(z_0)}, and then try to “fudge” from there the square roots of the other points. What do I mean by fudge? Well, suppose {z_1} is a point very close to {z_0}, and we want to pick a square root {w_1} of {f(z_1)}. While there are two choices, we also would expect {w_1} to be close to {w_0}. Unless we are highly unlucky, this should tell us which choice of {w_1} to pick. (Stupid concrete example: if I have taken the square root {-4.12i} of {-17} and then ask you to continue this square root to {-16}, which sign should you pick for {\pm 4i}?)

There are two possible ways we could get unlucky in the scheme above: first, if {w_0 = 0}, then we’re sunk. But even if we avoid that, we have to worry that we might run around a full loop in the complex plane and find that our continuous perturbation has left us in a different place than where we started. For concreteness, consider the following situation, again with {f = \mathrm{id}}:

holomorphic-log-4

We started at the point {z_0}, with one of its square roots as {w_0}. We then wound a full red circle around the origin, only to find that at the end of it, the blue arc is at a different place than where it started!

The interval construction from earlier doesn’t work either: no matter how we pick the interval for {\alpha}, any ray of death must hit our red circle. The problem somehow lies with the fact that we have enclosed the very special point {0}.

Nevertheless, we know that if we take {f(z) = z^2}, then we don’t run into any problems with our “make it up as you go” procedure. So, what exactly is going on?

3. Covering Projections

By now, if you have seen some algebraic topology, this should all seem strangely familiar. The “fudging” procedure exactly describes the idea of a lifting.

More precisely, recall that there is a covering projection

\displaystyle (-)^2 : \mathbb C \setminus \{0\} \rightarrow \mathbb C \setminus \{0\}.

Let {V = \left\{ z \in U \mid f(z) \neq 0 \right\}}. For {z \in U \setminus V}, we already have the square root {g(z) = \sqrt{f(z)} = \sqrt 0 = 0}. So the burden is completing {g : V \rightarrow \mathbb C}.

Then essentially, what we are trying to do is construct a lifting {g} for the following diagram:

cproj-square

Our map {p} can be described as “winding around twice”. From algebraic topology, we now know that this lifting exists if and only if

\displaystyle f_\ast(\pi_1(V)) \subseteq p_\ast(\pi_1(E)),

i.e. the image of {\pi_1(V)} under {f} lies inside the image of {\pi_1(E)} under {p}. Since {B} and {E} are both punctured planes, each is homotopy equivalent to {S^1}, so {\pi_1(B) \cong \pi_1(E) \cong \mathbb Z}.

Ques 1

Show that the image of {p_\ast} is exactly {2\mathbb Z} once we identify {\pi_1(B) \cong \mathbb Z}.

That means that for any loop {\gamma} in {V}, we need {f \circ \gamma} to have an even winding number around {0 \in B}. This amounts to

\displaystyle \frac{1}{2\pi i} \oint_\gamma \frac{f'}{f} \; dz \in 2\mathbb Z

since {f} has no poles.

Replacing {2} with {n} and carrying over the discussion gives the first main result.

Theorem 2 (Existence of Holomorphic {n}th Roots)

Let {f : U \rightarrow \mathbb C} be holomorphic. Then {f} has a holomorphic {n}th root if and only if

\displaystyle \frac{1}{2\pi i}\oint_\gamma \frac{f'}{f} \; dz \in n\mathbb Z

for every contour {\gamma} in {U} (not passing through a zero of {f}).
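Here is a quick numerical sanity check of this criterion (a Python sketch; the crude Riemann sum and the helper winding_integral are my own):

```python
import cmath

def winding_integral(f, fprime, radius=1.0, samples=20000):
    """Riemann-sum approximation of (1/(2*pi*i)) times the contour
    integral of f'/f over the circle |z| = radius."""
    total = 0.0
    for k in range(samples):
        z = radius * cmath.exp(2j * cmath.pi * k / samples)
        dz = 2j * cmath.pi * z / samples   # z'(t) dt for this parametrization
        total += fprime(z) / f(z) * dz
    return total / (2j * cmath.pi)

print(winding_integral(lambda z: z * z, lambda z: 2 * z))  # ~2, in 2Z
print(winding_integral(lambda z: z, lambda z: 1))          # ~1, not in 2Z
```

The first output is (up to numerical error) {2 \in 2\mathbb Z}, matching the fact that {z^2} has an obvious square root; the second is {1}, detecting the obstruction for {f = \mathrm{id}} around the origin.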

4. Complex Logarithms

The multivalued nature of the complex logarithm comes from the fact that

\displaystyle \exp(z+2\pi i) = \exp(z).

So if {e^w = z}, then {w + 2\pi i k} is also a solution for every integer {k}.

We can handle this in the same way as before: it amounts to a lifting of the following diagram.

cproj-exp

There is no longer a need to work with a separate {V} since:

Ques 3

Show that if {f} has any zeros, then {g} can’t possibly exist.

In fact, the map {\exp : \mathbb C \rightarrow \mathbb C\setminus\{0\}} is a universal cover, since {\mathbb C} is simply connected. Thus, {p_\ast(\pi_1(\mathbb C))} is trivial. So in addition to {f} being zero-free, {f \circ \gamma} must have winding number zero around {0 \in B} for every loop {\gamma}. In other words:

Theorem 4 (Existence of Logarithms)

Let {f : U \rightarrow \mathbb C} be holomorphic. Then {f} has a logarithm if and only if

\displaystyle \frac{1}{2\pi i}\oint_\gamma \frac{f'}{f} \; dz = 0

for every contour {\gamma} in {U}.

5. Some Special Cases

The most common special case is

Corollary 5 (Nonvanishing Functions from Simply Connected Domains)

Let {f : \Omega \rightarrow \mathbb C} be holomorphic, where {\Omega} is simply connected. If {f(z) \neq 0} for every {z \in \Omega}, then {f} has both a logarithm and a holomorphic {n}th root for every {n}.

Finally, let’s return to the question of {f = \mathrm{id}} from the very beginning. What’s the best domain {U} such that we can define {\sqrt{-} : U \rightarrow \mathbb C}? Clearly {U = \mathbb C} cannot be made to work, but we can do almost as well. For note that the only zero of {f = \mathrm{id}} is at the origin. Thus if we want to make a logarithm exist, all we have to do is make an incision in the complex plane that renders it impossible to make a loop around the origin. The usual choice is to delete the negative half of the real axis, our very first ray of death; we call this a branch cut, with branch point at {0 \in \mathbb C} (the point which we cannot circle around). This gives

Theorem 6 (Branch Cut Functions)

There exist holomorphic functions

\displaystyle \begin{aligned} \log &: \mathbb C \setminus (-\infty, 0] \rightarrow \mathbb C \\ \sqrt[n]{-} &: \mathbb C \setminus (-\infty, 0] \rightarrow \mathbb C \end{aligned}

satisfying the obvious properties.

There are many possible choices of such functions ({n} choices for the {n}th root and infinitely many for {\log}); a choice of such a function is called a branch. So this is what is meant by a “branch” of a logarithm.

The principal branch is the “canonical” branch, analogous to the way we arbitrarily pick the positive branch to define {\sqrt{-} : \mathbb R_{\ge 0} \rightarrow \mathbb R_{\ge 0}}. For {\log}, we take the {w} such that {e^w = z} and the imaginary part of {w} lies in {(-\pi, \pi]} (since we can shift by integer multiples of {2\pi i}). Often, authors will write {\text{Log } z} to emphasize this choice.
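For what it’s worth, Python’s built-in cmath.log already implements this principal branch (imaginary part in {(-\pi, \pi]}, branch cut along the negative reals), so a principal {n}th root can be sketched as follows (principal_nth_root is my own helper name):

```python
import cmath

def principal_nth_root(z, n):
    """Principal nth root on C minus the ray (-inf, 0], via the principal Log."""
    return cmath.exp(cmath.log(z) / n)   # cmath.log has imaginary part in (-pi, pi]

print(principal_nth_root(4, 2))          # (2+0j), matching the usual square root
print(cmath.log(-1 + 1e-9j))             # imaginary part ~ +pi: just above the cut
print(cmath.log(-1 - 1e-9j))             # imaginary part ~ -pi: just below the cut
```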

Example 7

Let {U} be the complex plane minus the real interval {[0,1]}. Then the function {U \rightarrow \mathbb C} by {z \mapsto z(z-1)} has a holomorphic square root. (Indeed, here {\frac{f'}{f} = \frac 1z + \frac{1}{z-1}}, and any loop in {U} has equal winding numbers around {0} and {1}, so the integral in Theorem 2 is always an even integer.)

Corollary 8

A holomorphic function {f : U \rightarrow \mathbb C} has a holomorphic {n}th root for all {n \ge 1} if and only if it has a holomorphic logarithm.

Facts about Lie Groups and Algebras

In Spring 2016 I was taking 18.757 Representations of Lie Algebras. Since I knew next to nothing about either Lie groups or algebras, I was forced to quickly learn about their basic facts and properties. These are the notes that I wrote up accordingly. Proofs of most of these facts can be found in standard textbooks, for example Kirillov.

1. Lie groups

Let {K = \mathbb R} or {K = \mathbb C}, depending on taste.

Definition 1

A Lie group is a group {G} which is also a {K}-manifold; the multiplication maps {G \times G \rightarrow G} (by {(g_1, g_2) \mapsto g_1g_2}) and the inversion map {G \rightarrow G} (by {g \mapsto g^{-1}}) are required to be smooth.

A morphism of Lie groups is a map which is both a map of manifolds and a group homomorphism.

Throughout, we will let {e \in G} denote the identity, or {e_G} if we need further emphasis.

Note that in particular, every group {G} can be made into a Lie group by endowing it with the discrete topology. This is silly, so we usually restrict our focus to connected groups:

Proposition 2 (Reduction to connected Lie groups)

Let {G} be a Lie group and {G^0} the connected component of {G} which contains {e}. Then {G^0} is a normal subgroup, itself a Lie group, and the quotient {G/G^0} has the discrete topology.

In fact, we can also reduce this to the study of simply connected Lie groups as follows.

Proposition 3 (Reduction to simply connected Lie groups)

If {G} is connected, let {\pi : \widetilde G \rightarrow G} be its universal cover. Then {\widetilde G} is a Lie group, {\pi} is a morphism of Lie groups, and {\ker \pi \cong \pi_1(G)}.

Here are some examples of Lie groups.

Example 4 (Examples of Lie groups)

  • {\mathbb R} under addition is a real one-dimensional Lie group.
  • {\mathbb C} under addition is a complex one-dimensional Lie group (and a two-dimensional real Lie group)!
  • The unit circle {S^1 \subseteq \mathbb C} is a real Lie group under multiplication.
  • {\text{GL }(n, K) \subset K^{\oplus n^2}} is a Lie group of dimension {n^2}. This example becomes important for representation theory: a representation of a Lie group {G} is a morphism of Lie groups {G \rightarrow \text{GL }(n, K)}.
  • {\text{SL }(n, K) \subset \text{GL }(n, K)} is a Lie group of dimension {n^2-1}.

As geometric objects, Lie groups {G} enjoy a huge amount of symmetry. For example, any neighborhood {U} of {e} can be “copied over” to any other point {g \in G} via left translation, giving the neighborhood {gU} of {g}. There is another theorem worth noting, which is that:

Proposition 5

If {G} is a connected Lie group and {U} is a neighborhood of the identity {e \in G}, then {U} generates {G} as a group.

2. Haar measure

Recall the following result and its proof from representation theory:

Claim 6

For any finite group {G}, {\mathbb C[G]} is semisimple; all finite-dimensional representations decompose into irreducibles.

Proof: Take a representation {V} and equip it with an arbitrary inner form {\left< -,-\right>_0}. Then we can average it to obtain a new inner form

\displaystyle \left< v, w \right> = \frac{1}{|G|} \sum_{g \in G} \left< gv, gw \right>_0,

which is {G}-invariant. Thus given a subrepresentation {W \subseteq V} we can just take its orthogonal complement to decompose {V}. \Box
We would like to repeat this type of proof with Lie groups. In this case the notion {\sum_{g \in G}} doesn’t make sense, so we want to replace it with an integral {\int_{g \in G}} instead. In order to do this we use the following:

Theorem 7 (Haar measure)

Let {G} be a Lie group. Then there exists a unique Radon measure {\mu} (up to scaling) on {G} which is left-invariant, meaning

\displaystyle \mu(g \cdot S) = \mu(S)

for any Borel subset {S \subseteq G} and “translate” {g \in G}. This measure is called the (left) Haar measure.

Example 8 (Examples of Haar measures)

  • The Haar measure on {(\mathbb R, +)} is the standard Lebesgue measure which assigns {1} to the closed interval {[0,1]}. Of course for any {S}, {\mu(a+S) = \mu(S)} for {a \in \mathbb R}.
  • The Haar measure on {(\mathbb R \setminus \{0\}, \times)} is given by

    \displaystyle \mu(S) = \int_S \frac{1}{|t|} \; dt.

    In particular, {\mu([a,b]) = \log(b/a)} for {0 < a < b}. One sees the invariance under multiplication of these intervals.

  • Let {G = \text{GL }(n, \mathbb R)}. Then a Haar measure is given by

    \displaystyle \mu(S) = \int_S |\det(X)|^{-n} \; dX.

  • For the circle group {S^1}, consider {S \subseteq S^1}. We can define

    \displaystyle \mu(S) = \frac{1}{2\pi} \int_S d\varphi

    across complex arguments {\varphi}. The normalization factor of {2\pi} ensures {\mu(S^1) = 1}.
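As a numerical sanity check of the second bullet, here is a Python sketch (my own crude Riemann sum) exhibiting the invariance of {\int_S \frac{dt}{|t|}} under scaling:

```python
def mu(a, b, samples=200000):
    """Riemann-sum approximation of the integral of dt/|t| over [a, b],
    assuming 0 < a < b."""
    h = (b - a) / samples
    return sum(h / (a + (k + 0.5) * h) for k in range(samples))

# Invariance under multiplication: mu([a, b]) == mu([c*a, c*b]) for c > 0.
print(mu(1, 2), mu(5, 10), mu(0.25, 0.5))   # all approximately log(2) = 0.6931...
```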

Note that we have:

Corollary 9

If the Lie group {G} is compact, there is a unique Haar measure with {\mu(G) = 1}.

This follows by just noting that a Radon measure on a compact space {X} satisfies {\mu(X) < \infty}. This now lets us deduce that

Corollary 10 (Compact Lie groups are semisimple)

{\mathbb C[G]} is semisimple for any compact Lie group {G}.

Indeed, we can now consider

\displaystyle \left< v,w\right> = \int_G \left< g \cdot v, g \cdot w\right>_0 \; dg

as we described at the beginning.

3. The tangent space at the identity

In light of the previous comment about neighborhoods of {e} generating {G}, we see that to get some information about the entire Lie group it actually suffices to just get “local” information of {G} at the point {e} (this is one formalization of the fact that Lie groups are highly symmetric).

To do this one idea is to look at the tangent space. Let {G} be an {n}-dimensional Lie group (over {K}) and consider {\mathfrak g = T_eG} the tangent space to {G} at the identity {e \in G}. Naturally, this is a {K}-vector space of dimension {n}. We call it the Lie algebra associated to {G}.

Example 11 (Lie algebras corresponding to Lie groups)

  • {(\mathbb R, +)} has a real Lie algebra isomorphic to {\mathbb R}.
  • {(\mathbb C, +)} has a complex Lie algebra isomorphic to {\mathbb C}.
  • The unit circle {S^1 \subseteq \mathbb C} has a real Lie algebra isomorphic to {\mathbb R}, which we think of as the “tangent line” at the point {1 \in S^1}.

Example 12 ({\mathfrak{gl}(n, K)})

Let’s consider {\text{GL }(n, K) \subset K^{\oplus n^2}}, an open subset of {K^{\oplus n^2}}. Its tangent space should just be an {n^2}-dimensional {K}-vector space. By identifying the components in the obvious way, we can think of this Lie algebra as just the set of all {n \times n} matrices.

This Lie algebra goes by the notation {\mathfrak{gl}(n, K)}.

Example 13 ({\mathfrak{sl}(n, K)})

Recall {\text{SL }(n, K) \subset \text{GL }(n, K)} is a Lie group of dimension {n^2-1}, hence its Lie algebra should have dimension {n^2-1}. To see what it is, let’s look at the special case {n=2} first: then

\displaystyle \text{SL }(2, K) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mid ad - bc = 1 \right\}.

Viewing this as the level surface {f(a,b,c,d) = 1} of the polynomial {f(a,b,c,d) = ad-bc} in {K^{\oplus 4}}, we compute

\displaystyle \nabla f = \left< d, -c, -b, a \right>

and in particular the tangent space to the identity matrix {\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}} is given by the orthogonal complement of the gradient

\displaystyle \nabla f (1,0,0,1) = \left< 1, 0, 0, 1 \right>.

Hence the tangent plane can be identified with matrices satisfying {a+d=0}. In other words, we see

\displaystyle \mathfrak{sl}(2, K) = \left\{ T \in \mathfrak{gl}(2, K) \mid \text{Tr } T = 0 \right\}.

By repeating this example in greater generality, we discover

\displaystyle \mathfrak{sl}(n, K) = \left\{ T \in \mathfrak{gl}(n, K) \mid \text{Tr } T = 0 \right\}.

4. The exponential map

Right now, {\mathfrak g} is just a vector space. However, by using the group structure we can get a map from {\mathfrak g} back into {G}. The trick is “differential equations”:

Proposition 14 (Differential equations for Lie theorists)

Let {G} be a Lie group over {K} and {\mathfrak g} its Lie algebra. Then for every {x \in \mathfrak g} there is a unique homomorphism

\displaystyle \gamma_x : K \rightarrow G

which is a morphism of Lie groups, such that

\displaystyle \gamma_x'(0) = x \in T_eG = \mathfrak g.

We will write {\gamma_x(t)} to emphasize the argument {t \in K} being thought of as “time”. Thus this proposition should be intuitively clear: the theory of differential equations guarantees that {\gamma_x} is defined and unique in a small neighborhood of {0 \in K}. Then, the group structure allows us to extend {\gamma_x} uniquely to the rest of {K}, giving a trajectory across all of {G}. This is sometimes called a one-parameter subgroup of {G}, but we won’t use this terminology anywhere in what follows.

This lets us define:

Definition 15

Retain the setting of the previous proposition. Then the exponential map is defined by

\displaystyle \exp : \mathfrak g \rightarrow G \qquad\text{by}\qquad x \mapsto \gamma_x(1).

The exponential map gets its name from the fact that for the multiplicative examples discussed before, it is actually just the map {e^\bullet}. Note that below, {e^T = \sum_{k \ge 0} \frac{T^k}{k!}} for a matrix {T}; this is called the matrix exponential.

Example 16 (Exponential Maps of Lie algebras)

  • If {G = \mathbb R} (under addition), then {\mathfrak g = \mathbb R} too. The morphism of Lie groups {\gamma_x : \mathbb R \rightarrow G} with {\gamma_x'(0) = x} is {\gamma_x(t) = tx} (where {t \in \mathbb R}). Hence

    \displaystyle \exp : \mathbb R \rightarrow \underbrace{\mathbb R}_{=G} \qquad \exp(x) = \gamma_x(1) = x \in \mathbb R = G,

    i.e. here {\exp} is just the identity map.

  • Ditto for {\mathbb C}.
  • For {S^1} and {x \in \mathbb R}, the map {\gamma_x : \mathbb R \rightarrow S^1} given by {t \mapsto e^{itx}} works. Hence

\displaystyle \exp : \mathbb R \rightarrow S^1 \qquad \exp(x) = \gamma_x(1) = e^{ix} \in S^1.

  • For {\text{GL }(n, K)}, the map {\gamma_X : K \rightarrow \text{GL }(n, K)} given by {t \mapsto e^{tX}} works nicely (now {X} is a matrix). (Note that we have to check {e^{tX}} is actually invertible for this map to be well-defined.) Hence the exponential map is given by

    \displaystyle \exp : \mathfrak{gl}(n,K) \rightarrow \text{GL }(n,K) \qquad \exp(X) = \gamma_X(1) = e^X \in \text{GL }(n, K).

  • Similarly,

    \displaystyle \exp : \mathfrak{sl}(n,K) \rightarrow \text{SL }(n,K) \qquad \exp(X) = \gamma_X(1) = e^X \in \text{SL }(n, K).

    Here we had to check that if {X \in \mathfrak{sl}(n,K)}, meaning {\text{Tr } X = 0}, then {\det(e^X) = 1}. This can be seen by writing {X} in an upper triangular basis.
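That last check also follows from the general identity {\det(e^X) = e^{\text{Tr } X}}, which is easy to test numerically. A quick Python sketch (assuming numpy and scipy are available):

```python
import numpy as np
from scipy.linalg import expm   # the matrix exponential e^X

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
X -= (np.trace(X) / 3) * np.eye(3)   # make X traceless, so X lies in sl(3, R)

print(np.trace(X))                   # ~0, up to floating point
print(np.linalg.det(expm(X)))        # ~1, since det(e^X) = e^{Tr X}
```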

Actually, taking the tangent space at the identity is a functor. Consider a map {\varphi : G_1 \rightarrow G_2} of Lie groups, with Lie algebras {\mathfrak g_1} and {\mathfrak g_2}. Because {\varphi} is a group homomorphism, {G_1 \ni e_1 \mapsto e_2 \in G_2}. Now, by manifold theory we know that a smooth map {f : M \rightarrow N} between manifolds gives a linear map between the corresponding tangent spaces, say {Tf : T_pM \rightarrow T_{fp}N}. For us we obtain a linear map

\displaystyle \varphi_\ast = T \varphi : \mathfrak g_1 \rightarrow \mathfrak g_2.

In fact, this {\varphi_\ast} fits into a diagram

exp-commute

Here are a few more properties of {\exp}:

  • {\exp(0) = e \in G}, which is immediate by looking at the constant trajectory {\gamma_0(t) \equiv e}.
  • The total derivative {D\exp : \mathfrak g \rightarrow \mathfrak g} of {\exp} at {0 \in \mathfrak g} is the identity. This is again by construction.
  • In particular, by the inverse function theorem this implies that {\exp} is a diffeomorphism in a neighborhood of {0 \in \mathfrak g}, onto a neighborhood of {e \in G}.
  • {\exp} commutes with morphisms of Lie groups: {\varphi \circ \exp_{G_1} = \exp_{G_2} \circ \varphi_\ast}. (This is the content of the above diagram.)

5. The commutator

Right now {\mathfrak g} is still just a vector space, the tangent space. But now that there is a map {\exp : \mathfrak g \rightarrow G}, we can use it to put a new operation on {\mathfrak g}, the so-called commutator.

The idea is as follows: we want to “multiply” two elements of {\mathfrak g}. But {\mathfrak g} is just a vector space, so we can’t do that directly. However, {G} itself has a group multiplication, so we should pass to {G} using {\exp}, use the multiplication in {G}, and then come back.

Here are the details. As we just mentioned, {\exp} is a diffeomorphism near {e \in G}. So for {x}, {y} close to the origin of {\mathfrak g}, we can look at {\exp(x)} and {\exp(y)}, which are two elements of {G} close to {e}. Multiplying them gives an element still close to {e}, so it’s equal to {\exp(z)} for some unique {z}, call it {\mu(x,y)}.

One can show in fact that {\mu} can be written as a Taylor series in two variables as

\displaystyle \mu(x,y) = x + y + \frac{1}{2} [x,y] + \text{third order terms} + \dots

where {[x,y]} is a skew-symmetric bilinear map, meaning {[x,y] = -[y,x]}. It will be more convenient to work with {[x,y]} than {\mu(x,y)} itself, so we give it a name:

Definition 17

This {[x,y]} is called the commutator of {G}.

Now we know multiplication in {G} is associative, so this should give us some nontrivial relation on the bracket {[,]}. Specifically, since

\displaystyle \exp(x) \left( \exp(y) \exp(z) \right) = \left( \exp(x) \exp(y) \right) \exp(z),

we should have that {\mu(x, \mu(y,z)) = \mu(\mu(x,y), z)}, and this should tell us something. In fact, the claim is:

Theorem 18

The bracket {[,]} satisfies the Jacobi identity

\displaystyle [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0.

Proof: Although I won’t prove it, the third-order terms (and all the rest) in our definition of {[x,y]} can be written out explicitly as well: for example, we actually have

\displaystyle \mu(x,y) = x + y + \frac{1}{2} [x,y] + \frac{1}{12} \left( [x, [x,y]] + [y,[y,x]] \right) + \text{fourth order terms} + \dots.

The general formula is called the Baker-Campbell-Hausdorff formula.

Then we can force ourselves to expand this using the first three terms of the BCH formula and then equate the degree three terms. The left-hand side expands initially as {\mu\left( x, y + z + \frac{1}{2} [y,z] + \frac{1}{12} \left( [y,[y,z]] + [z,[z,y]] \right) \right)}, and the next step would be something ugly.

This computation is horrifying and painful, so I’ll pretend I did it and tell you the end result is as claimed. \Box
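While I won’t inflict the expansion on you either, the first couple of terms of the BCH formula are easy to test numerically. Here is a Python sketch (assuming numpy and scipy are available) comparing {\log(e^A e^B)} with its second-order truncation for small random matrices:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
A = 0.01 * rng.standard_normal((3, 3))   # small, so higher-order terms are tiny
B = 0.01 * rng.standard_normal((3, 3))

Z = logm(expm(A) @ expm(B))              # the honest mu(A, B)
bch2 = A + B + 0.5 * (A @ B - B @ A)     # truncation after the [x, y] term

print(np.linalg.norm(Z - (A + B)))       # error on the order of ||[A, B]|| / 2
print(np.linalg.norm(Z - bch2))          # much smaller: only cubic terms remain
```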
There is a more natural way to see why this identity is the “right one”; see Qiaochu. However, with this proof I want to make the point that this Jacobi identity is not our decision: instead, the Jacobi identity is forced upon us by associativity in {G}.

Example 19 (Examples of commutators attached to Lie groups)

  • If {G} is an abelian group, we have {[x,y] = -[y,x]} by skew-symmetry, and {[x,y] = [y,x]} from {\mu(x,y) = \mu(y,x)}. Thus {[x,y] = 0} in {\mathfrak g} for any abelian Lie group {G}.
  • In particular, the brackets for {G \in \{\mathbb R, \mathbb C, S^1\}} are trivial.
  • Let {G = \text{GL }(n, K)}. Then one can show that

    \displaystyle [T,S] = TS - ST \qquad \forall S, T \in \mathfrak{gl}(n, K).

  • Ditto for {\text{SL }(n, K)}.

In any case, with the Jacobi identity we can define a general Lie algebra as an intrinsic object with a Jacobi-satisfying bracket:

Definition 20

A Lie algebra over {k} is a {k}-vector space equipped with a skew-symmetric bilinear bracket {[,]} satisfying the Jacobi identity.

A morphism of Lie algebras is a linear map which preserves the bracket.

Note that an abstract Lie algebra may even be infinite-dimensional (though since we are assuming {G} is finite-dimensional, such algebras will never arise for us as tangent spaces).

Example 21 (Associative algebra {\rightarrow} Lie algebra)

Any associative algebra {A} over {k} can be made into a Lie algebra by taking the same underlying vector space, and using the bracket {[a,b] = ab - ba}.
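For this bracket, the Jacobi identity can be checked by direct (if tedious) expansion; here is a quick numerical spot-check in Python:

```python
import numpy as np

def bracket(x, y):
    return x @ y - y @ x   # the commutator bracket on square matrices

rng = np.random.default_rng(2)
a, b, c = rng.standard_normal((3, 4, 4))   # three random 4x4 matrices

jacobi = bracket(a, bracket(b, c)) + bracket(b, bracket(c, a)) + bracket(c, bracket(a, b))
print(np.linalg.norm(jacobi))              # 0, up to floating point error
```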

6. The fundamental theorems

We finish this list of facts by stating the three “fundamental theorems” of Lie theory. They are based upon the functor

\displaystyle \mathscr{L} : G \mapsto T_e G

we have described earlier, which is a functor

  • from the category of Lie groups
  • into the category of finite-dimensional Lie algebras.

The first theorem requires the following definition:

Definition 22

A Lie subgroup {H} of a Lie group {G} is a subgroup {H} such that the inclusion map {H \hookrightarrow G} is also an injective immersion.

A Lie subalgebra {\mathfrak h} of a Lie algebra {\mathfrak g} is a vector subspace preserved under the bracket (meaning that {[\mathfrak h, \mathfrak h] \subseteq \mathfrak h}).

Theorem 23 (Lie I)

Let {G} be a real or complex Lie group with Lie algebra {\mathfrak g}. Then the map

\displaystyle H \mapsto \mathscr{L}(H) \subseteq \mathfrak g

is a bijection between connected Lie subgroups {H \subseteq G} and Lie subalgebras of {\mathfrak g}.

Theorem 24 (The Lie functor is an equivalence of categories)

Restrict {\mathscr{L}} to a functor

  • from the category of simply connected Lie groups over {K}
  • to the category of finite-dimensional Lie algebras over {K}.

Then

  1. (Lie II) {\mathscr{L}} is fully faithful, and
  2. (Lie III) {\mathscr{L}} is essentially surjective on objects.

If we drop the “simply connected” condition, we obtain a functor which is still faithful, but not full: non-isomorphic Lie groups can have isomorphic Lie algebras (one example is {\text{SO }(3)} and {\text{SU }(2)}).

Combinatorial Nullstellensatz and List Coloring

More than six months late, but here are notes from the combinatorial nullstellensatz talk I gave at the student colloquium at MIT. This was also my term paper for 18.434, “Seminar in Theoretical Computer Science”.

1. Introducing the choice number

One of the most fundamental problems in graph theory is that of graph coloring, in which one assigns a color to every vertex of a graph so that no two adjacent vertices have the same color. The most basic invariant related to graph coloring is the chromatic number:

Definition 1

A simple graph {G} is {k}-colorable if it’s possible to properly color its vertices with {k} colors. The smallest such {k} is the chromatic number {\chi(G)}.

In this exposition we study a more general notion in which the set of permitted colors is different for each vertex, as long as at least {k} colors are listed at each vertex. This leads to the notion of a so-called choice number, which was introduced by Erdös, Rubin, and Taylor.

Definition 2

A simple graph {G} is {k}-choosable if it’s possible to properly color its vertices given a list of {k} colors at each vertex. The smallest such {k} is the choice number {\mathop{\mathrm{ch}}(G)}.

Example 3

We have {\mathop{\mathrm{ch}}(C_{2n}) = \chi(C_{2n}) = 2} for any integer {n} (here {C_{2n}} is the cycle graph on {2n} vertices). To see this, we only have to show that given a list of two colors at each vertex of {C_{2n}}, we can select one of them.

  • If the list of colors is the same at each vertex, then since {C_{2n}} is bipartite, we are done.
  • Otherwise, suppose adjacent vertices {v_1}, {v_{2n}} have different lists; say some color {c} in the list at {v_1} is not in the list at {v_{2n}}. Select {c} at {v_1}, and then greedily color {v_2}, \dots, {v_{2n}} in that order.

We are thus naturally interested in how the choice number and the chromatic number are related. Of course we always have

\displaystyle \mathop{\mathrm{ch}}(G) \ge \chi(G).

Naïvely one might expect that we in fact have an equality, since allowing the colors at vertices to be different seems like it should make the graph easier to color. However, the following example shows that this is not the case.

Example 4 (Erdös)

Let {n \ge 1} be an integer and define

\displaystyle G = K_{n^n, n}.

We claim that for any integer {n \ge 1} we have

\displaystyle \mathop{\mathrm{ch}}(G) \ge n+1 \quad\text{and}\quad \chi(G) = 2.

The latter equality follows from {G} being bipartite.

Now to see the first inequality, let {G} have vertex set {U \cup V}, where {U} is the set of functions {u : [n] \rightarrow [n]} and {V = [n]}. Then consider {n^2} colors {C_{i,j}} for {1 \le i, j \le n}. On a vertex {u \in U}, we list colors {C_{1,u(1)}}, {C_{2,u(2)}}, \dots, {C_{n,u(n)}}. On a vertex {v \in V}, we list colors {C_{v,1}}, {C_{v,2}}, \dots, {C_{v,n}}. By construction it is impossible to properly color {G} from these lists: if each {v \in V} selects some color {C_{v, j_v}}, then the vertex {u \in U} defined by {u(v) = j_v} finds every color on its list already used by one of its neighbors.

The case {n = 3} is illustrated in the figure below (image in public domain).

K-3-27
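For the small case {n = 2} (where {G = K_{4,2}}), the claim is easy to confirm by brute force. Here is a Python sketch (the dictionaries list_at_u and list_at_v are my own names, with colors {C_{i,j}} encoded as pairs):

```python
from itertools import product

n = 2
U = list(product(range(n), repeat=n))   # vertices u : [n] -> [n], so |U| = 4
V = list(range(n))

list_at_u = {u: [(i, u[i]) for i in range(n)] for u in U}   # colors C(i, u(i))
list_at_v = {v: [(v, j) for j in range(n)] for v in V}      # colors C(v, j)

choices = [list_at_u[u] for u in U] + [list_at_v[v] for v in V]
colorable = any(
    all(pick[i] != pick[len(U) + j] for i in range(len(U)) for j in range(len(V)))
    for pick in product(*choices)       # every way to pick one color per list
)
print(colorable)   # False: no proper coloring exists, so ch(K_{4,2}) >= 3
```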

This surprising behavior is the subject of much research: how can we bound the choice number of a graph as a function of its chromatic number and other properties of the graph? We see that the above example requires exponentially many vertices in {n}.

Theorem 5 (Noel, West, Wu, Zhu)

If {G} is a graph with {n} vertices then

\displaystyle \chi(G) \le \mathop{\mathrm{ch}}(G) \le \max\left( \chi(G), \left\lceil \frac{\chi(G)+n-1}{3} \right\rceil \right).

In particular, if {n \le 2\chi(G)+1} then {\mathop{\mathrm{ch}}(G) = \chi(G)}.

One of the biggest open problems in this direction is the following conjecture (we need one more definition to state it).

Definition 6

A claw-free graph is a graph with no induced {K_{3,1}}. For example, the line graph (also called edge graph) of any simple graph {G} is claw-free.

Conjecture: if {G} is a claw-free graph, then {\mathop{\mathrm{ch}}(G) = \chi(G)}. In particular, this conjecture implies that for edge coloring, the notions of “chromatic number” and “choice number” coincide.


In this exposition, we prove the following result of Alon.

Theorem 7 (Alon)

A bipartite graph {G} is {\left( \left\lceil L(G) \right\rceil+1 \right)}-choosable, where

\displaystyle L(G) \overset{\mathrm{def}}{=} \max_{H \subseteq G} |E(H)|/|V(H)|

is half the maximum of the average degree of subgraphs {H}.

In particular, recall that a planar bipartite graph {H} with {r \ge 3} vertices contains at most {2r-4} edges. Thus for such graphs we have {L(G) \le 2}, and we deduce:

Corollary 8

A planar bipartite graph is {3}-choosable.

This corollary is sharp, as it applies to {K_{2,4}}, which we have seen in Example 4 satisfies {\mathop{\mathrm{ch}}(K_{2,4}) = 3}.

The rest of the paper is divided as follows. First, we begin in §2 by stating Theorem 9, the famous combinatorial nullstellensatz of Alon. Then in §3 and §4, we provide descriptions of the so-called graph polynomial, to which we then apply combinatorial nullstellensatz to deduce Theorem 18. Finally in §5, we show how to use Theorem 18 to prove Theorem 7.

2. Combinatorial Nullstellensatz

The main tool we use is the Combinatorial Nullstellensatz of Alon.

Theorem 9 (Combinatorial Nullstellensatz)

Let {F} be a field, and let {f \in F[x_1, \dots, x_n]} be a polynomial of degree {t_1 + \dots + t_n}. Let {S_1, S_2, \dots, S_n \subseteq F} such that {\left\lvert S_i \right\rvert > t_i} for all {i}.

Assume the coefficient of {x_1^{t_1}x_2^{t_2}\dots x_n^{t_n}} of {f} is not zero. Then we can pick {s_1 \in S_1}, \dots, {s_n \in S_n} such that

\displaystyle f(s_1, s_2, \dots, s_n) \neq 0.

Example 10

Let us give a second proof that

\displaystyle \mathop{\mathrm{ch}}(C_{2n}) = 2

for every positive integer {n}. Our proof will be an application of the Nullstellensatz.

Regard the colors as real numbers, and let {S_i} be the set of colors at vertex {i} (hence {1 \le i \le 2n}, and {|S_i| = 2}). Consider the polynomial

\displaystyle f = \left( x_1-x_2 \right)\left( x_2-x_3 \right) \dots \left( x_{2n-1}-x_{2n} \right)\left( x_{2n}-x_1 \right)

The coefficient of {x_1^1 x_2^1 \dots x_{2n}^1} is {2 \neq 0}. Therefore, one can select a color from each {S_i} so that {f} does not vanish.
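One can confirm the coefficient extraction with a computer algebra system; here is a small sympy sketch for {n = 2}, i.e. the cycle {C_4}:

```python
from sympy import Poly, prod, symbols

N = 4                                      # the cycle C_4, i.e. n = 2
x = symbols(f"x1:{N + 1}")                 # x1, x2, x3, x4
f = prod(x[i] - x[(i + 1) % N] for i in range(N))

print(Poly(f.expand(), *x).coeff_monomial(prod(x)))   # 2, which is nonzero
```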

3. The Graph Polynomial, and Directed Orientations

Motivated by Example 10, we wish to apply a similar technique to general graphs {G}. So in what follows, let {G} be a (simple) graph with vertex set {\{1, \dots, n\}}.

Definition 11

The graph polynomial of {G} is defined by

\displaystyle f_G(x_1, \dots, x_n) = \prod_{\substack{(i,j) \in E(G) \\ i < j}} (x_i-x_j).

We observe that the coefficients of {f_G} correspond to differences of counts of directed orientations. To be precise, we introduce the notation:

Definition 12

Consider orientations of the graph {G} with vertex set {\{1, \dots, n\}}, meaning we assign a direction {v \rightarrow w} to every edge of {G} to make it into a directed graph. An oriented edge is called ascending if {v \rightarrow w} and {v \le w}, i.e. the edge points from the smaller number to the larger one.

Then we say that an orientation is

  • even if there are an even number of ascending edges, and
  • odd if there are an odd number of ascending edges.

Finally, we define

  • {\mathop{\mathrm{DE}}_G(d_1, \dots, d_n)} to be the set of all even orientations of {G} in which vertex {i} has indegree {d_i}.
  • {\mathop{\mathrm{DO}}_G(d_1, \dots, d_n)} to be the set of all odd orientations of {G} in which vertex {i} has indegree {d_i}.

Set {\mathop{\mathrm{D}}_G(d_1,\dots,d_n) = \mathop{\mathrm{DE}}_G(d_1,\dots,d_n) \cup \mathop{\mathrm{DO}}_G(d_1,\dots,d_n)}.

Example 13

Consider the following orientation:

even-orientation

There are exactly two ascending edges, namely {1 \rightarrow 2} and {2 \rightarrow 4}. The indegrees are {d_1 = 0}, {d_2 = 2} and {d_3 = d_4 = 1}. Therefore, this particular orientation is an element of {\mathop{\mathrm{DE}}_G(0,2,1,1)}. In terms of {f_G}, this corresponds to the choice of terms

\displaystyle \left( x_1- \boldsymbol{x_2} \right) \left( \boldsymbol{x_2}-x_3 \right) \left( x_2-\boldsymbol{x_4} \right) \left( \boldsymbol{x_3}-x_4 \right)

which is a {+ x_2^2 x_3 x_4} term.

Lemma 14

In the graph polynomial of {G}, the coefficient of {x_1^{d_1} \dots x_n^{d_n}} is

\displaystyle \left\lvert \mathop{\mathrm{DE}}_G(d_1, \dots, d_n) \right\rvert - \left\lvert \mathop{\mathrm{DO}}_G(d_1, \dots, d_n) \right\rvert.

Proof: Consider expanding {f_G}. Then each expanded term corresponds to a choice of {x_i} or {x_j} from each factor {(x_i - x_j)}, i.e. to an orientation of {G} in which the chosen variable at each edge is the head. Each ascending edge contributes a factor of {-1}, so the term has coefficient {+1} if the orientation is even, and {-1} if the orientation is odd, as desired. \Box

Thus we have an explicit combinatorial description of the coefficients in the graph polynomial {f_G}.
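Since this description is completely finite, it can be verified mechanically. Here is a Python/sympy sketch checking Lemma 14 on the graph from Example 13 by enumerating all {2^4} orientations:

```python
from itertools import product
from sympy import Poly, prod, symbols

edges = [(1, 2), (2, 3), (2, 4), (3, 4)]   # the 4-vertex graph of Example 13
x = symbols("x1:5")
f = Poly(prod(x[i - 1] - x[j - 1] for i, j in edges).expand(), *x)

signed = {}                                 # indegree sequence -> |DE| - |DO|
for orientation in product(*[((i, j), (j, i)) for i, j in edges]):
    indeg, ascending = [0, 0, 0, 0], 0
    for tail, head in orientation:          # each edge is recorded (tail, head)
        indeg[head - 1] += 1
        ascending += tail < head
    key = tuple(indeg)
    signed[key] = signed.get(key, 0) + (-1) ** ascending

assert all(
    f.coeff_monomial(prod(xi ** d for xi, d in zip(x, degs))) == value
    for degs, value in signed.items()
)
print("Lemma 14 checks out on this graph.")
```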

4. Coefficients via Eulerian Suborientations

We now give a second description of the coefficients of {f_G}.

Definition 15

Let {D \in \mathop{\mathrm{D}}_G(d_1, \dots, d_n)}, viewed as a directed graph. An Eulerian suborientation of {D} is a subgraph of {D} (not necessarily induced) in which every vertex has equal indegree and outdegree. We say that such a suborientation is

  • even if it has an even number of edges, and
  • odd if it has an odd number of edges.

Note that the empty suborientation is allowed. We denote the even and odd Eulerian suborientations of {D} by {\mathop{\mathrm{EE}}(D)} and {\mathop{\mathrm{EO}}(D)}, respectively.

Eulerian suborientations are brought into the picture by the following lemma.

Lemma 16

Assume {D \in \mathop{\mathrm{DE}}_G(d_1, \dots, d_n)}. Then there are natural bijections

\displaystyle \begin{aligned} \mathop{\mathrm{DE}}_G(d_1, \dots, d_n) &\rightarrow \mathop{\mathrm{EE}}(D) \\ \mathop{\mathrm{DO}}_G(d_1, \dots, d_n) &\rightarrow \mathop{\mathrm{EO}}(D). \end{aligned}

Similarly, if {D \in \mathop{\mathrm{DO}}_G(d_1, \dots, d_n)} then there are bijections

\displaystyle \begin{aligned} \mathop{\mathrm{DE}}_G(d_1, \dots, d_n) &\rightarrow \mathop{\mathrm{EO}}(D) \\ \mathop{\mathrm{DO}}_G(d_1, \dots, d_n) &\rightarrow \mathop{\mathrm{EE}}(D). \end{aligned}

Proof: Consider any orientation {D' \in \mathop{\mathrm{D}}_G(d_1, \dots, d_n)}. Then we define a suborientation of {D}, denoted {D \rtimes D'}, by including exactly the edges of {D} whose orientation in {D'} is in the opposite direction. It’s easy to see that this induces a bijection

\displaystyle D \rtimes - : \mathop{\mathrm{D}}_G(d_1, \dots, d_n) \rightarrow \mathop{\mathrm{EE}}(D) \cup \mathop{\mathrm{EO}}(D)

Moreover, remark that

  • {D \rtimes D'} is even if {D} and {D'} are either both even or both odd, and
  • {D \rtimes D'} is odd otherwise.

The lemma follows from this. \Box

Corollary 17

In the graph polynomial of {G}, the coefficient of {x_1^{d_1} \dots x_n^{d_n}} is

\displaystyle \pm \left( \left\lvert \mathop{\mathrm{EE}}(D) \right\rvert - \left\lvert \mathop{\mathrm{EO}}(D) \right\rvert \right)

where {D \in \mathop{\mathrm{D}}_G(d_1, \dots, d_n)} is arbitrary.

Proof: Combine Lemma 14 and Lemma 16. \Box

We now arrive at the main result:

Theorem 18

Let {G} be a graph on {\{1, \dots, n\}}, and let {D \in \mathop{\mathrm{D}}_G(d_1, \dots, d_n)} be an orientation of {G}. If {\left\lvert \mathop{\mathrm{EE}}(D) \right\rvert \neq \left\lvert \mathop{\mathrm{EO}}(D) \right\rvert}, then given a list of {d_i+1} colors at each vertex of {G}, there exists a proper coloring of {G} from these lists.

In particular, {G} is {(1+\max_i d_i)}-choosable.

Proof: Combine Corollary 17 with Theorem 9. \Box

5. Finding an orientation

Armed with Theorem 18, we are almost ready to prove Theorem 7. The last ingredient is that we need to find an orientation on {G} in which the maximal degree is not too large. This is accomplished by the following.

Lemma 19

Let {L(G) \overset{\mathrm{def}}{=} \max_{H \subseteq G} |E(H)|/|V(H)|} as in Theorem 7. Then {G} has an orientation in which every indegree is at most {\left\lceil L(G) \right\rceil}.

Proof: This is an application of Hall’s marriage theorem.

Let {d = \left\lceil L(G) \right\rceil \ge L(G)}. Construct a bipartite graph

\displaystyle E \cup X \qquad \text{where}\qquad E = E(G) \quad\text{ and }\quad X = \underbrace{V(G) \sqcup \dots \sqcup V(G)}_{d \text{ times}}.

Connect {e \in E} and {v \in X} if {v} is an endpoint of {e}. Hall’s condition is satisfied: a set of edges {F \subseteq E} spans some vertex set {V(F)}, and {|F| \le L(G) \cdot |V(F)| \le d \cdot |V(F)|}, where {d \cdot |V(F)|} is exactly the number of neighbors of {F} in {X}. So we can match each edge in {E} to a (copy of some) vertex in {X}; since there are exactly {d} copies of each vertex, orienting every edge towards its matched endpoint yields indegrees at most {d}. \Box
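The proof above is completely algorithmic; here is a Python sketch (bounded_indegree_orientation is my own helper, implementing the matching via standard augmenting paths) that produces such an orientation:

```python
def bounded_indegree_orientation(edges, d):
    """Orient each edge so that every indegree is at most d, by matching
    edges to d copies of each vertex (the Hall argument of Lemma 19)."""
    match = {}   # (vertex, copy index) -> index of the edge using that copy

    def augment(e, seen):
        u, v = edges[e]
        for c in [(w, t) for w in (u, v) for t in range(d)]:
            if c not in seen:
                seen.add(c)
                if c not in match or augment(match[c], seen):
                    match[c] = e
                    return True
        return False

    for e in range(len(edges)):
        if not augment(e, set()):
            raise ValueError("no orientation with indegree <= d exists")

    head = {e: c[0] for c, e in match.items()}   # each edge points at its match
    return [(v if head[e] == u else u, head[e]) for e, (u, v) in enumerate(edges)]

# The 4-cycle has L(G) = 1, so it can be oriented with all indegrees <= 1:
print(bounded_indegree_orientation([(1, 2), (2, 3), (3, 4), (4, 1)], d=1))
```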

Now we can prove Theorem 7. Proof: According to Lemma 19, pick {D \in \mathop{\mathrm{D}}_G(d_1, \dots, d_n)} where {\max d_i \le \left\lceil L(G) \right\rceil}. Since {G} is bipartite, we have {\mathop{\mathrm{EO}}(D) = \varnothing}: every Eulerian suborientation decomposes into directed cycles, and since {G} has no odd cycles, each such suborientation has an even number of edges. As the empty suborientation is even, {\left\lvert \mathop{\mathrm{EE}}(D) \right\rvert \ge 1 > 0 = \left\lvert \mathop{\mathrm{EO}}(D) \right\rvert}, so Theorem 18 applies and we are done. \Box

Algebraic Topology Functors

This will be old news to anyone who does algebraic topology, but oddly enough I can’t seem to find it all written in one place anywhere, and in particular I can’t find the bit about {\mathsf{hPairTop}} at all.

In algebraic topology you (for example) associate every topological space {X} with a group, like {\pi_1(X, x_0)} or {H_5(X)}. All of these operations turn out to be functors. This isn’t surprising, because as far as I’m concerned the definition of a functor is “any time you take one type of object and naturally make another object”.

The surprise is that these objects also respect homotopy in a nice way; proving this is a fair amount of the “setup” work in algebraic topology.

1. Homology, {H_n : \mathsf{hTop} \rightarrow \mathsf{Grp}}

Note that {H_5} is a functor

\displaystyle H_5 : \mathsf{Top} \rightarrow \mathsf{Grp}

i.e. to every space {X} we can associate a group {H_5(X)}. (Of course, replace {5} by an integer of your choice.) Recall that:

Definition 1

Two maps {f, g : X \rightarrow Y} are homotopic if there exists a homotopy between them.

Thus for a map we can take its homotopy class {[f]} (the equivalence class under this relationship). This has the nice property that {[f \circ g] = [f] \circ [g]} and so on.

Definition 2

Two spaces {X} and {Y} are homotopy equivalent if there exists a pair of maps {f : X \rightarrow Y} and {g : Y \rightarrow X} such that {[g \circ f] = [\mathrm{id}_X]} and {[f \circ g] = [\mathrm{id}_Y]}.

In light of this, we can define

Definition 3

The category {\mathsf{hTop}} is defined as follows:

  • The objects are topological spaces {X}.
  • The morphisms {X \rightarrow Y} are homotopy classes of continuous maps {X \rightarrow Y}.

Remark 4

Composition is well-defined since {[f \circ g] = [f] \circ [g]}. Two spaces are isomorphic in {\mathsf{hTop}} if and only if they are homotopy equivalent.

Remark 5

As you might guess this “quotient” construction is called a quotient category.

Then the big result is that:

Theorem 6

The induced map {f_\sharp = H_n(f)} of a map {f: X \rightarrow Y} depends only on the homotopy class of {f}. Thus {H_n} is a functor

\displaystyle H_n : \mathsf{hTop} \rightarrow \mathsf{Grp}.

The proof of this is geometric, using the so-called prism operators. In any case, as with all functors we deduce

Corollary 7

{H_n(X) \cong H_n(Y)} if {X} and {Y} are homotopy equivalent.

In particular, the contractible spaces are those spaces {X} which are homotopy equivalent to a point. In that case, {H_n(X) = 0} for all {n \ge 1}.

2. Relative Homology, {H_n : \mathsf{hPairTop} \rightarrow \mathsf{Grp}}

In fact, we also defined homology groups

\displaystyle H_n(X,A)

for {A \subseteq X}. We will now show this is functorial too.

Definition 8

Let {\varnothing \neq A \subset X} and {\varnothing \neq B \subset Y} be subspaces, and consider a map {f : X \rightarrow Y}. If {f(A) \subseteq B} we write

\displaystyle f : (X,A) \rightarrow (Y,B).

We say {f} is a map of pairs, between the pairs {(X,A)} and {(Y,B)}.

Definition 9

We say that {f,g : (X,A) \rightarrow (Y,B)} are pair-homotopic if they are “homotopic through maps of pairs”.

More formally, a pair-homotopy of maps {f, g : (X,A) \rightarrow (Y,B)} is a map {F : [0,1] \times X \rightarrow Y}, which we’ll write as {F_t(x)}, such that {F} is a homotopy of the maps {f,g : X \rightarrow Y} and each {F_t} is itself a map of pairs.

Thus, we naturally arrive at two categories:

  • {\mathsf{PairTop}}, the category of pairs of topological spaces, and
  • {\mathsf{hPairTop}}, the same category except with morphisms taken up to pair-homotopy.

Definition 10

As before, we say pairs {(X,A)} and {(Y,B)} are pair-homotopy equivalent if they are isomorphic in {\mathsf{hPairTop}}; such an isomorphism is called a pair-homotopy equivalence.

Then, the prism operators now let us derive

Theorem 11

We have a functor

\displaystyle H_n : \mathsf{hPairTop} \rightarrow \mathsf{Grp}.

The usual corollaries apply.

Now, we want an analog of contractible spaces for our pairs: i.e. pairs of spaces {(X,A)} such that {H_n(X,A) = 0} for {n \ge 1}. The correct definition is:

Definition 12

Let {A \subset X}. We say that {A} is a deformation retract of {X} if there is a map of pairs {r : (X, A) \rightarrow (A, A)} which is a pair homotopy equivalence.

Example 13 (Examples of Deformation Retracts)

  1. If a single point {p} is a deformation retract of a space {X}, then {X} is contractible, since the retraction {r : X \rightarrow \{\ast\}} (when viewed as a map {X \rightarrow X}) is homotopic to the identity map {\mathrm{id}_X : X \rightarrow X}.
  2. The punctured disk {D^2 \setminus \{0\}} deformation retracts onto its boundary {S^1}.
  3. More generally, {D^{n} \setminus \{0\}} deformation retracts onto its boundary {S^{n-1}}.
  4. Similarly, {\mathbb R^n \setminus \{0\}} deformation retracts onto a sphere {S^{n-1}}.

Of course in this situation we have that

\displaystyle H_n(X,A) \cong H_n(A,A) = 0.

3. Homotopy, {\pi_1 : \mathsf{hTop}_\ast \rightarrow \mathsf{Grp}}

As a special case of the above, we define

Definition 14

The category {\mathsf{Top}_\ast} is defined as follows:

  • The objects are pairs {(X, x_0)} of spaces {X} with a distinguished basepoint {x_0}. We call these pointed spaces.
  • The morphisms are maps {f : (X, x_0) \rightarrow (Y, y_0)}, meaning {f} is continuous and {f(x_0) = y_0}.

Now again we mod out:

Definition 15

Two maps {f , g : (X, x_0) \rightarrow (Y, y_0)} of pointed spaces are homotopic if there is a homotopy between them which also fixes the basepoints. We can then, in the same way as before, define the quotient category {\mathsf{hTop}_\ast}.

And lo and behold:

Theorem 16

We have a functor

\displaystyle \pi_1 : \mathsf{hTop}_\ast \rightarrow \mathsf{Grp}.

Same corollaries as before.

A Sketchy Overview of Green-Tao

These are the notes of my last lecture in the 18.099 discrete analysis seminar. It is a very high-level overview of the Green-Tao theorem. It is a subset of this paper.

1. Synopsis

This post is an overview of the proof of:

Theorem 1 (Green-Tao)

The prime numbers contain arbitrarily long arithmetic progressions.

Here, Szemerédi’s theorem isn’t strong enough, because the primes have density approaching zero. Instead, one can try to prove the following “relative” result.

Theorem (Relative Szemerédi)

Let {S} be a sparse “pseudorandom” set of integers. Then any subset {A \subseteq S} with positive density in {S} contains arbitrarily long arithmetic progressions.

In order to do this, we have to accomplish the following.

  • Make precise the notion of “pseudorandom”.
  • Prove the Relative Szemerédi theorem, and then
  • Exhibit a “pseudorandom” set {S} which subsumes the prime numbers.

This post will use the graph-theoretic approach to Szemerédi as in the exposition of David Conlon, Jacob Fox, and Yufei Zhao. In order to motivate the notion of pseudorandomness, we return to the graph-theoretic approach to Roth’s theorem, i.e. the case {k=3} of Szemerédi’s theorem.

2. Defining the linear forms condition

2.1. Review of Roth theorem

Roth’s theorem can be phrased in two ways. The first is the “set-theoretic” formulation:

Theorem 2 (Roth, set version)

If {A \subseteq \mathbb Z/N} is 3-AP-free, then {|A| = o(N)}.

The second is a “weighted” version

Theorem 3 (Roth, weighted version)

Fix {\delta > 0}. Let {f : \mathbb Z/N \rightarrow [0,1]} with {\mathbf E f \ge \delta}. Then

\displaystyle \Lambda_3(f,f,f) \ge \Omega_\delta(1),

where {\Lambda_3(f,g,h) = \mathbf E_{x,d \in \mathbb Z/N}\left[ f(x) g(x+d) h(x+2d) \right]} denotes the normalized (weighted) count of 3-APs.

We sketch the idea of a graph-theoretic proof of the first theorem. We construct a tripartite graph {G_A} on vertices {X \sqcup Y \sqcup Z}, where {X = Y = Z = \mathbb Z/N}. Then one creates the edges

  • {(x,y)} if {2x+ y \in A},
  • {(x,z)} if {x-z \in A}, and
  • {(y,z)} if {-y-2z \in A}.

This construction is selected so that arithmetic progressions in {A} correspond to triangles in the graph {G_A}: a triangle {(x,y,z)} gives the triple {(2x+y, \; x-z, \; -y-2z)} of elements of {A}, which is a 3-AP with common difference {-(x+y+z)}. As a result, if {A} has no 3-APs except trivial ones (those with common difference zero, i.e. {x+y+z=0}), the graph {G_A} has exactly one triangle for every edge. Then, we can use the theorem of Ruzsa-Szemerédi, which states that such a graph {G_A} has {o(n^2)} edges.
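This correspondence is quick to check by brute force as well; here is a Python sketch over a small {\mathbb Z/N} (the choices of {N} and {A} are arbitrary):

```python
from itertools import product

N, A = 13, {1, 3, 4, 9}   # an arbitrary subset A of Z/N

triangles = [
    ((2*x + y) % N, (x - z) % N, (-y - 2*z) % N)
    for x, y, z in product(range(N), repeat=3)
    if (2*x + y) % N in A and (x - z) % N in A and (-y - 2*z) % N in A
]

# Every triangle of G_A yields (a, b, c) in A^3 with a + c = 2b, i.e. a 3-AP:
assert all((a + c) % N == (2 * b) % N for a, b, c in triangles)
print(len(triangles), "triangles, each yielding a 3-AP in A")
```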

2.2. The measure {\nu}

Now for the generalized version, we start with the second version of Roth’s theorem. Instead of a set {S}, we consider a function

\displaystyle \nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}

which we call a majorizing measure. Since we are now dealing with {A} of low density, we normalize {\nu} so that

\displaystyle \mathbf E[\nu] = 1 + o(1).

Our goal is to now show a result of the form:

Theorem (Relative Roth, informally, weighted version)

If {0 \le f \le \nu}, {\mathbf E f \ge \delta}, and {\nu} satisfies a “pseudorandom” condition, then {\Lambda_3(f,f,f) \ge \Omega_{\delta}(1)}.

The prototypical example of course is that if {A \subset S \subset \mathbb Z/N}, then we let {\nu(x) = \frac{N}{|S|} 1_S(x)}.

2.3. Pseudorandomness for {k=3}

So, how should we formulate the pseudorandom condition? To start, consider {G_S}, the tripartite graph defined earlier, and let {p = |S| / N}; since {S} is sparse, we expect {p} to be small. The main idea that turns out to be correct is: the number of embeddings of {K_{2,2,2}} in {G_S} should be “as expected”, namely {(1+o(1)) p^{12} N^6}. Here {K_{2,2,2}} is exactly the {2}-blow-up of a triangle, with {6} vertices and {12} edges. This condition thus gives us control over the distribution of triangles in the sparse graph {G_S}: knowing that we have approximately the correct count of copies of {K_{2,2,2}} is enough to control the distribution of triangles.

For technical reasons, we in fact want this to be true not only for {K_{2,2,2}} but for all of its subgraphs {H}.

Now, let’s move on to the weighted version. Let’s consider a tripartite graph {G}, which we can think of as a collection of three functions

\displaystyle \begin{aligned} \mu_{-z} &: X \times Y \rightarrow \mathbb R \\ \mu_{-y} &: X \times Z \rightarrow \mathbb R \\ \mu_{-x} &: Y \times Z \rightarrow \mathbb R. \end{aligned}

We think of {\mu} as normalized so that {\mathbf E[\mu_{-x}] = \mathbf E[\mu_{-y}] = \mathbf E[\mu_{-z}] = 1}. Then we can define

Definition 4

A weighted tripartite graph {\mu = (\mu_{-x}, \mu_{-y}, \mu_{-z})} satisfies the {3}-linear forms condition if

\displaystyle \begin{aligned} \mathbf E_{x^0,x^1,y^0,y^1,z^0,z^1} &\Big[ \mu_{-x}(y^0,z^0) \mu_{-x}(y^0,z^1) \mu_{-x}(y^1,z^0) \mu_{-x}(y^1,z^1) \\ & \mu_{-y}(x^0,z^0) \mu_{-y}(x^0,z^1) \mu_{-y}(x^1,z^0) \mu_{-y}(x^1,z^1) \\ & \mu_{-z}(x^0,y^0) \mu_{-z}(x^0,y^1) \mu_{-z}(x^1,y^0) \mu_{-z}(x^1,y^1) \Big] \\ &= 1 + o(1) \end{aligned}

and similarly if any subset of the twelve factors is deleted.

Then the pseudorandomness condition is according to the graph we defined above:

Definition 5

A function {\nu : \mathbb Z / N \rightarrow \mathbb R_{\ge 0}} satisfies the {3}-linear forms condition if {\mathbf E[\nu] = 1 + o(1)}, and the tripartite graph {\mu = (\mu_{-x}, \mu_{-y}, \mu_{-z})} defined by

\displaystyle \begin{aligned} \mu_{-z}(x,y) &= \nu(2x+y) \\ \mu_{-y}(x,z) &= \nu(x-z) \\ \mu_{-x}(y,z) &= \nu(-y-2z) \end{aligned}

satisfies the {3}-linear forms condition.

Finally, the relative version of Roth’s theorem which we seek is:

Theorem 6 (Relative Roth)

Suppose {\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}} satisfies the {3}-linear forms condition. Then for any {f : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}} bounded above by {\nu} and satisfying {\mathbf E[f] \ge \delta > 0}, we have

\displaystyle \Lambda_3(f,f,f) \ge \Omega_{\delta}(1).

2.4. Relative Szemerédi

We of course have:

Theorem 7 (Szemerédi)

Suppose {k \ge 3}, and {f : \mathbb Z/N \rightarrow [0,1]} with {\mathbf E[f] \ge \delta}. Then

\displaystyle \Lambda_k(f, \dots, f) \ge \Omega_{\delta}(1).

For {k > 3}, rather than considering weighted tripartite graphs, we consider a {(k-1)}-uniform {k}-partite hypergraph. For example, given {\nu} with {\mathbf E[\nu] = 1 + o(1)} and {k=4}, we use the construction

\displaystyle \begin{aligned} \mu_{-z}(w,x,y) &= \nu(3w+2x+y) \\ \mu_{-y}(w,x,z) &= \nu(2w+x-z) \\ \mu_{-x}(w,y,z) &= \nu(w-y-2z) \\ \mu_{-w}(x,y,z) &= \nu(-x-2y-3z). \end{aligned}

Thus 4-APs correspond to the simplex {K_4^{(3)}} (i.e. a tetrahedron). We then consider the {2}-blow-up of this simplex, and require the analogous count for it and all of its subgraphs {H}.

Here is the compiled version:

Definition 8

A {(k-1)}-uniform {k}-partite weighted hypergraph {\mu = (\mu_{-i})_{i=1}^k} satisfies the {k}-linear forms condition if

\displaystyle \mathbf E_{x_1^0, x_1^1, \dots, x_k^0, x_k^1} \left[ \prod_{j=1}^k \prod_{\omega \in \{0,1\}^{[k] \setminus \{j\}}} \mu_{-j}\left( x_1^{\omega_1}, \dots, x_{j-1}^{\omega_{j-1}}, x_{j+1}^{\omega_{j+1}}, \dots, x_k^{\omega_k} \right)^{n_{j,\omega}} \right] = 1 + o(1)

for all choices of exponents {n_{j,\omega} \in \{0,1\}}.

Definition 9

A function {\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}} satisfies the {k}-linear forms condition if {\mathbf E[\nu] = 1 + o(1)}, and

\displaystyle \mathbf E_{x_1^0, x_1^1, \dots, x_k^0, x_k^1} \left[ \prod_{j=1}^k \prod_{\omega \in \{0,1\}^{[k] \setminus \{j\}}} \nu\left( \sum_{i=1}^k (j-i)x_i^{(\omega_i)} \right)^{n_{j,\omega}} \right] = 1 + o(1)

for all choices of exponents {n_{j,\omega} \in \{0,1\}}. This is just the previous condition with the natural {\mu} induced by {\nu}.

The natural generalization of relative Szemerédi is then:

Theorem 10 (Relative Szemerédi)

Suppose {k \ge 3}, and {\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}} satisfies the {k}-linear forms condition. Let {f : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}} with {\mathbf E[f] \ge \delta} and {f \le \nu}. Then

\displaystyle \Lambda_k(f, \dots, f) \ge \Omega_{\delta}(1).

3. Outline of proof of Relative Szemerédi

The proof of Relative Szemerédi uses two key facts. First, one replaces {f} with a bounded function {\widetilde f} which is near it:

Theorem 11 (Dense model)

Let {\varepsilon > 0}. There exists {\varepsilon' > 0} such that if:

  • {\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}} satisfies {\left\lVert \nu-1 \right\rVert^{\square}_r \le \varepsilon'}, and
  • {f : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}}, {f \le \nu}

then there exists a function {\widetilde f : \mathbb Z/N \rightarrow [0,1]} such that {\left\lVert f - \widetilde f \right\rVert^{\square}_r \le \varepsilon}.

Here we have a new norm, called the cut norm, defined by

\displaystyle \left\lVert f \right\rVert^{\square}_r = \sup_{A_i \subseteq (\mathbb Z/N)^{r-1}} \left\lvert \mathbf E_{x_1, \dots, x_r} f(x_1 + \dots + x_r) 1_{A_1}(x_{-1}) \dots 1_{A_r}(x_{-r}) \right\rvert.

This is actually an extension of the cut norm defined on an {r}-uniform {r}-partite hypergraph (not {(r-1)}-uniform like before!): if {g : X_1 \times \dots \times X_r \rightarrow \mathbb R} is such a weighted hypergraph, we let

\displaystyle \left\lVert g \right\rVert^{\square}_{r,r} = \sup_{A_i \subseteq X_{-i}} \left\lvert \mathbf E_{x_1, \dots, x_r} \left[ g(x_1, \dots, x_r) 1_{A_1}(x_{-1}) \dots 1_{A_r}(x_{-r}) \right] \right\rvert.

Taking {g(x_1, \dots, x_r) = f(x_1 + \dots + x_r)}, {X_1 = \dots = X_r = \mathbb Z/N} gives the analogy.

For the second theorem, we define the norm

\displaystyle \left\lVert g \right\rVert^{\square}_{k-1,k} = \max_{i=1,\dots,k} \left( \left\lVert g_{-i} \right\rVert^{\square}_{k-1, k-1} \right).

Theorem 12 (Relative simplex counting lemma)

Let {\mu}, {g}, {\widetilde g} be {(k-1)}-uniform {k}-partite weighted hypergraphs on {X_1 \cup \dots \cup X_k}. Assume that {\mu} satisfies the {k}-linear forms condition, that {0 \le g_{-i} \le \mu_{-i}} for all {i}, and that {0 \le \widetilde g \le 1}. If {\left\lVert g-\widetilde g \right\rVert^{\square}_{k-1,k} = o(1)} then

\displaystyle \mathbf E_{x_1, \dots, x_k} \left[ g(x_{-1}) \dots g(x_{-k}) - \widetilde g(x_{-1}) \dots \widetilde g(x_{-k}) \right] = o(1).

One then combines these two results to prove Relative Szemerédi, as follows. Start with {f} and {\nu} as in the theorem. The {k}-linear forms condition turns out to imply {\left\lVert \nu-1 \right\rVert^{\square}_{k-1} = o(1)}, so we can find a nearby {\widetilde f} by the dense model theorem. Then, we induce {\mu}, {g}, {\widetilde g} from {\nu}, {f}, {\widetilde f} respectively. The counting lemma then reduces the bounding of {\Lambda_k(f, \dots, f)} to the bounding of {\Lambda_k(\widetilde f, \dots, \widetilde f)}, which is {\Omega_\delta(1)} by the usual Szemerédi theorem.

4. Arithmetic progressions in primes

We now sketch how to obtain Green-Tao from Relative Szemerédi. As expected, we need to use the von Mangoldt function {\Lambda}.

Unfortunately, {\Lambda} is biased (e.g. “all decent primes are odd”). To get around this, we let {w = w(N)} tend to infinity slowly with {N}, and define

\displaystyle W = \prod_{p \le w} p.

In the {W}-trick we consider only primes {\equiv 1 \pmod W}. The modified von Mangoldt function is then defined by

\displaystyle \widetilde \Lambda(n) = \begin{cases} \frac{\varphi(W)}{W} \log (Wn+1) & Wn+1 \text{ prime} \\ 0 & \text{else}. \end{cases}

In accordance with Dirichlet, we have {\sum_{n \le N} \widetilde \Lambda(n) = N + o(N)}.
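This normalization is easy to test empirically (a Python sketch assuming sympy is available, and with a small fixed {w} rather than {w \rightarrow \infty}):

```python
from math import log, prod
from sympy import isprime, primerange, totient

w = 5
W = prod(primerange(2, w + 1))   # W = 2 * 3 * 5 = 30
c = float(totient(W)) / W        # the normalization phi(W) / W

def lam_tilde(n):
    return c * log(W * n + 1) if isprime(W * n + 1) else 0.0

N = 10**5
print(sum(lam_tilde(n) for n in range(1, N + 1)) / N)   # ~1, as claimed
```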

So, we need to show now that

Proposition 13

Fix {k \ge 3}. We can find {\delta = \delta(k) > 0} such that for {N \gg 1} prime, we can find {\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}} which satisfies the {k}-linear forms condition as well as

\displaystyle \nu(n) \ge \delta \widetilde \Lambda(n)

for {N/2 \le n < N}.

In that case, we can let

\displaystyle f(n) = \begin{cases} \delta \widetilde\Lambda(n) & N/2 \le n < N \\ 0 & \text{else}. \end{cases}

Then {0 \le f \le \nu}. The presence of {N/2 \le n < N} allows us to avoid “wrap-around issues” that arise from using {\mathbb Z/N} instead of {\mathbb Z}. Relative Szemerédi then yields the result.

For completeness, we state the construction. Let {\chi : \mathbb R \rightarrow [0,1]} be supported on {[-1,1]} with {\chi(0) = 1}, and define a normalizing constant {c_\chi = \int_0^\infty \left\lvert \chi'(x) \right\rvert^2 \; dx}. Inspired by {\Lambda(n) = \sum_{d \mid n} \mu(d) \log(n/d)}, we define a truncated {\Lambda} by

\displaystyle \Lambda_{\chi, R}(n) = \log R \sum_{d \mid n} \mu(d) \chi\left( \frac{\log d}{\log R} \right).

Let {k \ge 3}, {R = N^{k^{-1} 2^{-k-3}}}. Now, we define {\nu} by

\displaystyle \nu(n) = \begin{cases} \dfrac{\varphi(W)}{W} \dfrac{\Lambda_{\chi,R}(Wn+1)^2}{c_\chi \log R} & N/2 \le n < N \\ 0 & \text{else}. \end{cases}

This turns out to work, provided {w} grows sufficiently slowly in {N}.

Formal vs Functional Series (OR: Generating Function Voodoo Magic)

Epistemic status: highly dubious. I found almost no literature doing anything quite like what follows, which unsettles me because it makes it likely that I’m overcomplicating things significantly.

1. Synopsis

Recently I was working on an elegant problem which was the original problem 6 for the 2015 International Math Olympiad, which reads as follows:

Problem

[IMO Shortlist 2015 Problem C6] Let {S} be a nonempty set of positive integers. We say that a positive integer {n} is clean if it has a unique representation as a sum of an odd number of distinct elements from {S}. Prove that there exist infinitely many positive integers that are not clean.

Proceeding by contradiction, one can prove (try it!) that in fact all sufficiently large integers have exactly one representation as a sum of an even number of distinct elements of {S} as well. Then, the problem reduces to the following:

Problem

Show that if {s_1 < s_2 < \dots} is an increasing sequence of positive integers and {P(x)} is a nonzero polynomial then we cannot have

\displaystyle \prod_{j=1}^\infty (1 - x^{s_j}) = P(x)

as formal series.

To see this, note that the coefficient of {x^N} in the product counts the representations of {N} using an even number of distinct {s_j}’s minus those using an odd number; so for all sufficiently large {N} the coefficient is {1 + (-1) = 0}. Now, the intuitive idea is obvious: the root {1} appears with finite multiplicity in {P}, so we can put {P(x) = (1-x)^k Q(x)} where {Q(1) \neq 0}, and then the left-hand side is divisible by {1-x} too many times, right?

Well, there are some obvious issues with this “proof”: for example, consider the equality

\displaystyle 1 = (1-x)(1+x)(1+x^2)(1+x^4)(1+x^8) \dots.

The right-hand side is “divisible” by {1-x}, but the left-hand side is not (as a polynomial).

But we still want to use the idea of plugging {x \rightarrow 1^-}, so what is the right thing to do? It turns out that this is a complete minefield, and there are a lot of very subtle distinctions that seem to not be explicitly mentioned in many places. I think I have a complete answer now, but it’s long enough to warrant this entire blog post.

Here’s the short version: there’s actually two distinct notions of “generating function”, namely a “formal series” and “functional series”. They use exactly the same notation but are two different types of objects, and this ends up being the source of lots of errors, because “formal series” do not allow substituting {x}, while “functional series” do.

Spoiler: we’ll need the asymptotic for the partition function {p(n)}.

2. Formal Series {\neq} Functional Series

I’m assuming you’ve all heard the definition of {\sum_k c_kx^k}. Unfortunately, it turns out that this isn’t the whole story: there are actually two types of objects at play here. They are usually called formal power series and power series, but for this post I will use the more descriptive names formal series and functional series. I’ll do everything over {\mathbb C}, but one can of course use {\mathbb R} instead.

The formal series is easier to describe:

Definition 1

A formal series {F} is an infinite sequence {(a_n)_n = (a_0, a_1, a_2, \dots)} of complex numbers. We often denote it by {\sum a_nx^n = a_0 + a_1x + a_2x^2 + \dots}. The set of formal series is denoted {\mathbb C[ [x] ]}.

This is the “algebraic” viewpoint: it’s a sequence of coefficients. Note that there is no worry about convergence issues or “plugging in {x}”.

On the other hand, a functional series is more involved, because it has to support substitution of values of {x} and worry about convergence issues. So here are the necessary pieces of data:

Definition 2

A functional series {G} (centered at zero) is a function {G : U \rightarrow \mathbb C}, where {U} is an open disk centered at {0} or {U = \mathbb C}. We require that there exists an infinite sequence {(c_0, c_1, c_2, \dots)} of complex numbers satisfying

\displaystyle \forall z \in U: \qquad G(z) = \lim_{N \rightarrow \infty} \left( \sum_{k=0}^N c_k z^k \right).

(The limit is taken in the usual metric of {\mathbb C}.) In that case, the {c_i} are unique and called the coefficients of {G}.

This is often written as {G(x) = \sum_n c_n x^n}, with the open set {U} suppressed.

Remark 3

Some remarks on the definition of functional series:

  • This is enough to imply that {G} is holomorphic (and thus analytic) on {U}.
  • For experts: note that I’m including the domain {U} as part of the data required to specify {G}, which makes the presentation cleaner. Most sources do something with “radius of convergence”; I will blissfully ignore this, leaving this data implicitly captured by {U}.
  • For experts: perhaps non-standardly, I require {U \neq \{0\}}, i.e. the series must converge on a disk of positive radius; otherwise I can’t take derivatives, etc.

Thus formal and functional series, despite having the same notation, have different types: a formal series {F} is a sequence, while a functional series {G} is a function that happens to be expressible as an infinite sum within its domain.

Of course, from every functional series {G} we can extract its coefficients and make them into a formal series {F}. So, for lack of better notation:

Definition 4

If {F = (a_n)_n} is a formal series, and {G : U \rightarrow \mathbb C} is a functional series whose coefficients equal {F}, then we write {F \simeq G}.

3. Finite operations

Now that we have formal and functional series, we can define sums. Since these are different types of objects, we will have to run definitions in parallel and then ideally check that they respect {\simeq}.

For formal series:

Definition 5

Let {F_1 = (a_n)_n} and {F_2 = (b_n)_n} be formal series. Then we set

\displaystyle \begin{aligned} (a_n)_n \pm (b_n)_n &= (a_n \pm b_n)_n \\ (a_n)_n \cdot (b_n)_n &= \left( \textstyle\sum_{j=0}^n a_jb_{n-j} \right)_n. \end{aligned}

This makes {\mathbb C[ [x] ]} into a ring, with additive identity {(0,0,0,\dots)} and multiplicative identity {(1,0,0,\dots)}.

We also define the derivative of {F = (a_n)_n} by {F' = ((n+1)a_{n+1})_n}.

It’s probably more intuitive to write these definitions as

\displaystyle \begin{aligned} \sum_n a_n x^n \pm \sum_n b_n x^n &= \sum_n (a_n \pm b_n) x^n \\ \left( \sum_n a_n x^n \right) \left( \sum_n b_n x^n \right) &= \sum_n \left( \sum_{j=0}^n a_jb_{n-j} \right) x^n \\ \left( \sum_n a_n x^n \right)' &= \sum_n na_n x^{n-1} \end{aligned}

and in what follows I’ll start to use {\sum_n a_nx^n} more. But officially, all definitions for formal series are in terms of the coefficients alone; the presence of {x} serves as motivation only.

Exercise 6

Show that if {F = \sum_n a_nx^n} is a formal series, then it has a multiplicative inverse if and only if {a_0 \neq 0}.
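
Since a formal series is literally just its coefficient sequence, everything so far can be sketched in a few lines of code. Here is a throwaway Python sketch of Definition 5 and Exercise 6, truncating everything to the first several coefficients (all names are mine, not standard):

```python
# Formal series, truncated to the first PREC coefficients (plain lists).
PREC = 10

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def mul(a, b):
    # Cauchy product: c_n = sum_{j <= n} a_j * b_{n-j}
    c = [0] * PREC
    for i in range(PREC):
        for j in range(PREC - i):
            c[i + j] += a[i] * b[j]
    return c

def inverse(a):
    # Exercise 6: needs a[0] != 0; solve sum_{j<=n} a_j b_{n-j} = [n == 0].
    assert a[0] != 0
    b = [0] * PREC
    b[0] = 1 / a[0]
    for n in range(1, PREC):
        b[n] = -sum(a[j] * b[n - j] for j in range(1, n + 1)) / a[0]
    return b

one_minus_x = [1, -1] + [0] * (PREC - 2)
print(inverse(one_minus_x))                     # geometric series: all 1's
print(mul(one_minus_x, inverse(one_minus_x)))   # [1, 0, 0, ...]
```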

On the other hand, with functional series, the above operations are even simpler:

Definition 7

Let {G_1 : U \rightarrow \mathbb C} and {G_2 : U \rightarrow \mathbb C} be functional series with the same domain {U}. Then {G_1 \pm G_2} and {G_1 \cdot G_2} are defined pointwise.

If {G : U \rightarrow \mathbb C} is a functional series (hence holomorphic), then {G'} is defined pointwise.

If {G} is nonvanishing on {U}, then {1/G : U \rightarrow \mathbb C} is defined pointwise (and otherwise is not defined).

Now, for these finite operations, everything works as you expect:

Theorem 8 (Compatibility of finite operations)

Suppose {F}, {F_1}, {F_2} are formal series, and {G}, {G_1}, {G_2} are functional series {U \rightarrow \mathbb C}. Assume {F \simeq G}, {F_1 \simeq G_1}, {F_2 \simeq G_2}.

  • {F_1 \pm F_2 \simeq G_1 \pm G_2} and {F_1 \cdot F_2 \simeq G_1 \cdot G_2}.
  • {F' \simeq G'}.
  • If {1/G} is defined, then {1/F} is defined and {1/F \simeq 1/G}.

So far so good: everything behaves as long as we’re doing finite operations. But once we step beyond that, things begin to go haywire.

4. Limits

We need to start considering limits of {(F_k)_k} and {(G_k)_k}, since we are trying to make progress towards infinite sums and products. Once we do this, things start to burn.

Definition 9

Let {F_1 = \sum_n a_n x^n} and {F_2 = \sum_n b_n x^n} be formal series, and define the difference by

\displaystyle d(F_1, F_2) = \begin{cases} 2^{-n} & a_n \neq b_n, \; n \text{ minimal} \\ 0 & F_1 = F_2. \end{cases}

This function makes {\mathbb C[[x]]} into a metric space, so we can discuss limits in this space. (In fact, it is a normed vector space with norm {\left\lVert F \right\rVert = d(F,0)}.)

Thus, {\lim_{k \rightarrow \infty} F_k = F} if each coefficient of {x^n} eventually stabilizes as {k \rightarrow \infty}. For example, as formal series we have that {(1,-1,0,0,\dots)}, {(1,0,-1,0,\dots)}, {(1,0,0,-1,\dots)} converges to {1 = (1,0,0,0,\dots)}, which we write as

\displaystyle \lim_{k \rightarrow \infty} (1 - x^k) = 1 \qquad \text{as formal series}.
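
(Here is a throwaway sketch of the metric and this example, with series represented as coefficient functions; the names are mine.)

```python
# The metric d(F1, F2) = 2^{-n}, n the first index where coefficients differ.
# Series are functions n -> coefficient of x^n; we only probe finitely far.
def d(F1, F2, prec=64):
    for n in range(prec):
        if F1(n) != F2(n):
            return 2.0 ** (-n)
    return 0.0  # agree up to x^prec; treated as equal for this demo

def F(k):  # F_k = 1 - x^k
    return lambda n: 1 if n == 0 else (-1 if n == k else 0)

one = lambda n: 1 if n == 0 else 0
for k in [1, 2, 5, 10]:
    print(k, d(F(k), one))  # prints 2^{-k}, which tends to 0
```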

As for functional series, since they are functions on the same open set {U}, we can use pointwise convergence or the stronger uniform convergence; we’ll say explicitly which one we’re doing.

Example 10 (Limits don’t work at all)

In what follows, {F_k \simeq G_k} for every {k}.

  • Here is an example showing that if {\lim_k F_k = F}, the functions {G_k} may not converge even pointwise. Indeed, just take {F_k = 1 - x^k} as before, and let {U = \{ z : |z| < 2 \}}.
  • Here is an example showing that even if {G_k \rightarrow G} uniformly, {\lim_k F_k} may not exist. Take {G_k = 1 - 1/k} as constant functions. Then {G_k \rightarrow 1} uniformly, but {\lim_k F_k} doesn’t exist, because the constant coefficient {1 - 1/k} never stabilizes.
  • The following example from this math.SE answer by Robert Israel shows that it’s possible that {F = \lim_k F_k} exists, and {G_k \rightarrow G} pointwise, and still {F \not\simeq G}. Let {U} be the open unit disk, and set

    \displaystyle \begin{aligned} A_k &= \{z = r e^{i\theta} \mid 2/k \le r \le 1, \; 0 \le \theta \le 2\pi - 1/k\} \\ B_k &= \left\{ |z| \le 1/k \right\} \end{aligned}

    for {k \ge 1}. By Runge’s theorem there’s a polynomial {p_k(z)} such that

    \displaystyle |p_k(z) - 1/z^{k}| < 1/k \text{ on } A_k \qquad \text{and} \qquad |p_k(z)| < 1/k \text{ on }B_k.

    Then

    \displaystyle G_k(z) = z^{k+1} p_k(z)

    is the desired counterexample (with {F_k} being the sequence of coefficients of {G_k}). Indeed, by construction {\lim_k F_k = 0}, since {\left\lVert F_k \right\rVert \le 2^{-k}} for each {k}. Alas, {|G_k(z) - z| \le 2/k} for {z \in A_k \cup B_k}, so {G_k} converges pointwise to the identity function {G(z) = z}.

To be fair, we do have the following saving grace:

Theorem 11 (Uniform convergence and both limits exist is sufficient)

Suppose that {G_k \rightarrow G} converges uniformly. Then if {F_k \simeq G_k} for every {k}, and {\lim_k F_k = F}, then {F \simeq G}.

Proof: Here is a proof, adapted from this math.SE answer by Joey Zhou. Let {g(z) = \sum_k a_k z^k} and {g_n(z) = \sum_k a^{(n)}_k z^k} denote {G} and {G_n}. It suffices to show that, for each {k}, the {k}-th coefficient of {F} equals {a_k}. Choose any {r > 0} such that the circle {|z| = r} lies inside {U}. By Cauchy’s integral formula, we have

\displaystyle \begin{aligned} \left|a_k - a^{(n)}_k\right| &= \left|\frac{1}{2\pi i} \int\limits_{|z|=r}{\frac{g(z)-g_n(z)}{z^{k+1}}\text{ d}z}\right| \\ & \le\frac{1}{2\pi}(2\pi r)\frac{1}{r^{k+1}}\max\limits_{|z|=r}{|g(z)-g_n(z)|} \xrightarrow{n\rightarrow\infty} 0 \end{aligned}

since {g_n} converges uniformly to {g} on {U}. Hence, {a_k = \lim\limits_{n\rightarrow\infty}{a^{(n)}_k}}. But {\lim_n F_n = F} means exactly that {a^{(n)}_k} is eventually constant and equal to the {k}-th coefficient of {F}, so the result follows. \Box

The take-away from this section is that limits are relatively poorly behaved.

5. Infinite sums and products

Naturally, infinite sums and products are defined by taking the limit of the partial sums and partial products, respectively. The following example (from math.SE again) shows the nuances of this behavior.

Example 12 (On {e^{1+x}})

The expression

\displaystyle \sum_{n=0}^\infty \frac{(1+x)^n}{n!} = \lim_{N \rightarrow \infty} \sum_{n=0}^N \frac{(1+x)^n}{n!}

does not make sense as a formal series: the constant term of the partial sum changes with every {N}, and hence never stabilizes.

But this does converge (uniformly, even) to a functional series on {U = \mathbb C}, namely to {e^{1+x}}.
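
(Numerically, the constant terms of the partial sums drift towards {e} without ever stabilizing, which is exactly the failure described above.)

```python
# Constant term of sum_{n <= N} (1+x)^n / n! is sum_{n <= N} 1/n!.
# It changes at every step (tending to e), so no formal limit exists.
from math import e, factorial

for N in [1, 2, 5, 10, 20]:
    print(N, sum(1 / factorial(n) for n in range(N + 1)))
print("e =", e)
```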

Exercise 13

Let {(F_k)_{k \ge 1}} be formal series.

  • Show that an infinite sum {\sum_{k=1}^\infty F_k(x)} converges as formal series exactly when {\lim_k \left\lVert F_k \right\rVert = 0}.
  • Assume for convenience {F_k(0) = 1} for each {k}. Show that an infinite product {\prod_{k=1}^{\infty} F_k} converges as formal series exactly when {\lim_k \left\lVert F_k - 1 \right\rVert = 0}. (A sketch of this part is below.)
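
Here is the second part of the exercise in action, a throwaway sketch multiplying out {(1+x)(1+x^2)(1+x^4)\cdots}: with {F_k = 1 + x^{2^k}} we have {\left\lVert F_k - 1 \right\rVert = 2^{-2^k} \rightarrow 0}, and indeed each coefficient stabilizes (to {1}, i.e. the product converges formally to {\frac{1}{1-x}}).

```python
# Partial products of (1+x)(1+x^2)(1+x^4)..., truncated to PREC coefficients.
# Each coefficient stabilizes (here to 1), illustrating formal convergence.
PREC = 16

def mul(a, b):
    c = [0] * PREC
    for i in range(PREC):
        for j in range(PREC - i):
            c[i + j] += a[i] * b[j]
    return c

prod = [1] + [0] * (PREC - 1)
for j in range(6):
    f = [0] * PREC
    f[0] = 1
    if 2 ** j < PREC:
        f[2 ** j] = 1   # the factor 1 + x^{2^j}; invisible past the cutoff
    prod = mul(prod, f)
    print(j, prod)       # all-ones once 2^j exceeds PREC
```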

Now the upshot is that one example of a convergent formal sum is the expression {\lim_{N} \sum_{n=0}^N a_nx^n} itself! This means we can use standard “radius of convergence” arguments to transfer a formal series into a functional one.

Theorem 14 (Constructing {G} from {F})

Let {F = \sum a_nx^n} be a formal series and let

\displaystyle r = \frac{1}{\limsup_n \sqrt[n]{|a_n|}}.

If {r > 0} then there exists a functional series {G} on {U = \{ |z| < r \}} such that {F \simeq G}.

Proof: Let {F_k} and {G_k} be the corresponding partial sums of {a_0x^0} to {a_kx^k}. Then by the Cauchy–Hadamard theorem, we have {G_k \rightarrow G} uniformly on compact subsets of {U}, in particular uniformly on each smaller disk {|z| < r' < r}. Also, {\lim_k F_k = F} by construction, so applying Theorem 11 on any such smaller disk finishes. \Box

This works less well with products: for example we have

\displaystyle 1 \equiv (1-x) \prod_{j \ge 0} (1+x^{2^j})

as formal series, but we can’t “plug in {x=1}”: every partial product on the right vanishes at {x = 1}, while the left-hand side is {1}.
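
To see the dichotomy concretely, note the partial products telescope: {(1-x)\prod_{j=0}^{J}(1+x^{2^j}) = 1 - x^{2^{J+1}}}. A quick numerical sketch:

```python
# Partial products (1-x) * prod_{j<=J} (1+x^{2^j}) = 1 - x^{2^{J+1}}:
# they tend to 1 for |x| < 1, but every one of them vanishes at x = 1.
def partial_product(J, x):
    out = 1.0 - x
    for j in range(J + 1):
        out *= 1.0 + x ** (2 ** j)
    return out

for J in [2, 5, 10]:
    print(J, partial_product(J, 0.9), partial_product(J, 1.0))
```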

6. Finishing the original problem

We finally return to the original problem: we wish to show that the equality

\displaystyle P(x) = \prod_{j=1}^\infty (1 - x^{s_j})

cannot hold as formal series. Unwinding the definition, this just means

\displaystyle \lim_{N \rightarrow \infty} \prod_{j=1}^N\left( 1 - x^{s_j} \right) = P(x)

as formal series.

Here is a solution obtained by considering coefficients only, presented by Qiaochu Yuan from this MathOverflow question.

Both sides have constant coefficient {1}, so we may invert them; thus it suffices to show we cannot have

\displaystyle \frac{1}{P(x)} = \frac{1}{\prod_{j=1}^{\infty} (1 - x^{s_j})}

as formal power series.

The coefficients on the LHS grow like a polynomial times an exponential, since {1/P(x)} is a rational function (expand it by partial fractions).

On the other hand, the coefficients of the RHS count partitions of {n} into parts from {\{s_1, s_2, \dots\}}. This grows both strictly faster than any polynomial (by truncating the product) and strictly slower than any exponential: it is at most the usual partition function {p(n)}, corresponding to the case {s_j = j}, and {p(n) \sim \frac{1}{4n\sqrt3} e^{\pi \sqrt{2n/3}}} is subexponential. So the two rates of growth can’t match.
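
As a final sanity check on the growth claims, here is a quick sketch in the extreme case {s_j = j}: the double loop below is the standard dynamic-programming expansion of {\prod_{j \ge 1} (1-x^j)^{-1}}, and the last column is the exponential factor {e^{\pi\sqrt{2n/3}}} from the Hardy–Ramanujan asymptotic quoted above.

```python
# Coefficients of 1/prod_{j>=1} (1 - x^j): the partition function p(n).
# Multiplying by 1/(1 - x^s) is a single in-place ascending pass.
from math import exp, pi, sqrt

N = 60
coef = [1] + [0] * N
for s in range(1, N + 1):
    for n in range(s, N + 1):
        coef[n] += coef[n - s]

for n in [10, 30, 60]:
    # p(n) versus the exponential factor exp(pi * sqrt(2n/3)):
    print(n, coef[n], round(exp(pi * sqrt(2 * n / 3))))
```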