# Circular optimization

This post will mostly be focused on construction-type problems in which you’re asked to construct something satisfying property ${P}$.

Minor spoilers for USAMO 2011/4, IMO 2014/5.

## 1. What is a leap of faith?

Usually, a good thing to do whenever you can is to make “safe moves” which are implied by the property ${P}$. Here’s a simple example.

Example 1 (USAMO 2011)

Find an integer ${n}$ such that the remainder when ${2^n}$ is divided by ${n}$ is odd.

It is easy to see, for example, that ${n}$ itself must be odd for this to be true, and so we can make our life easier without incurring any worries by restricting our search to odd ${n}$. You might therefore call this an “optimization”: a kind of move that makes the problem easier, essentially for free.

But oftentimes such “safe moves” are not enough to solve the problem, and eventually you have to make “leap-of-faith moves”. For example, in the above problem we might try to focus our attention on numbers ${n = pq}$ for primes ${p}$ and ${q}$. This does make our life easier, because we’ve zoomed in on a special type of ${n}$ which is easy to compute with. But it runs the risk that maybe there is no such example of ${n}$, or that the smallest one is difficult to find.

## 2. Circular reasoning can sometimes save the day

However, a strange type of circular reasoning can sometimes happen, in which a move that would otherwise be a leap-of-faith is actually known to be safe because you also know that the problem statement you are trying to prove is true. I can hardly do better than to give the most famous example:

Example 2 (IMO 2014)

For every positive integer ${n}$, the Bank of Cape Town issues coins of denomination ${\frac 1n}$. Given a finite collection of such coins (of not necessarily different denominations) with total value at most ${99 + \frac12}$, prove that it is possible to split this collection into ${100}$ or fewer groups, such that each group has total value at most ${1}$.

Let’s say in this problem we find ourselves holding two coins of weight ${1/6}$. Perhaps we wish to put these coins in the same group, so that we have one less decision to make. However, this could rightly be viewed as a “leap-of-faith”, because there’s no logical reason why the task must remain possible after making this first move.

Except there is a non-logical reason: this is the same as trading the two coins of weight ${1/6}$ for a single coin of weight ${1/3}$. Why is the task still possible? Because the problem says so: the very problem we are trying to solve includes this case, too. If the problem is going to be true, then it had better be true after we make this trade.

Thus by a perverse circular reasoning we can rest assured that our leap-of-faith here will not come back to bite us. (And in fact, this optimization is a major step of the solution.)

## 3. More examples of circular optimization

Here are some more examples of problems you can try that I think have a similar idea.

Problem 1

Prove that in any connected graph ${G}$ on ${2004}$ vertices one can delete some edges to obtain a graph (also with ${2004}$ vertices) whose degrees are all odd.

Problem 2 (USA TST 2017)

In a sports league, each team uses a set of at most ${t}$ signature colors. A set ${S}$ of teams is color-identifiable if one can assign each team in ${S}$ one of their signature colors, such that no team in ${S}$ is assigned any signature color of a different team in ${S}$. For all positive integers ${n}$ and ${t}$, determine the maximum integer ${g(n,t)}$ such that: In any sports league with exactly ${n}$ distinct colors present over all teams, one can always find a color-identifiable set of size at least ${g(n,t)}$.

Feel free to post more examples in the comments.

# A story of block-ascending permutations

I recently had a combinatorics paper appear in the EJC. In this post I want to brag a bit by telling the “story” of this paper: what motivated it, how I found the original conjecture, the process that eventually led me to the proof, and so on.

This work was part of the Duluth REU 2017, and I thank Joe Gallian for suggesting the problem.

## 1. Background

Let me begin by formulating the problem as it was given to me. First, here is the definition and notation for a “block-ascending” permutation.

Definition 1

For nonnegative integers ${a_1}$, …, ${a_n}$, an ${(a_1, \dots, a_n)}$-ascending permutation is a permutation on ${\{1, 2, \dots, a_1 + \dots + a_n\}}$ whose descent set is contained in ${\{a_1, a_1+a_2, \dots, a_1+\dots+a_{n-1}\}}$. In other words, the permutation ascends in blocks of lengths ${a_1}$, ${a_2}$, …, ${a_n}$, and thus has the form

$\displaystyle \pi = \pi_{11} \dots \pi_{1a_1} | \pi_{21} \dots \pi_{2a_2} | \dots | \pi_{n1} \dots \pi_{na_n}$

for which ${\pi_{i1} < \pi_{i2} < \dots < \pi_{ia_i}}$ for all ${i}$.

It turns out that block-ascending permutations which also avoid an increasing subsequence of certain length have nice enumerative properties. To this end, we define the following notation.

Definition 2

Let ${\mathcal L_{k+2}(a_1, \dots, a_n)}$ denote the set of ${(a_1, \dots, a_n)}$-ascending permutations which avoid the pattern ${12 \dots (k+2)}$.

(The reason for using ${k+2}$ will be explained later.) In particular, ${\mathcal L_{k+2}(a_1 ,\dots, a_n) = \varnothing}$ if ${\max \{a_1, \dots, a_n\} \ge k+2}$.

Example 3

Here is a picture of a permutation in ${\mathcal L_7(3,2,4)}$ (but not in ${\mathcal L_6(3,2,4)}$, since one can see an increasing length ${6}$ subsequence shaded). We would denote it ${134|69|2578}$.

Now on to the results. A 2011 paper by Joel Brewster Lewis (JBL) proved (among other things) the following result:

Theorem 4 (Lewis 2011)

The sets ${\mathcal L_{k+2}(k,k,\dots,k)}$ and ${\mathcal L_{k+2}(k+1,k+1,\dots,k+1)}$ are in bijection with Young tableaux of shape ${\left< (k+1)^n \right>}$.

Remark 5

When ${k=1}$, this implies that ${\mathcal L_3(1,1,\dots,1)}$, the set of ${123}$-avoiding permutations of length ${n}$, is counted by the Catalan numbers; so is ${\mathcal L_3(2,\dots,2)}$, the set of ${123}$-avoiding zig-zag permutations.

Just before the Duluth REU in 2017, Mei and Wang proved that in fact, in Lewis’ result one may freely mix ${k}$’s and ${k+1}$’s. To simplify notation:

Definition 6

Let ${I \subseteq \left\{ 1,\dots,n \right\}}$. Then ${\mathcal L(n,k,I)}$ denotes ${\mathcal L_{k+2}(a_1,\dots,a_n)}$ where

$\displaystyle a_i = \begin{cases} k+1 & i \in I \\ k & i \notin I. \end{cases}$

Theorem 7 (Mei, Wang 2017)

The ${2^n}$ sets ${\mathcal L(n,k,I)}$ are also in bijection with Young tableaux of shape ${\left< (k+1)^n \right>}$.

The proof uses the RSK correspondence, but the authors posed at the end of the paper the following open problem:

Problem

Find a direct bijection between the ${2^n}$ sets ${\mathcal L(n,k,I)}$ above, not involving the RSK correspondence.

This was the first problem that I was asked to work on. (I remember I received the problem on Sunday morning; this actually matters a bit for the narrative later.)

At this point I should pause to mention that this ${\mathcal L_{k+2}(\dots)}$ notation is my own invention, and did not exist when I originally started working on the problem. Indeed, all the results above are restricted to the case where ${a_i \in \{k,k+1\}}$ for each ${i}$, and so it was unnecessary to think about other possibilities for ${a_i}$: Mei and Wang’s paper uses the notation ${\mathcal L(n,k,I)}$. So while I’ll continue to use the ${\mathcal L_{k+2}(\dots)}$ notation in this blog post for readability, it will make some of the steps look more obvious than they actually were.

## 2. Setting out

Mei and Wang’s paper originally suggested that rather than finding a bijection ${\mathcal L(n,k,I) \rightarrow \mathcal L(n,k,J)}$ for any ${I}$ and ${J}$, it would suffice to biject

$\displaystyle \mathcal L(n,k,I) \rightarrow \mathcal L(n,k,\varnothing)$

and then compose two such bijections. I didn’t see why this should be much easier, but it didn’t seem to hurt either.

As an example, they show how to do this bijection with ${I = \{1\}}$ and ${I = \{n\}}$. Indeed, suppose ${I = \{1\}}$. Then ${\pi_{11} < \pi_{12} < \dots < \pi_{1(k+1)}}$ is an increasing sequence of length ${k+1}$ right at the start of ${\pi}$. So ${\pi_{1(k+1)}}$ had better be the largest element of the permutation: otherwise the largest element, appearing later in ${\pi}$, would complete an increasing subsequence of length ${k+2}$! So removing ${\pi_{1(k+1)}}$ gives a bijection ${\mathcal L(n,k,\{1\}) \rightarrow \mathcal L(n,k,\varnothing)}$.

But if you look carefully, this proof does essentially nothing with the later blocks. The exact same proof gives:

Proposition 8

Suppose ${1 \notin I}$. Then there is a bijection

$\displaystyle \mathcal L(n,k,I \cup \{1\}) \rightarrow \mathcal L(n,k,I)$

by deleting the ${(k+1)}$st element of the permutation (which must be the largest one).

Once I found this proposition I rejected the initial suggestion of specializing to ${\mathcal L(n,k,I) \rightarrow \mathcal L(n,k,\varnothing)}$. The “easy case” I had found told me that I could take a set ${I}$ and delete the single element ${1}$ from it. So my intuition from this toy example told me that it would be easier to find bijections ${\mathcal L(n,k,I) \rightarrow \mathcal L(n,k,I')}$ where ${I}$ and ${I'}$ were only “a little different”, and hope that the resulting bijection only changed things a little bit (in the same way that in the toy example, all the bijection did was delete one element). So I shifted to trying to find small changes of this form.

## 3. The fork in the road

### 3.1. Wishful thinking

I had a lucky break of wishful thinking here. In the notation ${\mathcal L_{k+2}(a_1, \dots, a_n)}$ with ${a_i \in \{k,k+1\}}$, I had found that one could replace ${a_1}$ with either ${k}$ or ${k+1}$ freely. (But this proof relied heavily on the fact that the block was on the far left.) So what other changes might I be able to make?

There were two immediate possibilities that came to my mind.

• Deletion: We already showed ${a_1}$ could be changed from ${k+1}$ to ${k}$. If we can do a similar deletion with ${a_i}$ for any ${i}$, not just ${i=1}$, then we would be done.
• Swapping: If we can show that two adjacent ${a_i}$‘s could be swapped, that would be sufficient as well. (It’s also possible to swap non-adjacent ${a_i}$‘s, but that would cause more disruption for no extra benefit.)

Now, I had two paths that both seemed plausible to chase after. How was I supposed to know which one to pick? (Of course, it’s possible that neither works, but you have to start somewhere.)

Well, maybe the correct thing to do would have been to just try both. But it was Sunday afternoon by the time I got to this point. Granted, it was summer already, but I knew that come Monday I would have doctor appointments and other trivial errands to distract me, so I decided I should pick one of the two and throw the rest of the day into it.

(I confess that I actually already had a prior guess: the deletion approach seemed less likely to work than the swapping approach. In the deletion approach, if ${i}$ is somewhere in the middle of the permutation, it seemed like deleting an element could cause a lot of disruption. But the swapping approach preserved the total number of elements involved, and so seemed more likely to preserve structure. But really I was just grasping at straws.)

### 3.2. Enter C++

Yeah, I cheated. Sorry.

Those of you that know anything about my style of math know that I am an algebraist by nature — sort of. It’s more accurate to say that I depend on having concrete examples to function. True, I can’t do complexity theory for my life, but I also haven’t been able to get the hang of algebraic geometry, despite having tried to learn it three or four times by now. But enumerative combinatorics? OH LOOK EXAMPLES.

Here’s the plan: let ${k=3}$. Then using a C++ computer program:

• Enumerate all the permutations in ${S = \mathcal L_{k+2}(3,4,3,4)}$.
• Enumerate all the permutations in ${A = \mathcal L_{k+2}(3,3,3,4)}$.
• Enumerate all the permutations in ${B = \mathcal L_{k+2}(3,3,4,4)}$.

If the deletion approach is right, then I would hope ${S}$ and ${A}$ look pretty similar. On the flip side, if the swapping approach is right, then ${S}$ and ${B}$ should look close to each other instead.

It’s moments like this where my style of math really shines. I don’t have to make decisions like the above off gut-feeling: do the “data science” instead.

### 3.3. A twist of fate

Except this isn’t actually what I did, since there was one problem. Computing the longest increasing subsequence of a length ${N}$ permutation takes ${O(N \log N)}$ time, and there are ${N!}$ or so permutations. But when ${N = 3+4+3+4=14}$, we have ${N! \cdot N \log N \approx 3 \cdot 10^{12}}$, which is a pretty big number. Unfortunately, my computer is not really that fast, and I didn’t really have the patience to implement the “correct” algorithms to bring the runtime down.

The solution? Use ${N = 1+4+3+2 = 10}$ instead.

In a deep irony that I didn’t realize at the time, it was at this moment that I introduced the ${\mathcal L_{k+2}(a_1, \dots, a_n)}$ notation, and for the first time allowed the ${a_i}$ to not be in ${\{k,k+1\}}$. My reasoning was that since I was only doing this for heuristic reasons, I could instead work with ${S = \mathcal L_{k+2}(1,4,3,2)}$ and probably not change much about the structure of the problem, while with ${N = 1+4+3+2 = 10}$ the program would run thousands of times faster. This was okay since all I wanted to do was see how much changing the “middle” would disrupt the structure.

And so the new plan was:

• Enumerate all the permutations in ${S = \mathcal L_{k+2}(1,4,3,2)}$.
• Enumerate all the permutations in ${A = \mathcal L_{k+2}(1,3,3,2)}$.
• Enumerate all the permutations in ${B = \mathcal L_{k+2}(1,3,4,2)}$.

I admit I never actually ran the enumeration with ${A}$, because the route with ${S}$ and ${B}$ turned out to be even more promising than I expected. When I compared the empirical data for the sets ${S}$ and ${B}$, I found that the number of permutations with any particular triple ${(\pi_1, \pi_9, \pi_{10})}$ was equal. In other words, the outer blocks were preserved: the bijection

$\displaystyle \mathcal L_{k+2}(1,4,3,2) \rightarrow \mathcal L_{k+2}(1,3,4,2)$

does not tamper with the outside blocks of length ${1}$ and ${2}$.

This meant I was ready to make the following conjecture. Suppose ${a_i = k}$, ${a_{i+1} = k+1}$. There is a bijection

$\displaystyle \mathcal L_{k+2}(a_1, \dots, a_i, a_{i+1}, \dots, a_n) \rightarrow \mathcal L_{k+2}(a_1, \dots, a_{i+1}, a_{i}, \dots, a_n)$

which only involves rearranging the elements of the ${i}$th and ${(i+1)}$st blocks.

## 4. Rooting out the bijection

At this point I was in quite a good position. I had pinned down the problem to finding a particular bijection that I was confident had to exist, since it was showing up in the empirical data.

Let’s call this mythical bijection ${\mathbf W}$. How could I figure out what it was?

### 4.1. Hunch: ${\mathbf W}$ preserves order-isomorphism

Let me quickly introduce a definition.

Definition 9

We say two words ${a_1 \dots a_m}$ and ${b_1 \dots b_m}$ are order-isomorphic if ${a_i < a_j}$ if and only if ${b_i < b_j}$. Order-isomorphism gives equivalence classes, and each class has a canonical representative whose letters are ${\{1,2,\dots,m\}}$; this is called a reduced word.

Example 10

The words ${13957}$, ${12846}$ and ${12534}$ are order-isomorphic; the last is reduced.

Now I guessed one more property of ${\mathbf W}$: this ${\mathbf W}$ should preserve order-isomorphism.

What do I mean by this? Suppose in one context ${139 | 57}$ changed to ${39 | 157}$; then we would expect that in another situation we should have ${124 | 68}$ changing to ${24 | 168}$. Indeed, we expect ${\mathbf W}$ (empirically) to not touch surrounding outside blocks, and so it would be very strange if ${\mathbf W}$ behaved differently due to far-away numbers it wasn’t even touching.

So actually I’ll just write

$\displaystyle \mathbf W(123|45) = 23|145$

for this example, reducing the words in question.

### 4.2. Keep cheating

With this hunch it’s possible to cheat with C++ again. Here’s how.

Let’s for concreteness suppose ${k=2}$ and consider the particular bijection

$\displaystyle \mathcal L_{k+2}(1,3,2,1) \rightarrow \mathcal L_{k+2}(1,2,3,1).$

Well, it turns out if you look at the data:

• The only element of ${\mathcal L_{k+2}(1,3,2,1)}$ which starts with ${2}$ and ends with ${5}$ is ${2|147|36|5}$.
• The only element of ${\mathcal L_{k+2}(1,2,3,1)}$ which starts with ${2}$ and ends with ${5}$ is ${2|47|136|5}$.

So that means that ${147 | 36}$ is changed to ${47 | 136}$. Thus the empirical data shows that

$\displaystyle \mathbf W(135|24) = 35|124.$

In general, it might not be that clear cut. For example, if we look at the permutations starting with ${2}$ and ending with ${4}$, there is more than one.

• ${2 | 1 5 7 | 3 6 | 4}$ and ${2 | 1 6 7 | 3 5 | 4}$ are both in ${\mathcal L_{k+2}(1,3,2,1)}$.
• ${2 | 5 7 | 1 3 6 | 4}$ and ${2 | 6 7 | 1 3 5 | 4}$ are both in ${\mathcal L_{k+2}(1,2,3,1)}$.

Thus

$\displaystyle \mathbf W( \{135|24, 145|23\} ) = \{35|124, 45|123\}$

but we can’t tell which one goes to which (although you might be able to guess).

Fortunately, there is lots of data. This example narrowed ${135|24}$ down to two values, but if you look at other places you might have different data on ${135|24}$. Since we think ${\mathbf W}$ is behaving the same “globally”, we can piece together different pieces of data to get narrower sets. Even better, ${\mathbf W}$ is a bijection, so once we match either of ${135|24}$ or ${145|23}$, we’ve matched the other.

You know what this sounds like? Perfect matchings.

So here’s the experimental procedure.

• Enumerate all permutations in ${\mathcal L_{k+2}(2,3,4,2)}$ and ${\mathcal L_{k+2}(2,4,3,2)}$ (now with ${k=3}$, so that blocks of length ${4}$ are allowed).
• Take each possible tuple ${(\pi_1, \pi_2, \pi_{10}, \pi_{11})}$, and look at the permutations that start and end with those particular four elements. Record the reductions of ${\pi_3\pi_4\pi_5|\pi_6\pi_7\pi_8\pi_9}$ and ${\pi_3\pi_4\pi_5\pi_6|\pi_7\pi_8\pi_9}$ for all these permutations. We call these input words and output words, respectively. Each output word is a “candidate” of ${\mathbf W}$ for an input word.
• For each input word ${a_1a_2a_3|b_1b_2b_3b_4}$ that appeared, take the intersection, over all of its occurrences, of its sets of candidate output words. This gives a bipartite graph ${G}$, matching input words to their candidates.
• Find perfect matchings of the graph.

And with any luck that would tell us what ${\mathbf W}$ is.

### 4.3. Results

Luckily, the bipartite graph is quite sparse, and there was only one perfect matching.

```
246|1357 => 2467|135
247|1356 => 2457|136
256|1347 => 2567|134
257|1346 => 2357|146
267|1345 => 2367|145
346|1257 => 3467|125
347|1256 => 3457|126
356|1247 => 3567|124
357|1246 => 1357|246
367|1245 => 1367|245
456|1237 => 4567|123
457|1236 => 1457|236
467|1235 => 1467|235
567|1234 => 1567|234
```

If you look at the data, well, there are some clear patterns. Exactly one number is “moving” over from the right half, each time. Also, if ${7}$ is on the right half, then it always moves over.

Anyways, if you stare at this for an hour, you can actually figure out the exact rule:

Claim 11

Given an input ${a_1a_2a_3|b_1b_2b_3b_4}$, move ${b_{i+1}}$ over to the left block, where ${i}$ is the largest index for which ${a_i < b_{i+1}}$; if no such index exists, move ${b_1}$ (which is necessarily equal to ${1}$).

And indeed, once I had this bijection, it took maybe only another hour of thinking to verify that it worked as advertised, thus solving the original problem.

Rather than writing up what I had found, I celebrated that Sunday evening by playing Wesnoth for 2.5 hours.

## 5. Generalization

### 5.1. Surprise

On Monday morning I was mindlessly feeding inputs to the program I had worked on earlier and finally noticed that in fact ${\mathcal L_6(1,3,5,2)}$ and ${\mathcal L_6(1,5,3,2)}$ also had the same cardinality. Huh.

It seemed too good to be true, but I played around some more, and sure enough, the cardinality ${\#\mathcal L_{k+2}(a_1, \dots, a_n)}$ seemed not to depend on the order of the ${a_i}$’s at all. And so at last I stumbled upon the final form of the conjecture, realizing that the assumption ${a_i \in \{k,k+1\}}$ I had been working with all along was a red herring, and that the bijection really existed in much vaster generality. There is a bijection

$\displaystyle \mathcal L_{k+2}(a_1, \dots, a_i, a_{i+1}, \dots, a_n) \rightarrow \mathcal L_{k+2}(a_1, \dots, a_{i+1}, a_{i}, \dots, a_n)$

which only involves rearranging the elements of the ${i}$th and ${(i+1)}$st blocks.

It also meant I had more work to do, and so I was now glad that I hadn’t written up my work from yesterday night.

### 5.2. More data science

I re-ran the experiment I had done before, now with ${\mathcal L_7(2,3,5,2) \rightarrow \mathcal L_7(2,5,3,2)}$. (This was interesting, because the ${8}$ elements in question could now have longest increasing subsequence of length either ${5}$ or ${6}$.)

The data I obtained was:

```
246|13578 => 24678|135
247|13568 => 24578|136
248|13567 => 24568|137
256|13478 => 25678|134
257|13468 => 23578|146
258|13467 => 23568|147
267|13458 => 23678|145
268|13457 => 23468|157
278|13456 => 23478|156
346|12578 => 34678|125
347|12568 => 34578|126
348|12567 => 34568|127
356|12478 => 35678|124
357|12468 => 13578|246
358|12467 => 13568|247
367|12458 => 13678|245
368|12457 => 13468|257
378|12456 => 13478|256
456|12378 => 45678|123
457|12368 => 14578|236
458|12367 => 14568|237
467|12358 => 14678|235
468|12357 => 12468|357
478|12356 => 12478|356
567|12348 => 15678|234
568|12347 => 12568|347
578|12346 => 12578|346
678|12345 => 12678|345
```

Okay, so it looks like:

• exactly two numbers are moving each time, and
• the length of the longest increasing subsequence is preserved.

Eventually, I was able to work out the details, but they’re more involved than I want to reproduce here. But the idea is that you can move elements “one at a time”: something like

$\displaystyle \mathcal L_{k+2}(7,4) \rightarrow \mathcal L_{k+2}(6,5) \rightarrow \mathcal L_{k+2}(5,6) \rightarrow \mathcal L_{k+2}(4,7)$

while preserving the length of increasing subsequences at each step.

So, together with the easy observation from the beginning, this not only resolves the original problem, but also gives an elegant generalization. I had now proved:

Theorem 12

For any ${a_1}$, …, ${a_n}$, the cardinality

$\displaystyle \# \mathcal L_{k+2}(a_1, \dots, a_n)$

does not depend on the order of the ${a_i}$‘s.

## 6. Discovered vs invented

Whenever I look back on this, I can’t help thinking just how incredibly lucky I got on this project.

There’s this perpetual debate about whether mathematics is discovered or invented. I think it’s results like this which make the case for “discovered”. I did not really construct the bijection ${\mathbf W}$ myself: it was “already there” and I found it by examining the data. In another world where ${\mathbf W}$ did not exist, all the creativity in the world wouldn’t have changed anything.

So anyways, that’s the behind-the-scenes tour of my favorite combinatorics paper.

# Joyal’s Proof of Cayley’s Tree Formula

I wanted to quickly write this proof up, complete with pictures, so that I won’t forget it again. In this post I’ll give a combinatorial proof (due to Joyal) of the following:

Theorem 1 (Cayley’s Formula)

The number of trees on ${n}$ labelled vertices is ${n^{n-2}}$.

Proof: We are going to construct a bijection between

• Functions ${\{1, 2, \dots, n\} \rightarrow \{1, 2, \dots, n\}}$ (of which there are ${n^n}$) and
• Trees on ${\{1, 2, \dots, n\}}$ with two distinguished nodes ${A}$ and ${B}$ (possibly ${A=B}$).

Let’s look at the first piece of data. We can visualize it as ${n}$ points floating around, each with an arrow going out of it pointing to another point, but possibly with many other arrows coming into it. Such a structure is apparently called a directed pseudoforest. Here is an example when ${n = 9}$.

You’ll notice that in each component, some of the points lie in a cycle and others do not. I’ve colored the former type of points blue, and the corresponding arrows magenta.

Thus a directed pseudoforest can also be specified by

• a choice of some vertices to be in cycles (blue vertices),
• a permutation on the blue vertices (magenta arrows), and
• attachments of trees to the blue vertices (grey vertices and arrows).

Now suppose we take the same information, but replace the permutation on the blue vertices with a total ordering instead (of course there are an equal number of these). Then we can string the blue vertices together as shown below, where the green arrows denote the selected total ordering (in this case ${1 < 9 < 2 < 4 < 8 < 5}$):

This is exactly the data of a tree on the ${n}$ vertices with two distinguished vertices, the first and last in the chain of green (which could possibly coincide). Since each tree arises from exactly ${n^2}$ choices of ${(A, B)}$, the number of trees is ${n^n / n^2 = n^{n-2}}$. $\Box$

# Primes of the form ${x^2+ny^2}$

I’m reading through Primes of the Form ${x^2+ny^2}$, by David Cox (link; it’s good!). Here are the high-level notes I took on the first chapter, which is about the theory of quadratic forms.

(Meta point re blog: I’m probably going to start posting more and more of these more high-level notes/sketches on this blog on topics that I’ve been just learning. Up til now I’ve been mostly only posting things that I understand well and for which I have a very polished exposition. But the perfect is the enemy of the good here; given that I’m taking these notes for my own sake, I may as well share them to help others.)

## 1. Overview

Definition 1

For us a quadratic form is a polynomial ${Q = Q(x,y) = ax^2 + bxy + cy^2}$, where ${a}$, ${b}$, ${c}$ are some integers. We say that it is primitive if ${\gcd(a,b,c) = 1}$.

For example, we have the famous quadratic form

$\displaystyle Q_{\text{Fermat}}(x,y) = x^2+y^2.$

As readers are probably aware, we can say a lot about exactly which integers can be represented by ${Q_{\text{Fermat}}}$: by Fermat’s Christmas theorem, the primes ${p \equiv 1 \pmod 4}$ (and ${p=2}$) can all be written as the sum of two squares, while the primes ${p \equiv 3 \pmod 4}$ cannot. For convenience, let us say that:

Definition 2

Let ${Q}$ be a quadratic form. We say it represents the integer ${m}$ if there exists ${x,y \in \mathbb Z}$ with ${m = Q(x,y)}$. Moreover, ${Q}$ properly represents ${m}$ if one can find such ${x}$ and ${y}$ which are also relatively prime.

The basic question is: what can we say about which primes/integers are properly represented by a quadratic form? In fact, we will later restrict our attention to “positive definite” forms (described later).

For example, Fermat’s Christmas theorem now rewrites as:

Theorem 3 (Fermat’s Christmas theorem for primes)

An odd prime ${p}$ is (properly) represented by ${Q_{\text{Fermat}}}$ if and only if ${p \equiv 1 \pmod 4}$.

The proof of this is classical, see for example my olympiad handout. We also have the formulation for odd integers:

Theorem 4 (Fermat’s Christmas theorem for odd integers)

An odd integer ${m}$ is properly represented by ${Q_{\text{Fermat}}}$ if and only if all prime factors of ${m}$ are ${1 \pmod 4}$.

Proof: For the “if” direction, we use the fact that ${Q_{\text{Fermat}}}$ is multiplicative in the sense that

$\displaystyle (x^2+y^2)(u^2+v^2) = (xu \pm yv)^2 + (xv \mp yu)^2.$

For the “only if” part we use the fact that if a multiple of a prime ${p}$ is properly represented by ${Q_{\text{Fermat}}}$, then so is ${p}$. This follows by noticing that if ${x^2+y^2 \equiv 0 \pmod p}$ (and ${xy \not\equiv 0 \pmod p}$) then ${(x/y)^2 \equiv -1 \pmod p}$. $\Box$

Tangential remark: the two ideas in the proof will grow up in the following way.

• The fact that ${Q_{\text{Fermat}}}$ “multiplies nicely” will grow up to become the so-called composition of quadratic forms.
• The second fact will not generalize for an arbitrary form ${Q}$. Instead, we will see that if a multiple of ${p}$ is represented by a form ${Q}$ then some form of the same “discriminant” will represent the prime ${p}$, but this form need not be the same as ${Q}$ itself.

## 2. Equivalence of forms, and the discriminant

The first thing we should do is figure out when two forms are essentially the same: for example, ${x^2+5y^2}$ and ${5x^2+y^2}$ should clearly be considered the same. More generally, if we think of ${Q}$ as acting on ${\mathbb Z^{\oplus 2}}$ and ${T}$ is any automorphism of ${\mathbb Z^{\oplus 2}}$, then ${Q \circ T}$ should be considered the same as ${Q}$. Specifically,

Definition 5

Two forms ${Q_1}$ and ${Q_2}$ are said to be equivalent if there exists

$\displaystyle T = \begin{pmatrix} p & q \\ r & s \end{pmatrix} \in \text{GL}(2,\mathbb Z)$

such that ${Q_2(x,y) = Q_1(px+ry, qx+sy)}$. We have ${\det T = ps-qr = \pm 1}$ and so we say the equivalence is

• a proper equivalence if ${\det T = +1}$, and
• an improper equivalence if ${\det T = -1}$.

So we generally will only care about forms up to proper equivalence. (It will be useful to distinguish between proper/improper equivalence later.)

Naturally we seek some invariants under this operation. By far the most important is:

Definition 6

The discriminant of a quadratic form ${Q = ax^2 + bxy + cy^2}$ is defined as

$\displaystyle D = b^2-4ac.$

The discriminant is invariant under equivalence (check this). Note also that ${D \equiv 0, 1 \pmod 4}$.

Observe that we have

$\displaystyle 4a \cdot (ax^2+bxy+cy^2) = (2ax + by)^2 - Dy^2.$

So if ${D < 0}$ and ${a > 0}$ (thus ${c > 0}$ too) then ${ax^2+bxy+cy^2 > 0}$ for all ${(x,y) \neq (0,0)}$. Such quadratic forms are called positive definite, and we will restrict our attention to these forms.

Now that we have this invariant, we may as well classify equivalence classes of quadratic forms for a fixed discriminant. It turns out this can be done explicitly.

Definition 7

A quadratic form ${Q = ax^2 + bxy + cy^2}$ is reduced if

• it is primitive and positive definite,
• ${|b| \le a \le c}$, and
• ${b \ge 0}$ if either ${|b| = a}$ or ${a = c}$.

Exercise 8

Check that there are only finitely many reduced forms of a fixed discriminant.

Then the big huge theorem is:

Theorem 9 (Reduced forms give a set of representatives)

Every primitive positive definite form ${Q}$ of discriminant ${D}$ is properly equivalent to a unique reduced form. We call this the reduction of ${Q}$.

Proof: Omitted due to length, but completely elementary. It is a reduction argument with some number of cases. $\Box$

Thus, for any discriminant ${D}$ we can consider the set

$\displaystyle \text{Cl}(D) = \left\{ \text{reduced forms of discriminant } D \right\}$

which, by the theorem, is in bijection with the proper equivalence classes of primitive positive definite forms of discriminant ${D}$. By abuse of notation we will also consider ${\text{Cl}(D)}$ as that set of equivalence classes.

We also define ${h(D) = \left\lvert \text{Cl}(D) \right\rvert}$; by the exercise, ${h(D) < \infty}$. This is called the class number.

Moreover, we have ${h(D) \ge 1}$, because we can take ${x^2 - \frac{D}{4} y^2}$ for ${D \equiv 0 \pmod 4}$ and ${x^2 + xy + \frac{1-D}{4} y^2}$ for ${D \equiv 1 \pmod 4}$. We call this form the principal form.

## 3. Tables of quadratic forms

Example 10 (Examples of quadratic forms with ${h(D) = 1}$, ${D \equiv 0 \pmod 4}$)

The following discriminants have class number ${h(D) = 1}$, hence having only the principal form:

• ${D = -4}$, with form ${x^2 + y^2}$.
• ${D = -8}$, with form ${x^2 + 2y^2}$.
• ${D = -12}$, with form ${x^2+3y^2}$.
• ${D = -16}$, with form ${x^2 + 4y^2}$.
• ${D = -28}$, with form ${x^2 + 7y^2}$.

This is in fact the complete list when ${D \equiv 0 \pmod 4}$.

Example 11 (Examples of quadratic forms with ${h(D) = 1}$, ${D \equiv 1 \pmod 4}$)

The following discriminants have class number ${h(D) = 1}$, hence having only the principal form:

• ${D = -3}$, with form ${x^2 + xy + y^2}$.
• ${D = -7}$, with form ${x^2 + xy + 2y^2}$.
• ${D = -11}$, with form ${x^2 + xy + 3y^2}$.
• ${D = -19}$, with form ${x^2 + xy + 5y^2}$.
• ${D = -27}$, with form ${x^2 + xy + 7y^2}$.
• ${D = -43}$, with form ${x^2 + xy + 11y^2}$.
• ${D = -67}$, with form ${x^2 + xy + 17y^2}$.
• ${D = -163}$, with form ${x^2 + xy + 41y^2}$.

This is in fact the complete list when ${D \equiv 1 \pmod 4}$.

Example 12 (More examples of quadratic forms)

Here are tables for small discriminants with ${h(D) > 1}$. When ${D \equiv 0 \pmod 4}$ we have

• ${D = -20}$, with ${h(D) = 2}$ forms ${2x^2 + 2xy + 3y^2}$ and ${x^2 + 5y^2}$.
• ${D = -24}$, with ${h(D) = 2}$ forms ${2x^2 + 3y^2}$ and ${x^2 + 6y^2}$.
• ${D = -32}$, with ${h(D) = 2}$ forms ${3x^2 + 2xy + 3y^2}$ and ${x^2 + 8y^2}$.
• ${D = -36}$, with ${h(D) = 2}$ forms ${2x^2 + 2xy + 5y^2}$ and ${x^2 + 9y^2}$.
• ${D = -40}$, with ${h(D) = 2}$ forms ${2x^2 + 5y^2}$ and ${x^2 + 10y^2}$.
• ${D = -44}$, with ${h(D) = 3}$ forms ${3x^2 \pm 2xy + 4y^2}$ and ${x^2 + 11y^2}$.

As for ${D \equiv 1 \pmod 4}$ we have

• ${D = -15}$, with ${h(D) = 2}$ forms ${2x^2 + xy + 2y^2}$ and ${x^2 + xy + 4y^2}$.
• ${D = -23}$, with ${h(D) = 3}$ forms ${2x^2 \pm xy + 3y^2}$ and ${x^2+ xy + 6y^2}$.
• ${D = -31}$, with ${h(D) = 3}$ forms ${2x^2 \pm xy + 4y^2}$ and ${x^2 + xy + 8y^2}$.
• ${D = -39}$, with ${h(D) = 4}$ forms ${3x^2 + 3xy + 4y^2}$, ${2x^2 \pm xy + 5y^2}$ and ${x^2 + xy + 10y^2}$.

Example 13 (Even More Examples of quadratic forms)

Here are some more selected examples:

• ${D = -56}$ has ${h(D) = 4}$ forms ${x^2+14y^2}$, ${2x^2+7y^2}$ and ${3x^2 \pm 2xy + 5y^2}$.
• ${D = -108}$ has ${h(D) = 3}$ forms ${x^2+27y^2}$ and ${4x^2 \pm 2xy + 7y^2}$.
• ${D = -256}$ has ${h(D) = 4}$ forms ${x^2+64y^2}$, ${4x^2+4xy+17y^2}$ and ${5x^2\pm2xy+13y^2}$.

## 4. The Character ${\chi_D}$

We can now connect this to primes ${p}$ as follows. Earlier we played with ${Q_{\text{Fermat}} = x^2+y^2}$, and observed that for odd primes ${p}$, ${p \equiv 1 \pmod 4}$ if and only if some multiple of ${p}$ is properly represented by ${Q_{\text{Fermat}}}$.

Our generalization is as follows:

Theorem 14 (Primes represented by some quadratic form)

Let ${D < 0}$ be a discriminant, and let ${p \nmid D}$ be an odd prime. Then the following are equivalent:

• ${\left( \frac Dp \right) = 1}$, i.e. ${D}$ is a quadratic residue modulo ${p}$.
• The prime ${p}$ is (properly) represented by some reduced quadratic form in ${\text{Cl}(D)}$.

This generalizes our result for ${Q_{\text{Fermat}}}$, but note that it uses ${h(-4) = 1}$ in an essential way! That is: if ${(-1/p) = 1}$, we know ${p}$ is represented by some quadratic form of discriminant ${D = -4}$, but only since ${h(-4) = 1}$ do we know that this form reduces to ${Q_{\text{Fermat}} = x^2+y^2}$.

Proof: First suppose ${Q(x,y) = p}$ is a proper representation, where WLOG ${p \nmid 4a}$ (attainable after a proper equivalence, since ${p}$ is odd and not all of ${Q(1,0)}$, ${Q(0,1)}$, ${Q(1,1)}$ can be divisible by ${p}$). Then ${p \nmid y}$: otherwise ${ax^2 \equiv 0 \pmod p}$ would force ${x \equiv y \equiv 0 \pmod p}$, contradicting ${\gcd(x,y) = 1}$. Then

$\displaystyle 0 \equiv 4a \cdot Q(x,y) \equiv (2ax + by)^2 - Dy^2 \pmod p$

hence ${D \equiv \left( 2axy^{-1} + b \right)^2 \pmod p}$.

The converse direction is amusing: let ${m^2 = D + pk}$ for integers ${m}$, ${k}$. Consider the quadratic form

$\displaystyle Q(x,y) = px^2 + mxy + ky^2.$

It is primitive of discriminant ${D}$ and ${Q(1,0) = p}$. Now ${Q}$ may not be reduced, but that’s fine: just take the reduction of ${Q}$, which must also properly represent ${p}$. $\Box$

Thus to every discriminant ${D < 0}$ we can attach the Legendre character (also called the Kronecker character), which is a homomorphism

$\displaystyle \chi_D = \left( \tfrac{D}{\bullet} \right) : \left( \mathbb Z / D\mathbb Z \right)^\times \rightarrow \{ \pm 1 \}$

with the property that if ${p}$ is a rational prime not dividing ${D}$, then ${\chi_D(p) = \left( \frac{D}{p} \right)}$. This is abuse of notation since I should technically write ${\chi_D(p \pmod D)}$, but there is no harm done: one can check by quadratic reciprocity that if ${p \equiv q \pmod D}$ then ${\chi_D(p) = \chi_D(q)}$. Thus our previous result becomes:

Theorem 15 (${\ker(\chi_D)}$ consists of representable primes)

Let ${p \nmid D}$ be prime. Then ${p \in \ker(\chi_D)}$ if and only if some quadratic form in ${\text{Cl}(D)}$ represents ${p}$.

As a corollary of this, using the fact that ${h(-8) = h(-12) = h(-28) = 1}$ one can prove that

Corollary 16 (Fermat-type results for ${h(-4n) = 1}$)

Let ${p > 7}$ be a prime. Then ${p}$ is

• of the form ${x^2 + 2y^2}$ if and only if ${p \equiv 1, 3 \pmod 8}$.
• of the form ${x^2 + 3y^2}$ if and only if ${p \equiv 1 \pmod 3}$.
• of the form ${x^2 + 7y^2}$ if and only if ${p \equiv 1, 2, 4 \pmod 7}$.

Proof: The congruence conditions are equivalent to ${(-4n/p) = 1}$, and as before the only point is that the only reduced quadratic form for these ${D = -4n}$ is the principal one. $\Box$
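These three biconditionals are easy to spot-check numerically. The brute-force helper below (names mine, not from the post) searches for a representation ${p = x^2 + ny^2}$ directly:

```python
from math import isqrt

def rep(p, n):
    """Return (x, y) with x*x + n*y*y == p, or None if no such pair exists."""
    y = 0
    while n * y * y <= p:
        x = isqrt(p - n * y * y)
        if x * x == p - n * y * y:
            return (x, y)
        y += 1
    return None

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, isqrt(m) + 1))

# Check the corollary for all primes 7 < p < 500:
for p in filter(is_prime, range(11, 500)):
    assert (rep(p, 2) is not None) == (p % 8 in (1, 3))
    assert (rep(p, 3) is not None) == (p % 3 == 1)
    assert (rep(p, 7) is not None) == (p % 7 in (1, 2, 4))
```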

## 5. Genus theory

What if ${h(D) > 1}$? Sometimes, we can still figure out which primes go where just by taking mods.

Let ${Q \in \text{Cl}(D)}$. Then it represents some residue classes of ${(\mathbb Z/D\mathbb Z)^\times}$. In that case we call the set of residue classes represented the genus of the quadratic form ${Q}$.

Example 17 (Genus theory of ${D = -20}$)

Consider ${D = -20}$, with

$\displaystyle \ker(\chi_D) = \left\{ 1, 3, 7, 9 \right\} \subseteq (\mathbb Z/D\mathbb Z)^\times.$

We consider the two elements of ${\text{Cl}(D)}$:

• ${x^2 + 5y^2}$ represents ${1, 9 \in (\mathbb Z/20\mathbb Z)^\times}$.
• ${2x^2+2xy+3y^2}$ represents ${3, 7 \in (\mathbb Z/20\mathbb Z)^\times}$.

Now suppose for example that ${p \equiv 9 \pmod{20}}$. It must be represented by one of these two quadratic forms, but the latter form is never ${9 \pmod{20}}$ and so it must be the first one. Thus we conclude that

• ${p = x^2+5y^2}$ if and only if ${p \equiv 1, 9 \pmod{20}}$.
• ${p = 2x^2 + 2xy + 3y^2}$ if and only if ${p \equiv 3, 7 \pmod{20}}$.
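The two residue lists are quick to confirm by brute force (a throwaway check of my own):

```python
from math import gcd

def residues(a, b, c, M):
    """Residue classes in (Z/MZ)^x represented by ax^2 + bxy + cy^2."""
    return {
        (a * x * x + b * x * y + c * y * y) % M
        for x in range(M)
        for y in range(M)
        if gcd(a * x * x + b * x * y + c * y * y, M) == 1
    }

assert residues(1, 0, 5, 20) == {1, 9}    # genus of x^2 + 5y^2
assert residues(2, 2, 3, 20) == {3, 7}    # genus of 2x^2 + 2xy + 3y^2
```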

The thing that makes this work is that each genus appears exactly once. We are not always so lucky: for example when ${D = -108}$ we have that

Example 18 (Genus theory of ${D = -108}$)

The ${h(-108) = 3}$ elements of ${\text{Cl}(-108)}$ are:

• ${x^2+27y^2}$, which represents exactly the ${1 \pmod 3}$ elements of ${(\mathbb Z/D\mathbb Z)^\times}$.
• ${4x^2 \pm 2xy + 7y^2}$, which also represents exactly the ${1 \pmod 3}$ elements of ${(\mathbb Z/D\mathbb Z)^\times}$.

So the best we can conclude is that ${p = x^2+27y^2}$ OR ${p = 4x^2\pm2xy+7y^2}$ if and only if ${p \equiv 1 \pmod 3}$. This is because the distinct quadratic forms of discriminant ${-108}$ all happen to have the same genus.

We now prove that:

Theorem 19 (Genii are cosets of ${\ker(\chi_D)}$)

Let ${D}$ be a discriminant and consider the Legendre character ${\chi_D}$.

• The genus of the principal form of discriminant ${D}$ constitutes a subgroup ${H}$ of ${\ker(\chi_D)}$, which we call the principal genus.
• Any genus of a quadratic form in ${\text{Cl}(D)}$ is a coset of the principal genus ${H}$ in ${\ker(\chi_D)}$.

Proof: For the first part, we aim to show ${H}$ is multiplicatively closed. For ${D \equiv 0 \pmod 4}$, ${D = -4n}$ we use the fact that

$\displaystyle (x^2+ny^2)(u^2+nv^2) = (xu \pm nyv)^2 + n(xv \mp yu)^2.$

For ${D \equiv 1 \pmod 4}$, we instead appeal to another “magic” identity

$\displaystyle 4\left( x^2+xy+\frac{1-D}{4}y^2 \right) \equiv (2x+y)^2 \pmod D$

and it follows from here that ${H}$ is actually the set of squares in ${(\mathbb Z/D\mathbb Z)^\times}$, which is obviously a subgroup.

Now we show that other quadratic forms have genus equal to a coset of the principal genus. For ${D \equiv 0 \pmod 4}$, with ${D = -4n}$ we can write

$\displaystyle a(ax^2+bxy+cy^2) = (ax+b/2 y)^2 + ny^2$

and thus the desired coset is shown to be ${a^{-1} H}$. As for ${D \equiv 1 \pmod 4}$, we have

$\displaystyle 4a \cdot (ax^2+bxy+cy^2) = (2ax + by)^2 - Dy^2 \equiv (2ax+by)^2 \pmod D$

so the desired coset is also ${a^{-1} H}$, since ${H}$ was the set of squares. $\Box$

Thus every genus is a coset of ${H}$ in ${\ker(\chi_D)}$. Thus:

Definition 20

We define the quotient group

$\displaystyle \text{Gen}(D) = \ker(\chi_D) / H$

which is the set of all genuses in discriminant ${D}$. One can view this as an abelian group by coset multiplication.

Thus there is a natural map

$\displaystyle \Phi_D : \text{Cl}(D) \twoheadrightarrow \text{Gen}(D).$

(The map is surjective by Theorem~14.) We also remark that ${\text{Gen}(D)}$ is quite well-behaved:

Proposition 21 (Structure of ${\text{Gen}(D)}$)

The group ${\text{Gen}(D)}$ is isomorphic to ${(\mathbb Z/2\mathbb Z)^{\oplus m}}$ for some integer ${m}$.

Proof: Observe that ${H}$ contains all the squares of ${\ker(\chi_D)}$: if ${f}$ is the principal form, then ${f(t,0) = t^2}$. Thus each element of ${\text{Gen}(D)}$ has order at most ${2}$, which implies the result since ${\text{Gen}(D)}$ is a finite abelian group. $\Box$

In fact, one can compute the order of ${\text{Gen}(D)}$ exactly, but for this post I will just state the result.

Theorem 22 (Order of ${\text{Gen}(D)}$)

Let ${D < 0}$ be a discriminant, and let ${r}$ be the number of distinct odd primes which divide ${D}$. Define ${\mu}$ by:

• ${\mu = r}$ if ${D \equiv 1 \pmod 4}$.
• ${\mu = r}$ if ${D = -4n}$ and ${n \equiv 3 \pmod 4}$.
• ${\mu = r+1}$ if ${D = -4n}$ and ${n \equiv 1,2 \pmod 4}$.
• ${\mu = r+1}$ if ${D = -4n}$ and ${n \equiv 4 \pmod 8}$.
• ${\mu = r+2}$ if ${D = -4n}$ and ${n \equiv 0 \pmod 8}$.

Then ${\left\lvert \text{Gen}(D) \right\rvert = 2^{\mu-1}}$.
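Theorem 22 can be spot-checked by combining the two brute-force computations above into one self-contained script; this is a numerical sanity check of the formula, not a proof (all code mine):

```python
from math import gcd, isqrt

def reduced_forms(D):
    # enumerate reduced forms of discriminant D < 0, as before
    forms = []
    for a in range(1, isqrt(-D // 3) + 1):
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and gcd(gcd(a, abs(b)), c) == 1 \
                        and not (b < 0 and (-b == a or a == c)):
                    forms.append((a, b, c))
    return forms

def genus(form, D):
    # residues in (Z/DZ)^x represented by the form
    a, b, c = form
    M = -D
    return frozenset(
        (a * x * x + b * x * y + c * y * y) % M
        for x in range(M) for y in range(M)
        if gcd(a * x * x + b * x * y + c * y * y, M) == 1)

def mu(D):
    # the quantity mu from Theorem 22
    r = len({p for p in range(3, abs(D) + 1, 2)
             if D % p == 0 and all(p % q for q in range(3, p, 2))})
    if D % 4 == 1 or (-D // 4) % 4 == 3:
        return r
    return r + 2 if (-D // 4) % 8 == 0 else r + 1

for D in (-20, -24, -32, -44, -56, -108):
    genera = {genus(f, D) for f in reduced_forms(D)}
    assert len(genera) == 2 ** (mu(D) - 1)
```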

## 6. Composition

We have already used once the nice identity

$\displaystyle (x^2+ny^2)(u^2+nv^2) = (xu \pm nyv)^2 + n(xv \mp yu)^2.$

We are going to try and generalize this for any two quadratic forms in ${\text{Cl}(D)}$. Specifically,

Proposition 23 (Composition defines a group operation)

Let ${f,g \in \text{Cl}(D)}$. Then there is a unique ${h \in \text{Cl}(D)}$ and bilinear forms ${B_i(x,y,z,w) = a_ixz + b_ixw + c_iyz + d_iyw}$ for ${i=1,2}$ such that

• ${f(x,y) g(z,w) = h(B_1(x,y,z,w), B_2(x,y,z,w))}$.
• ${a_1b_2 - a_2b_1 = +f(1,0)}$.
• ${a_1c_2 - a_2c_1 = +g(1,0)}$.

In fact, without the latter two constraints we would instead have ${a_1b_2 - a_2b_1 = \pm f(1,0)}$ and ${a_1c_2 - a_2c_1 = \pm g(1,0)}$, and each choice of signs would yield one of four (possibly different) forms. So requiring both signs to be positive makes this operation well-defined. (This is why we like proper equivalence; it gives us a well-defined group structure, whereas with improper equivalence it would be impossible to put a group structure on the forms above.)

Taking this for granted, we then have that

Theorem 24 (Form class group)

Let ${D \equiv 0, 1 \pmod 4}$, ${D < 0}$ be a discriminant. Then ${\text{Cl}(D)}$ becomes an abelian group under composition, where

• The identity of ${\text{Cl}(D)}$ is the principal form, and
• The inverse of the form ${ax^2+bxy+cy^2}$ is ${ax^2-bxy+cy^2}$.

This group is called the form class group.

We then have a group homomorphism

$\displaystyle \Phi_D : \text{Cl}(D) \twoheadrightarrow \text{Gen}(D).$

Observe that ${ax^2 + bxy + cy^2}$ and ${ax^2 - bxy + cy^2}$ are inverses and that their ${\Phi_D}$ images coincide (being improperly equivalent); this is expressed in the fact that ${\text{Gen}(D)}$ has elements of order ${\le 2}$. As another corollary, the number of elements of ${\text{Cl}(D)}$ with a given genus is always a power of two.

We now define:

Definition 25

An integer ${n \ge 1}$ is convenient if the following equivalent conditions hold, where ${D = -4n}$:

• The principal form ${x^2+ny^2}$ is the only reduced form with the principal genus.
• ${\Phi_D}$ is injective (hence an isomorphism).
• ${h(D) = 2^{\mu-1}}$.

Thus we arrive at the following corollary:

Corollary 26 (Convenient numbers have nice representations)

Let ${n \ge 1}$ be convenient, and let ${p \nmid 4n}$ be an odd prime. Then ${p}$ is of the form ${x^2+ny^2}$ if and only if ${p}$ lies in the principal genus.

Hence the representability of ${p}$ depends only on ${p \pmod{4n}}$.

OEIS A000926 lists 65 convenient numbers. This sequence is known to be complete except for possibly one more number; moreover, the list is complete if the generalized Riemann Hypothesis holds.

## 7. Cubic and quartic reciprocity

To treat the cases where ${n}$ is not convenient, the correct thing to do is develop class field theory. However, we can still make a little bit more progress if we bring higher reciprocity theorems to bear: we’ll handle the cases ${n=27}$ and ${n=64}$, two examples of numbers which are not convenient.

### 7.1. Cubic reciprocity

First, we prove that

Theorem 27 (On ${p = x^2+27y^2}$)

A prime ${p > 3}$ is of the form ${x^2+27y^2}$ if and only if ${p \equiv 1 \pmod 3}$ and ${2}$ is a cubic residue modulo ${p}$.

To do this we use cubic reciprocity, which requires working in the Eisenstein integers ${\mathbb Z[\omega]}$ where ${\omega}$ is a cube root of unity. There are six units in ${\mathbb Z[\omega]}$ (the sixth roots of unity), hence each nonzero number has six associates (differing by a unit), and the ring is in fact a PID.

Now if we let ${\pi}$ be a prime not dividing ${3}$, and ${\alpha}$ is coprime to ${\pi}$, then we can define the cubic Legendre symbol by setting

$\displaystyle \left( \frac{\alpha}{\pi} \right)_3 \equiv \alpha^{\frac13(N\pi-1)} \pmod \pi \in \left\{ 1, \omega, \omega^2 \right\}.$

Moreover, we can define a primary prime ${\pi \nmid 3}$ to be one such that ${\pi \equiv -1 \pmod 3}$; given any prime exactly one of the six associates is primary. We then have the following reciprocity theorem:

Theorem 28 (Cubic reciprocity)

If ${\pi}$ and ${\theta}$ are distinct primary primes in ${\mathbb Z[\omega]}$ then

$\displaystyle \left( \frac{\pi}{\theta} \right)_3 = \left( \frac{\theta}{\pi} \right)_3.$

We also have the following supplementary laws: if ${\pi = (3m-1) + 3n\omega}$, then

$\displaystyle \left( \frac{\omega}{\pi} \right)_3 = \omega^{m+n} \qquad\text{and}\qquad \left( \frac{1-\omega}{\pi} \right)_3 = \omega^{2m}.$

The first supplementary law is for the unit (analogous to ${(-1/p)}$) while the second handles the prime divisors of ${3 = -\omega^2(1-\omega)^2}$ (analogous to ${(2/p)}$).

We can tie this back into ${\mathbb Z}$ as follows. If ${p \equiv 1 \pmod 3}$ is a rational prime then it is represented by ${x^2+xy+y^2}$, and thus we can put ${p = \pi \overline{\pi}}$ for some prime ${\pi}$, ${N(\pi) = p}$. Consequently, we have a natural isomorphism

$\displaystyle \mathbb Z[\omega] / \pi \mathbb Z[\omega] \cong \mathbb Z / p \mathbb Z.$

Therefore, we see that a given ${a \in (\mathbb Z/p\mathbb Z)^\times}$ is a cubic residue if and only if ${(a/\pi)_3 = 1}$.

In particular, we have the following corollary, which is all we will need:

Corollary 29 (When ${2}$ is a cubic residue)

Let ${p \equiv 1 \pmod 3}$ be a rational prime, ${p > 3}$. Write ${p = \pi \overline{\pi}}$ with ${\pi}$ primary. Then ${2}$ is a cubic residue modulo ${p}$ if and only if ${\pi \equiv 1 \pmod 2}$.

Proof: By cubic reciprocity:

$\displaystyle \left( \frac{2}{\pi} \right)_3 = \left( \frac{\pi}{2} \right)_3 \equiv \pi^{\frac13(N2-1)} \equiv \pi \pmod 2.$

$\Box$

Now we give the proof of Theorem~27. Proof: First assume

$\displaystyle p = x^2+27y^2 = \left( x+3\sqrt{-3}\, y \right)\left( x-3\sqrt{-3}\, y \right).$

Let ${\pi = x + 3 \sqrt{-3}\, y = (x+3y) + 6y\omega}$, which (after possibly negating both ${x}$ and ${y}$) we may take to be primary; note that ${\pi \equiv 1 \pmod 2}$. Now clearly ${p \equiv 1 \pmod 3}$, so we are done by the corollary.

For the converse, assume ${p \equiv 1 \pmod 3}$, ${p = \pi \overline{\pi}}$ with ${\pi}$ primary and ${\pi \equiv 1 \pmod 2}$. If we set ${\pi = a + b\omega}$ for integers ${a}$ and ${b}$, then the fact that ${\pi \equiv 1 \pmod 2}$ and ${\pi \equiv -1 \pmod 3}$ is enough to imply that ${6 \mid b}$ (check it!). Moreover,

$\displaystyle p = a^2-ab+b^2 = \left( a - \frac{1}{2} b \right)^2 + 27 \left( \frac16b \right)^2$

as desired. $\Box$
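Theorem 27 is pleasant to check numerically: by Euler's criterion, ${2}$ is a cubic residue modulo ${p \equiv 1 \pmod 3}$ exactly when ${2^{(p-1)/3} \equiv 1 \pmod p}$. A quick brute-force verification (code mine):

```python
from math import isqrt

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, isqrt(m) + 1))

def rep27(p):
    """Does p = x^2 + 27*y^2 have a solution in integers?"""
    y = 0
    while 27 * y * y <= p:
        x = isqrt(p - 27 * y * y)
        if x * x == p - 27 * y * y:
            return True
        y += 1
    return False

for p in filter(is_prime, range(5, 2000)):
    cubic = p % 3 == 1 and pow(2, (p - 1) // 3, p) == 1
    assert rep27(p) == cubic    # e.g. p = 31 = 4 + 27 qualifies
```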

### 7.2. Quartic reciprocity

This time we work in ${\mathbb Z[i]}$, which has four units ${\pm 1}$, ${\pm i}$. A prime ${\pi}$ is primary if ${\pi \equiv 1 \pmod{2+2i}}$; every prime not dividing ${2 = -i(1+i)^2}$ has a unique associate which is primary. Then we can as before define

$\displaystyle \alpha^{\frac14(N\pi-1)} \equiv \left( \frac{\alpha}{\pi} \right)_4 \pmod{\pi} \in \left\{ \pm 1, \pm i \right\}$

where ${\pi}$ is primary, and ${\alpha}$ is nonzero mod ${\pi}$. As before, if ${p \equiv 1 \pmod 4}$ and ${p = \pi\overline{\pi}}$, then ${a}$ is a quartic residue modulo ${p}$ if and only if ${\left( a/\pi \right)_4 = 1}$, thanks to the isomorphism

$\displaystyle \mathbb Z[i] / \pi \mathbb Z[i] \cong \mathbb Z / p \mathbb Z.$

Now we have

Theorem 30 (Quartic reciprocity)

If ${\pi}$ and ${\theta}$ are distinct primary primes in ${\mathbb Z[i]}$ then

$\displaystyle \left( \frac{\theta}{\pi} \right)_4 = \left( \frac{\pi}{\theta} \right)_4 (-1)^{\frac{1}{16}(N\theta-1)(N\pi-1)}.$

We also have supplementary laws that state that if ${\pi = a+bi}$ is primary, then

$\displaystyle \left( \frac{i}{\pi} \right)_4 = i^{-\frac{1}{2}(a-1)} \qquad\text{and}\qquad \left( \frac{1+i}{\pi} \right)_4 = i^{\frac14(a-b-b^2-1)}.$

Again, the first law handles units, and the second law handles the prime divisors of ${2}$. The corollary we care about this time in fact uses only the supplementary laws:

Corollary 31 (When ${2}$ is a quartic residue)

Let ${p \equiv 1 \pmod 4}$ be a prime, and put ${p = \pi\overline{\pi}}$ with ${\pi = a+bi}$ primary. Then

$\displaystyle \left( \frac{2}{\pi} \right)_4 = i^{-b/2}$

and in particular ${2}$ is a quartic residue modulo ${p}$ if and only if ${b \equiv 0 \pmod 8}$.

Proof: Write ${2 = i^3(1+i)^2}$ and apply the supplementary laws. Therefore

$\displaystyle \left( \frac{2}{\pi} \right)_4 = \left( \frac{i}{\pi} \right)_4^3 \left( \frac{1+i}{\pi} \right)_4^2 = i^{-\frac32(a-1)} \cdot i^{\frac12(a-b-b^2-1)} = i^{-(a-1) - \frac{1}{2} b(b+1)}.$

Now we assumed ${a+bi}$ is primary. We claim that

$\displaystyle a - 1 + \frac{1}{2} b^2 \equiv 0 \pmod 4.$

Since ${(a+bi)-1}$ is divisible by ${2+2i}$, its norm ${(a-1)^2+b^2}$ is divisible by ${N(2+2i)=8}$; in particular ${b^2 \equiv -(a-1)^2 \pmod 8}$. Thus

$\displaystyle 2(a-1) + b^2 \equiv 2(a-1) - (a-1)^2 \equiv -(a-1)(a-3) \equiv 0 \pmod 8$

since ${a-1}$ and ${a-3}$ are consecutive even integers. Finally,

$\displaystyle \left( \frac{2}{\pi} \right)_4 = i^{-(a-1) - \frac{1}{2} b(b+1)} = i^{-\frac{1}{2} b - \left( a-1+\frac{1}{2} b^2 \right)} = i^{-\frac{1}{2} b}.$

$\Box$

From here we quickly deduce

Theorem 32 (On ${p = x^2+64y^2}$)

If ${p > 2}$ is prime, then ${p = x^2+64y^2}$ if and only if ${p \equiv 1 \pmod 4}$ and ${2}$ is a quartic residue modulo ${p}$.
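As in the cubic case, this can be spot-checked with Euler's criterion (${2}$ is a quartic residue modulo ${p \equiv 1 \pmod 4}$ if and only if ${2^{(p-1)/4} \equiv 1 \pmod p}$); again a throwaway script of mine:

```python
from math import isqrt

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, isqrt(m) + 1))

def rep64(p):
    """Does p = x^2 + 64*y^2 have a solution in integers?"""
    y = 0
    while 64 * y * y <= p:
        x = isqrt(p - 64 * y * y)
        if x * x == p - 64 * y * y:
            return True
        y += 1
    return False

for p in filter(is_prime, range(3, 2000)):
    quartic = p % 4 == 1 and pow(2, (p - 1) // 4, p) == 1
    assert rep64(p) == quartic    # e.g. p = 73 = 9 + 64 qualifies
```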

# Formal vs Functional Series (OR: Generating Function Voodoo Magic)

Epistemic status: highly dubious. I found almost no literature doing anything quite like what follows, which unsettles me because it makes it likely that I’m overcomplicating things significantly.

## 1. Synopsis

Recently I was working on an elegant problem which was the original problem 6 for the 2015 International Math Olympiad, which reads as follows:

Problem

[IMO Shortlist 2015 Problem C6] Let ${S}$ be a nonempty set of positive integers. We say that a positive integer ${n}$ is clean if it has a unique representation as a sum of an odd number of distinct elements from ${S}$. Prove that there exist infinitely many positive integers that are not clean.

Proceeding by contradiction, one can prove (try it!) that in fact all sufficiently large integers have exactly one representation as a sum of an even number of distinct elements of ${S}$. Then, the problem reduces to the following:

Problem

Show that if ${s_1 < s_2 < \dots}$ is an increasing sequence of positive integers and ${P(x)}$ is a nonzero polynomial then we cannot have

$\displaystyle \prod_{j=1}^\infty (1 - x^{s_j}) = P(x)$

as formal series.

To see this, note that for all sufficiently large ${N}$ the coefficient of ${x^N}$ is ${1 + (-1) = 0}$. Now, the intuitive idea is obvious: the root ${1}$ appears with finite multiplicity in ${P}$ so we can put ${P(x) = (1-x)^k Q(x)}$ where ${Q(1) \neq 0}$, and then we get that ${1-x}$ on the RHS divides ${P}$ too many times, right?

Well, there are some obvious issues with this “proof”: for example, consider the equality

$\displaystyle 1 = (1-x)(1+x)(1+x^2)(1+x^4)(1+x^8) \dots.$

The right-hand side is “divisible” by ${1-x}$, but the left-hand side is not (as a polynomial).
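Degree-by-degree, though, nothing is wrong: truncating the product after ${J}$ factors gives exactly ${1 - x^{2^J}}$, so every fixed coefficient stabilizes. A quick check of my own:

```python
def polymul(A, B):
    """Multiply two polynomials given as coefficient lists."""
    C = [0] * (len(A) + len(B) - 1)
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            C[i + j] += a * b
    return C

P = [1, -1]                                   # 1 - x
for j in range(4):                            # times (1 + x^(2^j)), j = 0..3
    P = polymul(P, [1] + [0] * (2 ** j - 1) + [1])
assert P == [1] + [0] * 15 + [-1]             # telescopes to 1 - x^16
```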

But we still want to use the idea of plugging ${x \rightarrow 1^-}$, so what is the right thing to do? It turns out that this is a complete minefield, and there are a lot of very subtle distinctions that seem to not be explicitly mentioned in many places. I think I have a complete answer now, but it’s long enough to warrant this entire blog post.

Here’s the short version: there’s actually two distinct notions of “generating function”, namely a “formal series” and “functional series”. They use exactly the same notation but are two different types of objects, and this ends up being the source of lots of errors, because “formal series” do not allow substituting ${x}$, while “functional series” do.

Spoiler: we’ll need the asymptotic for the partition function ${p(n)}$.

## 2. Formal Series ${\neq}$ Functional Series

I’m assuming you’ve all heard the definition of ${\sum_k c_kx^k}$. It turns out unfortunately that this isn’t everything: there are actually two types of objects at play here. They are usually called formal power series and power series, but for this post I will use the more descriptive names formal series and functional series. I’ll do everything over ${\mathbb C}$, but one can of course use ${\mathbb R}$ instead.

The formal series is easier to describe:

Definition 1

A formal series ${F}$ is an infinite sequence ${(a_n)_n = (a_0, a_1, a_2, \dots)}$ of complex numbers. We often denote it by ${\sum a_nx^n = a_0 + a_1x + a_2x^2 + \dots}$. The set of formal series is denoted ${\mathbb C[ [x] ]}$.

This is the “algebraic” viewpoint: it’s a sequence of coefficients. Note that there is no worry about convergence issues or “plugging in ${x}$”.

On the other hand, a functional series is more involved, because it has to support substitution of values of ${x}$ and worry about convergence issues. So here are the necessary pieces of data:

Definition 2

A functional series ${G}$ (centered at zero) is a function ${G : U \rightarrow \mathbb C}$, where ${U}$ is an open disk centered at ${0}$ or ${U = \mathbb C}$. We require that there exists an infinite sequence ${(c_0, c_1, c_2, \dots)}$ of complex numbers satisfying

$\displaystyle \forall z \in U: \qquad G(z) = \lim_{N \rightarrow \infty} \left( \sum_{k=0}^N c_k z^k \right).$

(The limit is taken in the usual metric of ${\mathbb C}$.) In that case, the ${c_i}$ are unique and called the coefficients of ${G}$.

This is often written as ${G(x) = \sum_n c_n x^n}$, with the open set ${U}$ suppressed.

Remark 3

Some remarks on the definition of functional series:

• This is enough to imply that ${G}$ is holomorphic (and thus analytic) on ${U}$.
• For experts: note that I’m including the domain ${U}$ as part of the data required to specify ${G}$, which makes the presentation cleaner. Most sources do something with “radius of convergence”; I will blissfully ignore this, leaving this data implicitly captured by ${U}$.
• For experts: perhaps non-standard, but I require ${U \neq \{0\}}$; otherwise I can't take derivatives, etc.

Thus formal and functional series, despite having the same notation, have different types: a formal series ${F}$ is a sequence, while a functional series ${G}$ is a function that happens to be expressible as an infinite sum within its domain.

Of course, from every functional series ${G}$ we can extract its coefficients and make them into a formal series ${F}$. So, for lack of better notation:

Definition 4

If ${F = (a_n)_n}$ is a formal series, and ${G : U \rightarrow \mathbb C}$ is a functional series whose coefficients equal ${F}$, then we write ${F \simeq G}$.

## 3. Finite operations

Now that we have formal and functional series, we can define sums. Since these are different types of objects, we will have to run definitions in parallel and then ideally check that they respect ${\simeq}$.

For formal series:

Definition 5

Let ${F_1 = (a_n)_n}$ and ${F_2 = (b_n)_n}$ be formal series. Then we set

\displaystyle \begin{aligned} (a_n)_n \pm (b_n)_n &= (a_n \pm b_n)_n \\ (a_n)_n \cdot (b_n)_n &= \left( \textstyle\sum_{j=0}^n a_jb_{n-j} \right)_n. \end{aligned}

This makes ${\mathbb C[ [x] ]}$ into a ring, with additive identity ${(0,0,0,\dots)}$ and multiplicative identity ${(1,0,0,\dots)}$.

We also define the derivative of ${F = (a_n)_n}$ by ${F' = ((n+1)a_{n+1})_n}$.

It’s probably more intuitive to write these definitions as

\displaystyle \begin{aligned} \sum_n a_n x^n \pm \sum_n b_n x^n &= \sum_n (a_n \pm b_n) x^n \\ \left( \sum_n a_n x^n \right) \left( \sum_n b_n x^n \right) &= \sum_n \left( \sum_{j=0}^n a_jb_{n-j} \right) x^n \\ \left( \sum_n a_n x^n \right)' &= \sum_n na_n x^{n-1} \end{aligned}

and in what follows I'll start to use ${\sum_n a_nx^n}$ more. But officially, all definitions for formal series are in terms of the coefficients alone; the presence of ${x}$ serves as motivation only.

Exercise 6

Show that if ${F = \sum_n a_nx^n}$ is a formal series, then it has a multiplicative inverse if and only if ${a_0 \neq 0}$.
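The "if" direction of the exercise is really an algorithm: the coefficients ${b_n}$ of the inverse can be solved for one at a time from the convolution ${\sum_{j=0}^n a_j b_{n-j} = [n = 0]}$. A sketch in exact rational arithmetic (function name mine):

```python
from fractions import Fraction

def formal_inverse(a, N):
    """First N+1 coefficients of 1/F, where F has coefficient list a, a[0] != 0."""
    assert a[0] != 0
    b = [Fraction(1) / a[0]]
    for n in range(1, N + 1):
        # sum_{j=0}^n a_j b_{n-j} = 0 for n >= 1; solve for b_n
        conv = sum((a[j] * b[n - j] for j in range(1, min(n, len(a) - 1) + 1)),
                   Fraction(0))
        b.append(-conv / a[0])
    return b

# 1/(1 - x) = 1 + x + x^2 + ...
assert formal_inverse([1, -1], 5) == [1, 1, 1, 1, 1, 1]
# 1/(1 - x)^2 = 1 + 2x + 3x^2 + ...
assert formal_inverse([1, -2, 1], 4) == [1, 2, 3, 4, 5]
```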

On the other hand, with functional series, the above operations are even simpler:

Definition 7

Let ${G_1 : U \rightarrow \mathbb C}$ and ${G_2 : U \rightarrow \mathbb C}$ be functional series with the same domain ${U}$. Then ${G_1 \pm G_2}$ and ${G_1 \cdot G_2}$ are defined pointwise.

If ${G : U \rightarrow \mathbb C}$ is a functional series (hence holomorphic), then ${G'}$ is defined as its complex derivative.

If ${G}$ is nonvanishing on ${U}$, then ${1/G : U \rightarrow \mathbb C}$ is defined pointwise (and otherwise is not defined).

Now, for these finite operations, everything works as you expect:

Theorem 8 (Compatibility of finite operations)

Suppose ${F}$, ${F_1}$, ${F_2}$ are formal series, and ${G}$, ${G_1}$, ${G_2}$ are functional series ${U \rightarrow \mathbb C}$. Assume ${F \simeq G}$, ${F_1 \simeq G_1}$, ${F_2 \simeq G_2}$.

• ${F_1 \pm F_2 \simeq G_1 \pm G_2}$, ${F_1 \cdot F_2 \simeq G_1 \cdot G_2}$.
• ${F' \simeq G'}$.
• If ${1/G}$ is defined, then ${1/F}$ is defined and ${1/F \simeq 1/G}$.

So far so good: as long as we’re doing finite operations. But once we step beyond that, things begin to go haywire.

## 4. Limits

We need to start considering limits of ${(F_k)_k}$ and ${(G_k)_k}$, since we are trying to make progress towards infinite sums and products. Once we do this, things start to burn.

Definition 9

Let ${F_1 = \sum_n a_n x^n}$ and ${F_2 = \sum_n b_n x^n}$ be formal series, and define the distance between them by

$\displaystyle d(F_1, F_2) = \begin{cases} 2^{-n} & a_n \neq b_n, \; n \text{ minimal} \\ 0 & F_1 = F_2. \end{cases}$

This function makes ${\mathbb C[[x]]}$ into a metric space, so we can discuss limits in this space. Actually, it is even a normed vector space, with norm ${\left\lVert F \right\rVert = d(F,0)}$.

Thus, ${\lim_{k \rightarrow \infty} F_k = F}$ if each coefficient of ${x^n}$ eventually stabilizes as ${k \rightarrow \infty}$. For example, as formal series the sequence ${(1,-1,0,0,\dots)}$, ${(1,0,-1,0,\dots)}$, ${(1,0,0,-1,\dots)}$, \dots\ converges to ${1 = (1,0,0,0,\dots)}$, which we write as

$\displaystyle \lim_{k \rightarrow \infty} (1 - x^k) = 1 \qquad \text{as formal series}.$
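Concretely, with a formal series represented as a coefficient function, the metric looks like this (the truncation bound ${N}$ is my own artifact of making the comparison finite):

```python
def formal_dist(F, G, N=64):
    """2^(-n) for the first n < N where the coefficients differ, else 0."""
    for n in range(N):
        if F(n) != G(n):
            return 2.0 ** (-n)
    return 0.0

one = lambda n: 1 if n == 0 else 0                 # the formal series 1
Fk = lambda k: lambda n: {0: 1, k: -1}.get(n, 0)   # the formal series 1 - x^k

# d(1 - x^k, 1) = 2^(-k) -> 0, witnessing lim_k (1 - x^k) = 1:
assert [formal_dist(Fk(k), one) for k in (1, 2, 3)] == [0.5, 0.25, 0.125]
```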

As for functional series, since they are functions on the same open set ${U}$, we can use pointwise convergence or the stronger uniform convergence; we’ll say explicitly which one we’re doing.

Example 10 (Limits don’t work at all)

In what follows, ${F_k \simeq G_k}$ for every ${k}$.

• Here is an example showing that if ${\lim_k F_k = F}$, the functions ${G_k}$ may not converge even pointwise. Indeed, just take ${F_k = 1 - x^k}$ as before, and let ${U = \{ z : |z| < 2 \}}$.
• Here is an example showing that even if ${G_k \rightarrow G}$ uniformly, ${\lim_k F_k}$ may not exist. Take ${G_k = 1 - 1/k}$ as constant functions. Then ${G_k \rightarrow 1}$, but ${\lim_k F_k}$ doesn't exist, because the constant term ${1 - 1/k}$ never stabilizes.
• The following example from this math.SE answer by Robert Israel shows that it’s possible that ${F = \lim_k F_k}$ exists, and ${G_k \rightarrow G}$ pointwise, and still ${F \not\simeq G}$. Let ${U}$ be the open unit disk, and set

\displaystyle \begin{aligned} A_k &= \{z = r e^{i\theta} \mid 2/k \le r \le 1, \; 0 \le \theta \le 2\pi - 1/k\} \\ B_k &= \left\{ |z| \le 1/k \right\} \end{aligned}

for ${k \ge 1}$. By Runge's theorem there is a polynomial ${p_k(z)}$ such that

$\displaystyle |p_k(z) - 1/z^{k}| < 1/k \text{ on } A_k \qquad \text{and} \qquad |p_k(z)| < 1/k \text{ on }B_k.$

Then

$\displaystyle G_k(z) = z^{k+1} p_k(z)$

is the desired counterexample (with ${F_k}$ being the sequence of coefficients of ${G_k}$). Indeed by construction ${\lim_k F_k = 0}$, since ${\left\lVert F_k \right\rVert \le 2^{-k}}$ for each ${k}$. Alas, ${|G_k(z) - z| \le 2/k}$ for ${z \in A_k \cup B_k}$, and every ${z \in U}$ lies in ${A_k \cup B_k}$ for all large ${k}$; so ${G_k}$ converges pointwise to the identity function ${G(z) = z}$.

To be fair, we do have the following saving grace:

Theorem 11 (Uniform convergence and both limits exist is sufficient)

Suppose ${G_k \rightarrow G}$ uniformly, ${F_k \simeq G_k}$ for every ${k}$, and ${\lim_k F_k = F}$. Then ${F \simeq G}$.

Proof: Here is a proof, adapted from this math.SE answer by Joey Zhou. Write ${G(z) = \sum_k a_k z^k}$ and ${g_n(z) = \sum_k a^{(n)}_k z^k}$ for ${G_n}$. It suffices to show that the ${a_k}$ agree with the coefficients of ${F}$. Choose any ${r > 0}$ with ${\left\{ |z| \le r \right\} \subseteq U}$. By Cauchy's integral formula, we have

\displaystyle \begin{aligned} \left|a_k - a^{(n)}_k\right| &= \left|\frac{1}{2\pi i} \int\limits_{|z|=r}{\frac{G(z)-g_n(z)}{z^{k+1}}\text{ d}z}\right| \\ & \le\frac{1}{2\pi}(2\pi r)\frac{1}{r^{k+1}}\max\limits_{|z|=r}{|G(z)-g_n(z)|} \xrightarrow{n\rightarrow\infty} 0 \end{aligned}

since ${g_n}$ converges uniformly to ${G}$ on ${U}$. Hence ${a_k = \lim\limits_{n\rightarrow\infty}{a^{(n)}_k}}$ for each ${k}$. But ${\lim_n F_n = F}$ means that each fixed coefficient ${a^{(n)}_k}$ equals the ${k}$th coefficient of ${F}$ for all sufficiently large ${n}$; hence the coefficients of ${F}$ and ${G}$ agree, i.e. ${F \simeq G}$. $\Box$

The take-away from this section is that limits are relatively poorly behaved.

## 5. Infinite sums and products

Naturally, infinite sums and products are defined by taking the limit of partial sums and partial products. The following example (from math.SE again) shows the nuances of this behavior.

Example 12 (On ${e^{1+x}}$)

The expression

$\displaystyle \sum_{n=0}^\infty \frac{(1+x)^n}{n!} = \lim_{N \rightarrow \infty} \sum_{n=0}^N \frac{(1+x)^n}{n!}$

does not make sense as a formal series: we observe that for every ${N}$ the constant term of the partial sum changes.

But this does converge (uniformly, even) to a functional series on ${U = \mathbb C}$, namely to ${e^{1+x}}$.

Exercise 13

Let ${(F_k)_{k \ge 1}}$ be formal series.

• Show that an infinite sum ${\sum_{k=1}^\infty F_k(x)}$ converges as formal series exactly when ${\lim_k \left\lVert F_k \right\rVert = 0}$.
• Assume for convenience ${F_k(0) = 1}$ for each ${k}$. Show that an infinite product ${\prod_{k=1}^{\infty} F_k}$ converges as formal series exactly when ${\lim_k \left\lVert F_k - 1 \right\rVert = 0}$.

Now the upshot is that one example of a convergent formal sum is the expression ${\lim_{N} \sum_{n=0}^N a_nx^n}$ itself! This means we can use standard “radius of convergence” arguments to transfer a formal series into a functional one.

Theorem 14 (Constructing ${G}$ from ${F}$)

Let ${F = \sum c_nx^n}$ be a formal series and let

$\displaystyle r = \frac{1}{\limsup_n \sqrt[n]{|c_n|}}.$

If ${r > 0}$ then there exists a functional series ${G}$ on ${U = \{ |z| < r \}}$ such that ${F \simeq G}$.

Proof: Let ${F_k}$ and ${G_k}$ be the corresponding partial sums of ${c_0x^0}$ to ${c_kx^k}$. Then by the Cauchy–Hadamard theorem, we have ${G_k \rightarrow G}$ uniformly on (compact subsets of) ${U}$. Also, ${\lim_k F_k = F}$ by construction. $\Box$

This works less well with products: for example we have

$\displaystyle 1 \equiv (1-x) \prod_{j \ge 0} (1+x^{2^j})$

as formal series, but we can’t “plug in ${x=1}$”, for example.

## 6. Finishing the original problem

We finally return to the original problem: we wish to show that the equality

$\displaystyle P(x) = \prod_{j=1}^\infty (1 - x^{s_j})$

cannot hold as formal series. We know that tacitly, this just means

$\displaystyle \lim_{N \rightarrow \infty} \prod_{j=1}^N\left( 1 - x^{s_j} \right) = P(x)$

as formal series.

Here is a solution, obtained by considering only coefficients, presented by Qiaochu Yuan in this MathOverflow question.

Both sides have constant coefficient ${1}$, so we may invert them; thus it suffices to show we cannot have

$\displaystyle \frac{1}{P(x)} = \frac{1}{\prod_{j=1}^{\infty} (1 - x^{s_j})}$

as formal power series.

The coefficients on the LHS grow asymptotically like a polynomial times an exponential.

On the other hand, the coefficients of the RHS can be shown to have growth both strictly larger than any polynomial (by truncating the product) and strictly smaller than any exponential (by comparing to the growth rate in the case where ${s_j = j}$, which gives the partition function ${p(n)}$ mentioned before). So the two rates of growth can’t match.

# New algebra handouts on my website

For olympiad students: I have now published some new algebra handouts. They are:

• Introduction to Functional Equations, which covers the basic techniques and theory for FE’s typically appearing on olympiads like USA(J)MO.
• Monsters, an advanced handout which covers functional equations that have pathological solutions. It covers in detail the solutions to the Cauchy functional equation.
• Summation, which is a compilation of various types of olympiad-style sums like generating functions and multiplicative number theory.
• English, notes on proof-writing that I used at the 2016 MOP (Mathematical Olympiad Summer Program).

You can download all these (and other handouts) from my MIT website. Enjoy!

# Miller-Rabin (for MIT 18.434)

This is a transcript of a talk I gave as part of MIT’s 18.434 class, the “Seminar in Theoretical Computer Science” as part of MIT’s communication requirement. (Insert snarky comment about MIT’s CI-* requirements here.) It probably would have made a nice math circle talk for high schoolers but I felt somewhat awkward having to present it to a bunch of students who were clearly older than me.

## 1. Preliminaries

### 1.1. Modular arithmetic

In middle school you might have encountered questions such as

Exercise 1

What is ${3^{2016} \pmod{10}}$?

You could answer such questions by listing out ${3^n}$ for small ${n}$ and then finding a pattern, in this case of period ${4}$. However, for large moduli this “brute-force” approach can be time-consuming.

Fortunately, it turns out that one can predict the period in advance.

Theorem 2 (Euler’s little theorem)

1. Let ${\gcd(a,n) = 1}$. Then ${a^{\phi(n)} \equiv 1 \pmod n}$.
2. (Fermat) If ${p}$ is a prime, then ${a^p \equiv a \pmod p}$ for every ${a}$.

Proof: Part (1) is a special case of Lagrange’s theorem: if ${G}$ is a finite group and ${g \in G}$, then ${g^{|G|}}$ is the identity element. Here we take ${G = (\mathbb Z/n\mathbb Z)^\times}$. Part (2) follows by taking ${n=p}$ (the case ${p \mid a}$ being clear). $\Box$
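These theorems already let a computer dispose of Exercise 1 instantly. Here is a quick check in Python; the brute-force totient below is just an illustrative choice, fine for small ${n}$.

```python
from math import gcd

def phi(n: int) -> int:
    """Euler's totient function, by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Euler's theorem: a^phi(n) == 1 (mod n) whenever gcd(a, n) == 1.
assert phi(10) == 4
assert pow(3, 4, 10) == 1

# Hence 3^2016 mod 10 depends only on 2016 mod 4 = 0:
print(pow(3, 2016, 10))  # -> 1
```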

Thus, in the middle school problem we know in advance that ${3^4 \equiv 1 \pmod{10}}$ because ${\phi(10) = 4}$. This bound is sharp for primes:

Theorem 3 (Primitive roots)

For every ${p}$ prime there’s a ${g \pmod p}$ such that ${g^{p-1} \equiv 1 \pmod p}$ but ${g^{k} \not\equiv 1 \pmod p}$ for any ${k < p-1}$. (Hence ${(\mathbb Z/p\mathbb Z)^\times \cong \mathbb Z/(p-1)}$.)

For a proof, see the last exercise of my orders handout.

We will define the following anyway:

Definition 4

We say an integer ${n}$ (thought of as an exponent) annihilates the prime ${p}$ if

• ${a^n \equiv 1 \pmod p}$ for every ${a \not\equiv 0 \pmod p}$,
• or equivalently, ${p-1 \mid n}$.

Theorem 5 (All/nothing)

Suppose an exponent ${n}$ does not annihilate the prime ${p}$. Then more than ${\frac{1}{2} p}$ of ${x \pmod p}$ satisfy ${x^n \not\equiv 1 \pmod p}$.

Proof: A much stronger result is true: if ${x^n \equiv 1 \pmod p}$ then ${x^{\gcd(n,p-1)} \equiv 1 \pmod p}$. By the theorem on primitive roots, the latter has exactly ${\gcd(n,p-1) \le \frac{p-1}{2}}$ solutions. $\Box$

### 1.2. Repeated Exponentiation

Even without the previous facts, one can still do:

Theorem 6 (Repeated exponentiation)

Given ${x}$ and ${n}$, one can compute ${x^n \pmod N}$ with ${O(\log n)}$ multiplications mod ${N}$.

The idea is that to compute ${x^{600} \pmod N}$, one writes ${600 = 512+64+16+8}$ and multiplies ${x^{512} \cdot x^{64} \cdot x^{16} \cdot x^{8}}$. Each ${x^{2^k}}$ is obtained by repeated squaring in ${k}$ steps, and ${k \le \log_2 n}$.
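A minimal sketch of this square-and-multiply idea (the function name is my own):

```python
def pow_mod(x: int, n: int, N: int) -> int:
    """Compute x^n mod N with O(log n) multiplications (square-and-multiply)."""
    result = 1
    base = x % N
    while n > 0:
        if n & 1:               # current binary digit of n is 1
            result = result * base % N
        base = base * base % N  # advance from x^(2^k) to x^(2^(k+1))
        n >>= 1
    return result

assert pow_mod(7, 600, 1000) == pow(7, 600, 1000)
```

Python’s built-in three-argument `pow` does exactly this, so in practice one just calls `pow(x, n, N)`.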

### 1.3. Chinese remainder theorem

In the middle school problem, we might have noticed that to compute ${3^{2016} \pmod{10}}$, it suffices to compute it modulo ${5}$, because we already know it is odd. More generally, to understand ${x \pmod n}$ it suffices to understand ${x}$ modulo each of the prime powers dividing ${n}$.

The formal statement, which we include for completeness, is:

Theorem 7 (Chinese remainder theorem)

Let ${p_1}$, ${p_2}$, …, ${p_m}$ be distinct primes, ${e_i \ge 1}$ integers, and ${n = p_1^{e_1} \cdots p_m^{e_m}}$. Then there is a ring isomorphism given by the natural projection

$\displaystyle \mathbb Z/n \rightarrow \prod_{i=1}^m \mathbb Z/p_i^{e_i}.$

In particular, a random choice of ${x \pmod n}$ amounts to a random choice of ${x}$ mod each prime power.

For an example, in the following table we see the natural bijection between ${x \pmod{15}}$ and ${(x \pmod 3, x \pmod 5)}$.

$\displaystyle \begin{array}{c|cc} x \pmod{15} & x \pmod{3} & x \pmod{5} \\ \hline 0 & 0 & 0 \\ 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 0 & 3 \\ 4 & 1 & 4 \\ 5 & 2 & 0 \\ 6 & 0 & 1 \\ 7 & 1 & 2 \end{array} \quad \begin{array}{c|cc} x \pmod{15} & x \pmod{3} & x \pmod{5} \\ \hline 8 & 2 & 3 \\ 9 & 0 & 4 \\ 10 & 1 & 0 \\ 11 & 2 & 1 \\ 12 & 0 & 2 \\ 13 & 1 & 3 \\ 14 & 2 & 4 \\ && \end{array}$
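The reconstruction direction of the theorem can be sketched as follows; the helper name is mine, and `pow(M, -1, m)` for modular inverses needs Python 3.8+.

```python
def crt(residues, moduli):
    """Reconstruct x mod prod(moduli) from x mod each modulus (moduli coprime)."""
    N = 1
    for m in moduli:
        N *= m
    x = 0
    for r, m in zip(residues, moduli):
        M = N // m
        # pow(M, -1, m) is the inverse of M modulo m (Python 3.8+)
        x += r * M * pow(M, -1, m)
    return x % N

# Rows of the table above: 7 <-> (1 mod 3, 2 mod 5), 8 <-> (2 mod 3, 3 mod 5).
assert crt([1, 2], [3, 5]) == 7
assert crt([2, 3], [3, 5]) == 8
```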

## 2. The RSA algorithm

This simple number theory is enough to develop the so-called RSA algorithm. Suppose Alice wants to send Bob a message ${M}$ over an insecure channel. They can do so as follows.

• Bob selects integers ${d}$, ${e}$ and ${N}$ (with ${N}$ huge) such that ${N}$ is a semiprime and

$\displaystyle de \equiv 1 \pmod{\phi(N)}.$

• Bob publishes both the number ${N}$ and ${e}$ (the public key) but keeps the number ${d}$ secret (the private key).
• Alice sends the number ${X = M^e \pmod N}$ across the channel.
• Bob computes

$\displaystyle X^d \equiv M^{de} \equiv M^1 \equiv M \pmod N$

and hence obtains the message ${M}$.

In practice, the ${N}$ in RSA is at least ${2000}$ bits long.
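The protocol above can be sketched with deliberately tiny numbers; the primes 61, 53 and exponent 17 are purely illustrative, whereas real keys use primes of over a thousand bits each.

```python
# Toy RSA with tiny primes (real N would be thousands of bits).
p, q = 61, 53
N = p * q                  # 3233, the public modulus (a semiprime)
phi = (p - 1) * (q - 1)    # 3120; computing this requires knowing p and q
e = 17                     # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)        # private exponent: d*e == 1 (mod phi)

M = 1234                   # Alice's message
X = pow(M, e, N)           # sent over the insecure channel
assert pow(X, d, N) == M   # Bob recovers M
```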

The trick is that, as far as we know, an adversary cannot compute ${d}$ from ${e}$ and ${N}$ without knowing the prime factorization of ${N}$. So the security relies heavily on the difficulty of factoring.

Remark 8

It turns out that we basically don’t know how to factor large numbers ${N}$: the best known classical algorithms can factor an ${n}$-bit number in

$\displaystyle O\left( \exp\left( \left( \frac{64}{9} n \right)^{1/3} \left( \log n \right)^{2/3} \right) \right)$

time (“general number field sieve”). On the other hand, with a quantum computer one can do this in ${O\left( n^2 \log n \log \log n \right)}$ time.

## 3. Primality Testing

Main question: if we can’t factor a number ${n}$ quickly, can we at least check it’s prime?

In what follows, we assume for simplicity that ${n}$ is squarefree, i.e. ${n = p_1 p_2 \dots p_k}$ for distinct primes ${p_i}$. This doesn’t substantially change anything, but it makes my life much easier.

### 3.1. Co-RP

Here is the goal: we need to show there is a random algorithm ${A}$ which does the following.

• If ${n}$ is composite, then
  • more than half the time ${A}$ says “definitely composite”;
  • occasionally, ${A}$ says “possibly prime”.
• If ${n}$ is prime, ${A}$ always says “possibly prime”.

If there is a polynomial time algorithm ${A}$ that does this, we say that PRIMES is in Co-RP. Clearly, this is a very good thing to be true!

### 3.2. Fermat

One idea is to try to use the converse of Fermat’s little theorem: given an integer ${n}$, pick a random number ${x \pmod n}$ and see if ${x^{n-1} \equiv 1 \pmod n}$. (We compute using repeated exponentiation.) If not, then we know for sure ${n}$ is not prime, and we call ${x}$ a Fermat witness modulo ${n}$.

How good is this test? For most composite ${n}$, pretty good:

Proposition 9

Let ${n}$ be composite. Assume that there is a prime ${p \mid n}$ such that ${n-1}$ does not annihilate ${p}$. Then over half the numbers mod ${n}$ are Fermat witnesses.

Proof: Apply the Chinese remainder theorem and then the “all-or-nothing” theorem. $\Box$

Unfortunately, if ${n}$ doesn’t satisfy the hypothesis, then all the ${x}$ with ${\gcd(x,n) = 1}$ satisfy ${x^{n-1} \equiv 1 \pmod n}$!

Are there such ${n}$ which aren’t prime? Unfortunately, yes: such numbers are called Carmichael numbers, and the first one is ${561 = 3 \cdot 11 \cdot 17}$.
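A one-line check that ${561}$ really does foil the Fermat test:

```python
from math import gcd

n = 561  # = 3 * 11 * 17, the smallest Carmichael number
# Every x coprime to n passes the Fermat test, even though n is composite:
assert all(pow(x, n - 1, n) == 1 for x in range(1, n) if gcd(x, n) == 1)
```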

Remark 10

For ${X \gg 1}$, there are more than ${X^{1/3}}$ Carmichael numbers at most ${X}$.

Thus these numbers are very rare, but they foil the Fermat test.

Exercise 11

Show that a Carmichael number is not a semiprime.

### 3.3. Rabin-Miller

Fortunately, we can adapt the Fermat test to cover Carmichael numbers too. It comes from the observation that if ${n}$ is prime, then ${a^2 \equiv 1 \pmod n \implies a \equiv \pm 1 \pmod n}$.

So let ${n-1 = 2^s t}$, where ${t}$ is odd. For example, if ${n = 561}$ then ${560 = 2^4 \cdot 35}$. Then we compute ${x^t}$, ${x^{2t}}$, …, ${x^{n-1}}$. For example in the case ${n=561}$ and ${x=245}$:

$\displaystyle \begin{array}{c|r|rrr} & \mod 561 & \mod 3 & \mod 11 & \mod 17 \\ \hline x & 245 & -1 & 3 & 7 \\ \hline x^{35} & 122 & -1 & \mathbf 1 & 3 \\ x^{70} & 298 & \mathbf 1 & 1 & 9 \\ x^{140} & 166 & 1 & 1 & -4 \\ x^{280} & 67 & 1 & 1 & -1 \\ x^{560} & 1 & 1 & 1 & \mathbf 1 \end{array}$

And there we have our example: ${67^2 \equiv 1 \pmod{561}}$ but ${67 \not\equiv \pm 1 \pmod{561}}$, so ${561}$ isn’t prime.

So the Rabin-Miller test works as follows:

• Given ${n}$, select a random ${x}$ and compute powers of ${x}$ as in the table.
• If ${x^{n-1} \not\equiv 1}$, stop, ${n}$ is composite (Fermat test).
• If ${x^{n-1} \equiv 1}$, look at the entry just before the first ${1}$ in the list. If it isn’t ${-1}$, then we say ${x}$ is an RM-witness and ${n}$ is composite.
• Otherwise, ${n}$ is “possibly prime”.
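The four steps above can be sketched compactly; the function name and the choice of 20 random rounds are my own.

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Rabin-Miller: False means definitely composite, True means probably prime."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Write n - 1 = 2^s * t with t odd.
    s, t = 0, n - 1
    while t % 2 == 0:
        s, t = s + 1, t // 2
    for _ in range(rounds):
        x = random.randrange(2, n)
        y = pow(x, t, n)
        if y in (1, n - 1):
            continue                 # this x reveals nothing
        for _ in range(s - 1):
            y = y * y % n
            if y == n - 1:
                break                # -1 appeared just before the first 1
        else:
            return False             # x is an RM-witness (or a Fermat witness)
    return True

assert not is_probable_prime(561)    # Carmichael, but caught by Rabin-Miller
assert is_probable_prime(10**9 + 7)
```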

How reliable is “possibly prime”?

Theorem 12

If ${n}$ is Carmichael, then over half the ${x \pmod n}$ are RM witnesses.

Proof: Again we sample ${x \pmod n}$ by picking ${x}$ modulo each prime (Chinese remainder theorem). Using the theorem on primitive roots, one can show that the probability the first ${-1}$ appears in any given row is at most ${\frac{1}{2}}$. This implies the conclusion. $\Box$

Exercise 13

Improve the ${\frac{1}{2}}$ in the problem to ${\frac34}$ by using the fact that Carmichael numbers aren’t semiprime.

### 3.4. AKS

On August 6, 2002, it was in fact shown that PRIMES is in P, using the deterministic AKS algorithm. However, in practice everyone still uses Miller-Rabin since the implied constants in the AKS runtime are large.

# Mechanism Design and Revenue Equivalence

Happy Pi Day! I have an economics midterm on Wednesday, so here is my attempt at studying.

## 1. Mechanisms

The idea is as follows.

• We have ${N}$ people and a seller who wants to auction off a power drill.
• The ${i}$th person has a private value of at most ${\$1000}$ on the power drill. We denote it by ${x_i \in [0,1000]}$.
• However, everyone knows the ${x_i}$ are distributed according to some measure ${\mu_i}$ supported on ${[0, 1000]}$. (let’s say a Radon measure, but I don’t especially care). Tacitly we assume ${\mu_i([0,1000]) = 1}$.

Definition 1

Consider a game ${M}$ played as follows:

• Each player ${i=1, \dots, N}$ makes a bid ${b_i}$ (which depends on how much they value the object)
• Based on all the bids ${\vec b = \left( b_1, \dots, b_N \right)}$, each player has a chance ${Q_i(\vec b) \in [0,1]}$ of actually obtaining the object. We call ${Q = \{Q_i\}_{i=1}^N}$ the allocation function and require ${\sum_i Q_i(\vec b) \le 1}$.
• Based on all the bids ${\vec b = \left( b_1, \dots, b_N \right)}$, each player makes a payment ${P_i(\vec b) \in \mathbb R_{\ge 0}}$. We call ${P = \{P_i\}_{i=1}^N}$ the payment function. Note that a player might have to pay even if they don’t get the drill!
• The utility of the ${i}$th player is

$\displaystyle U_i(\vec b) = Q_i(\vec b) \cdot x_i - P_i(\vec b)$

i.e. the expected chance they get the power drill minus the amount which they pay.

We call the triple ${(P,Q,\mu)}$ a mechanism.

For experts: we require that each ${P_i}$ and ${Q_i}$ is measurable. Right now this is not a very good definition, because there are no assumptions on what ${P}$ and ${Q}$ look like. Nonetheless, we’ll give some examples.

## 2. Examples of mechanisms

In the auction that you’d see in real life, we usually set

$\displaystyle Q = Q_{\text{highest}} \overset{\mathrm{def}}{=} \text{highest bidder wins}$

which is the simple rule that the highest bidder gets the power drill with probability ${1}$; if there is a tie, we pick one of the highest bidders at random.

In all the examples that follow, for simplicity let’s take the case ${N=2}$, and call the two players Anna and Elsa. We assume the values Anna and Elsa place on the drill are each uniformly distributed on ${[0,1000]}$. Finally, we give the auctioneer a name, say Hans.

### 2.1. First-price auction

The first-price auction is the one you’ve probably heard of: each player makes a bid and

• ${Q = Q_{\text{highest}}}$, and
• ${P}$ is defined by requiring the winner to pay their bid.

How do Anna and Elsa behave in this auction? Clearly no one will bid more than they think the drill is worth, because then they stand to lose utility if they happen to win the auction. But the truth is they actually will bid less than they think the drill is worth.

For concreteness, suppose Anna values the drill at ${\$700}$. It obviously doesn’t make sense for Anna to bid more than ${\$700}$. But perhaps she should bid ${\$699}$ and save a dollar: after all, what’s the chance that Elsa would bid right between ${\$699}$ and ${\$700}$? For that matter, Anna knows that Elsa has a ${50\%}$ chance of valuing the drill at less than ${\$500}$. So if Anna bids ${\$500}$, she has at least a ${50\%}$ chance of saving at least ${\$200}$; it makes no sense for her to bid her true ${\$700}$ value.

It gets better. Anna knows that Elsa isn’t stupid, and isn’t going to bid ${\$500}$ even if her true value is ${\$500}$. That is, Elsa is going to try to sell Hans short as well. Given this, Anna can play the game of cheating Hans more aggressively, and so on ad infinitum.

Of course there’s a way to capture this idea of “I know that you know that I know that you know. . .”: we just compute the Nash equilibrium.

Proposition 2 (Nash equilibrium of the first-price auction for ${N=2}$)

Consider a first-price auction where Anna and Elsa have values uniformly distributed in ${[0, 1000]}$. Suppose both Anna and Elsa bid ${x/2}$ if they have value ${x}$. Then this is a Nash equilibrium.

Proof: Suppose Anna values the drill at ${a}$ and wants to make a bid ${x}$. Elsa values the drill at ${e}$, and follows the equilibrium by bidding ${e/2}$. For a bid of ${x}$, Anna gets utility

$\displaystyle \begin{cases} a-x & x>e/2 \\ 0 & \text{otherwise} \end{cases}.$

The probability of winning with a bid of ${x}$ is thus ${\min(1, 2x/1000)}$ (this is the probability that ${e < 2x}$), so the expected utility is

$\displaystyle (a-x)\min\left( 1, \frac{2x}{1000} \right) \le \frac{2}{1000} x(a-x).$

Hence we see ${x = a/2}$ maximizes Anna’s expected utility. $\Box$

The first-price auction is interesting because both players “lie” when bidding in the Nash equilibrium. For this reason we say that the first-price auction is not incentive compatible.

Just for interest, let’s compute how much money Hans is going to make off the drill in this equilibrium. The amount paid to him is equal to

$\displaystyle \mathbf E_{a,e} \left( \max(a/2, e/2) \right) = \frac{1000}{3}.$

To see this we used the fact that if two numbers in ${[0,1]}$ are chosen at random, the expected value of the larger is ${\frac23}$. Multiplying by ${1000/2 = 500}$ gives the answer: Hans expects to make about ${\$333}$.

### 2.2. Second-price auction

The second-price auction is the other one you’ve probably heard of: each player makes a bid and

• ${Q = Q_{\text{highest}}}$, and
• ${P}$ is defined by requiring the winner to pay the smallest amount needed to win, i.e. the second highest bid.

The fundamental difference is that in a second-price auction, a player “doesn’t need to be shy”: if they place a large bid, they don’t have to worry about possibly paying it.

Another way to think about it is as a first-price auction with the property that the winning player can retroactively change their bid, provided they still win the auction. So unlike before, there is no disadvantage to being honest.

Indeed, the second-price auction is incentive compatible in a very strong sense: bidding your true value is the best thing to do regardless of whether your opponents are playing optimally.

Proposition 3 (Second-price auctions are incentive compatible)

In a second-price auction, bidding truthfully is a weakly dominant strategy.

Proof: Easy. Check it. $\Box$

Just for interest, let’s compute how much money Hans is going to make off the drill in this equilibrium. This time the amount paid to him is equal to

$\displaystyle \mathbf E_{a,e} \left( \min(a, e) \right) = \frac{1000}{3}.$

Here we used the fact that if two numbers in ${[0,1]}$ are chosen at random, the expected value of the smaller is ${\frac13}$. This might come as a surprise: the expected revenue is about ${\$333}$ in this auction too.

### 2.3. All-pay auction

The all-pay auction is like lobbying. Each player makes a bid, and

• ${Q = Q_{\text{highest}}}$, and
• ${P}$ is defined by requiring everyone to pay their bid, regardless of whether they win the power drill or not.

This is clearly not incentive compatible. In fact, the Nash equilibrium is as follows:

Proposition 4 (Nash equilibrium of the all-pay auction)

Consider an all-pay auction where Anna and Elsa have values uniformly distributed in ${[0, 1000]}$. Suppose both Anna and Elsa bid ${\frac{1}{2} \cdot 1000(x/1000)^2}$ if they have value ${x}$. Then this is a Nash equilibrium.

Proof: Omitted, but a fun and not-hard exercise if you like integrals. It will follow from a later result. $\Box$

Just for interest, let’s compute how much money Hans is going to make off the drill in this equilibrium. This time the amount paid to him is equal to

$\displaystyle \mathbf E_{a,e} \left( \frac{a^2}{2000} + \frac{e^2}{2000} \right) = 2\int_{0}^{1000} \frac{x^2}{2000} \; \frac{dx}{1000} = \frac{1000}{3}.$

Surprise — same value again! This is a very special case of the Revenue Equivalence Theorem later.
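A Monte Carlo sanity check of all three revenues; the seed and trial count are arbitrary choices of mine, and the equilibrium bids are the ones derived above.

```python
import random

random.seed(0)
TRIALS = 200_000

first = second = allpay = 0.0
for _ in range(TRIALS):
    a, e = random.uniform(0, 1000), random.uniform(0, 1000)
    first += max(a, e) / 2                   # winner pays their bid of value/2
    second += min(a, e)                      # winner pays the losing (truthful) bid
    allpay += a**2 / 2000 + e**2 / 2000      # both pay their equilibrium bids

# All three averages hover around 1000/3 = 333.33...
print(first / TRIALS, second / TRIALS, allpay / TRIALS)
```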

### 2.4. Extortion

We’ve seen three examples that all magically gave ${\$333}$ as the expected gain. So here’s a silly counterexample to show that not every auction is going to give ${\$333}$ as an expected gain. It blatantly abuses the fact that we’ve placed almost no assumptions on ${P}$ and ${Q}$:

• ${Q = Q_{\text{highest}}}$, or any other ${Q}$ for that matter, and
• ${P}$ is defined by requiring both Anna and Elsa to give ${\$1000000}$ to Hans.

This isn’t very much an auction at all: more like Hans just extracting money from Anna and Elsa. Hans is very happy with this arrangement, Anna and Elsa not so much. So we want an assumption on our auctions to prevent this silly example:

Definition 5

A mechanism ${(M, \sigma)}$ is voluntary (or individually rational) if ${u_i(x_i) \ge 0}$ for every ${x_i \in [0,1000]}$.

### 2.5. Second-price auction with reserve

Here is a less stupid example of how Hans can make more money. The second-price auction with reserve is the same as the second-price auction, except Hans himself also places a bid of ${R = 500}$. Thus if no one bids more than ${R}$, the item is not sold.

For the same reason as in the usual second-price auction, bidding truthfully is optimal for each player. The cases are:

• If both Anna and Elsa bid less than ${\$500}$, no one gains anything.
• If both Anna and Elsa bid more than ${\$500}$, the higher bidder wins and pays the lower bid.
• If exactly one player bids more than ${\$500}$, that player wins and pays the reserve price of ${\$500}$.

So Hans suffers some loss in the first case, but earns some extra money in the last case (when compared to the traditional second-price auction). It turns out that if you do the computation, then Hans gets an expected profit of

$\displaystyle \frac{1250}{3} \approx \$417$

meaning he earns another ${\$80}$ or so by setting a reserve price.
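A quick simulation confirms the ${1250/3}$ figure; the seed and trial count are again arbitrary.

```python
import random

random.seed(0)
TRIALS = 200_000
R = 500  # Hans's reserve bid

total = 0.0
for _ in range(TRIALS):
    a, e = random.uniform(0, 1000), random.uniform(0, 1000)
    if max(a, e) >= R:               # the drill is sold
        total += max(min(a, e), R)   # winner pays max(other bid, reserve)

print(total / TRIALS)  # hovers around 1250/3 = 416.67
```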

## 3. Direct mechanisms

As it stands, ${b_i}$ might depend in complicated ways on the actual values ${x_i}$: for example in the first-price auction. We can capture this formalism as follows.

Definition 6

A direct mechanism is a pair ${M = (M, \sigma)}$ where

• ${M = (P,Q, \mu)}$ is a mechanism,
• ${\sigma = \{\sigma_i\}_{i=1}^N}$ is a Nash equilibrium of bidding strategies for the bidders. So in this equilibrium the ${i}$th player will bid ${\sigma_i(x_i)}$.

If ${\sigma = \mathrm{id}}$, meaning ${\sigma_i(x) \equiv x}$ for every ${i}$, then we say ${M}$ is incentive compatible.

So in other words, I’m equipping the mechanism ${M}$ with a particular Nash equilibrium ${\sigma}$. This is not standard, but I think it is harder to state the theorems in a non-confusing form otherwise.

Definition 7

Let ${M = (M, \sigma)}$ be a direct mechanism. Then we define ${p_i(x)}$, ${q_i(x)}$, ${u_i(x)}$ by

$\displaystyle u_i(x) = \mathbb E\left[ U_i(\vec b) \mid x_i = x \text{ and everyone follows } \sigma \right]$

For example, ${u_1(x)}$ is the expected utility of the 1st player conditioned on them having value ${x}$ for the power drill:

$\displaystyle u_1(x) = \int_{x_2=0}^{1000} \dots \int_{x_N=0}^{1000} U_1(\sigma_1(x), \sigma_2(x_2), \dots, \sigma_N(x_N)) \; d\mu_2 \dots d\mu_N.$

Similarly, let

$\displaystyle \begin{aligned} p_i(x) &= \mathbb E\left[ P_i(\vec b) \mid x_i = x \text{ and everyone follows }\sigma \right] \\ q_i(x) &= \mathbb E\left[ Q_i(\vec b) \mid x_i = x \text{ and everyone follows }\sigma \right]. \end{aligned}$

Note that ${p_i}$, ${q_i}$, ${u_i}$ depend not only on the mechanism ${M}$ itself but also on the attached equilibrium ${\sigma}$.

It’s important to realize the functions ${p_i}$, ${q_i}$, ${u_i}$ carry much more information than just ${P}$, ${Q}$, ${U}$. All three functions depend on the equilibrium strategy ${\sigma}$, which in turn depend on both ${P}$ and ${Q}$. Moreover, all three functions also depend on ${\mu}$. Hence for example ${q}$ actually depends indirectly on ${P}$ as well, because the choice of ${P}$ affects the resulting equilibrium ${\sigma}$.

Example 8 (Example of ${\sigma}$, ${q_i}$, ${p_i}$)

Let’s take the first-price auction with two players Anna and Elsa. If we call it ${M = (M, \sigma)}$ then as we described before we have:

• ${N = 2}$,
• ${\mu_1}$ and ${\mu_2}$ are uniform distributions over ${[0,1000]}$,
• ${P_i}$ is defined by having player ${i}$ pay their bid upon winning.
• ${Q = Q_{\text{highest}}}$

Moreover, the Nash equilibrium ${\sigma = (\sigma_1, \sigma_2)}$ is given by

$\displaystyle \sigma_i(x) = x/2$

for all ${x}$, as we saw earlier. Consequently, we have

• ${q_i(x) = x/1000}$ since the probability of winning (and hence winning the bid) is proportional to the value placed on the item (since ${\mu}$ is a uniform distribution).
• ${p_i(x) = x/2 \cdot x/1000}$ as the expected payment: there’s a ${x/1000}$ chance of winning, and a payment of ${x/2}$ if you do.

## 4. Equivalence of utility and payment

In what follows let ${i}$ be any index.

We now prove that

Lemma 9 (Envelope theorem)

Assume ${M = (M, \sigma)}$ is a direct mechanism. Then ${u_i}$ is convex, and

$\displaystyle u_i'(x) = q_i(x)$

except at the (at most countably many) points where ${u_i}$ is not differentiable.

Proof: Since ${\sigma}$ is an equilibrium, we have

$\displaystyle x \cdot q_i(x) - p_i(x) = u_i(x) \ge x \cdot q_i(b) - p_i(b) \qquad \forall b \in [0,1000],$

i.e. there is no benefit in lying and bidding ${b}$ rather than ${x}$.

First, let’s show that if ${u_i}$ is differentiable at ${x}$, then ${u_i'(x) = q_i(x)}$. We have that

$\displaystyle \lim_{h \rightarrow 0^+} \frac{u_i(x+h)-u_i(x)}{h} \ge \lim_{h \rightarrow 0^+} \frac{\left[ (x+h)q_i(x)-p_i(x) \right] - \left[ x \cdot q_i(x)-p_i(x) \right]}{h} = q_i(x).$

Similarly

$\displaystyle \lim_{h \rightarrow 0^+} \frac{u_i(x)-u_i(x-h)}{h} \le \lim_{h \rightarrow 0^+} \frac{\left[ x \cdot q_i(x)-p_i(x) \right] - \left[ (x-h)q_i(x)-p_i(x) \right]}{h} = q_i(x).$

This implies the limit, if it exists, must be ${q_i(x)}$. We’ll omit the proof that ${u_i}$ is differentiable almost everywhere, remarking that it follows from ${q_i(x)}$ being nondecreasing in ${x}$ (proved later). $\Box$

Theorem 10 (Utility equivalence)

Let ${(M, \sigma)}$ be a direct mechanism. For any ${x}$,

$\displaystyle u_i(x) = u_i(0) + \int_0^x q_i(t) \; dt.$

Proof: Fundamental theorem of calculus. $\Box$

Theorem 11 (Payment equivalence)

Let ${(M, \sigma)}$ be a direct mechanism. For any ${x}$,

$\displaystyle p_i(x) = p_i(0) + x \cdot q_i(x) - \int_0^x q_i(t) \; dt.$

Thus ${p}$ is determined by ${q}$ up to a constant shift.

Proof: Use ${u_i(x) = x \cdot q_i(x) - p_i(x)}$ and ${u_i(0) = - p_i(0)}$. $\Box$

This means that both functions ${p}$ and ${u}$ are completely determined, up to a constant shift, by the expected allocation ${q}$ in the equilibrium ${\sigma}$.
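As a sanity check, here is Payment equivalence verified numerically against the first-price numbers from Example 8 later below is premature; rather, against the first-price example already computed (${q_i(x) = x/1000}$, ${p_i(x) = x^2/2000}$, ${p_i(0) = 0}$). The midpoint-rule integrator is just an illustrative choice.

```python
def q(x):
    return x / 1000  # expected allocation in the two-player first-price auction

def p_from_q(x, steps=10_000):
    """Payment equivalence: p(x) = p(0) + x*q(x) - integral_0^x q(t) dt."""
    dt = x / steps
    integral = sum(q((k + 0.5) * dt) for k in range(steps)) * dt  # midpoint rule
    return x * q(x) - integral  # p(0) = 0 here

# Matches the expected payment x^2/2000 computed directly for the first-price auction.
assert abs(p_from_q(700.0) - 700.0**2 / 2000) < 1e-6
```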

## 5. Revenue equivalence

A corollary:

Corollary 12 (Revenue equivalence)

Let ${M = (M, \sigma)}$ be a direct mechanism. Then the expected revenue of the auctioneer, namely,

$\displaystyle \mathbf E_{\vec x} \sum_{i=1}^N p_i(x_i)$

depends only on ${q_i}$ and ${p_i(0)}$.

Very often, textbooks will add the additional requirement that ${u_i(0) = 0}$ or ${p_i(0) = 0}$, in which case the statements become slightly simpler, as the constant shifts go away.

Here are the two important corollaries, which as far as I can tell are never both stated.

Corollary 13 (Revenue equivalence for incentive compatible mechanisms)

If ${M = (M, \mathrm{id})}$ is incentive compatible, then ${u_i(x)}$ and ${p_i(x)}$ (and hence the seller’s revenue) depend only on the allocation function ${Q}$ and distributions ${\mu}$, up to a constant shift.

Proof: In an incentive compatible situation where ${\sigma_i(x) = x}$ we have

$\displaystyle q_1(x) = \int_{x_2=0}^{1000} \dots \int_{x_N=0}^{1000} Q_1(x, x_2, \dots, x_N) \; d\mu_2 d\mu_3 \dots d\mu_N$

so ${q_1}$ depends only on ${Q}$ and ${\mu}$, and likewise for every other ${q_i}$. The claim then follows from utility and payment equivalence. $\Box$

Corollary 14 (Revenue equivalence for ${Q_{\text{highest}}}$ auctions)

Suppose ${M = (M, \sigma)}$ is a mechanism in which

• The allocation function is ${Q = Q_{\text{highest}}}$,
• The ${\sigma_i(x)}$ is strictly increasing in ${x}$ for all ${i}$ (players with higher values bid more).

Then ${u(x)}$ and ${p(x)}$ are determined up to constant shifts by ${\mu}$.

Proof: By the assumption on ${\sigma}$ we have

$\displaystyle Q_{\text{highest}}( \sigma_1(x_1), \dots, \sigma_N(x_N) ) = Q_{\text{highest}}\left( x_1, \dots, x_N \right)$

so it follows for example that

$\displaystyle \begin{aligned} q_1(x) &= \int_{x_2=0}^{1000} \dots \int_{x_N=0}^{1000} Q_1(\sigma_1(x), \sigma_2(x_2), \dots, \sigma_N(x_N)) \; d\mu_2 \dots d\mu_N \\ &= \int_{x_2=0}^{1000} \dots \int_{x_N=0}^{1000} Q_1(x, x_2, \dots, x_N) \; d\mu_2 \dots d\mu_N. \end{aligned}$

Once again ${q_i}$ now depends only on ${\mu}$. $\Box$

As an application, we can actually use the revenue equivalence theorem to compute the equilibrium strategies of the first-price, second-price, and all-pay auctions with ${n \ge 2}$ players.

Corollary 15 (Nash equilibria of common ${Q_{\text{highest}}}$ auctions)

Suppose a player has value ${x \in [0,1000]}$ as in our setup, and that the prior ${\mu}$ is distributed uniformly. Each of the following is a Nash equilibrium:

• In a first-price auction, bid ${\frac{n-1}{n} x}$.
• In a second-price auction, bid ${x}$ (i.e. bid truthfully).
• In an all-pay auction, bid ${1000 \cdot \frac{n-1}{n} (x/1000)^n}$.

Proof: First, as we saw already the second-price auction has an equilibrium where everyone bids truthfully. In this case, the probability of winning is ${(x/1000)^{n-1}}$ and the expected payment when winning is ${\frac{n-1}{n} x}$ (this is the expected value of the largest of ${n-1}$ numbers in ${[0,x]}$.) Now by revenue equivalence, we have

$\displaystyle p_i^{\mathrm{all}}(x) = p_i^{\mathrm{I}}(x) = p_i^{\mathrm{II}}(x) = 1000 \cdot \frac{n-1}{n} \left( x/1000 \right)^n.$

Now we examine the all-pay and first-price auction.

• We have ${p_i^{\mathrm{all}}(x) = 1000 \cdot \frac{n-1}{n} \left( x/1000 \right)^n}$, i.e. in the equilibrium strategy for the all-pay auction, a player with value ${x}$ pays on average ${1000 \cdot \frac{n-1}{n} \left( x/1000 \right)^n}$. But the payment in an all-pay auction is always the bid, hence the conclusion.
• We have ${p_i^{\mathrm{I}}(x) = p_i^{\mathrm{II}}(x)}$, and since in both cases the chance of paying at all is ${(x/1000)^{n-1}}$, the payment if a player does win is ${\frac{n-1}{n} x}$; hence the equilibrium strategy is to bid ${\frac{n-1}{n}x}$.

$\Box$

## 6. Design feasibility

By now we have seen that all our mechanisms really depend mostly on the functions ${q_i}$ (from which we can then compute ${p_i}$ and ${u_i}$), while we have almost completely ignored the parameters ${P_i}$, ${Q_i}$, ${\sigma}$ which give rise to the mechanism ${(M, \sigma)}$ in the first place.

We would like to continue doing this, and therefore, we want a way to reverse the work earlier: given the functions ${q_i}$, construct a mechanism ${(M, \sigma)}$ with those particular expected allocations. The construction need not even be explicit; we will be content just knowing such a mechanism exists.

Thus, the critical question is: which functions ${q_i}$ can actually arise? The answer is that the only real constraint is that the ${q_i}$ are nondecreasing.

Theorem 16 (Feasibility rule)

Consider ${N}$ players with a fixed distribution ${\{\mu_i\}_{i=1}^N}$ of private values. Consider any measurable functions ${q_1, \dots, q_N : [0,1000] \rightarrow \mathbb R_{\ge 0}}$.

Then there exists a direct mechanism ${(M, \sigma)}$ with ${q_i}$ as the expected allocations if and only if

• each function ${q_i}$ is nondecreasing.
• ${q_1(x_1) + \dots + q_N(x_N) \le 1}$ with probability ${1}$.

Proof: First, we show that any ${q_i}$ arising from a direct mechanism are nondecreasing. Indeed, if ${x > y}$ then we have the inequalities

$\displaystyle \begin{aligned} x \cdot q_i(x) - p_i(x) &\ge x \cdot q_i(y) - p_i(y) \\ y \cdot q_i(y) - p_i(y) &\ge y \cdot q_i(x) - p_i(x). \end{aligned}$

If we add these inequalities we recover ${(x-y)q_i(x) \ge (x-y)q_i(y)}$, which shows that ${q_i}$ is nondecreasing. The second condition just reflects that ${\sum Q_i(\vec b) \le 1}$.

Conversely, suppose the ${q_i}$ are given nondecreasing functions. We will construct an incentive compatible mechanism inducing them. First, define ${p_i(x) = x \cdot q_i(x) - \int_0^x q_i(t) \; dt}$ as predicted by payment equivalence. Then define ${M = (P,Q, \mu)}$ by

• ${P_i(x_1, \dots, x_N) = p_i(x_i)}$, and
• ${Q_i(x_1, \dots, x_N) = q_i(x_i)}$. (Note ${\sum_i Q_i(x_i) \le 1}$.)

Then trivially ${p_i^M = p_i}$ and ${q_i^M = q_i}$.

However, we haven’t specified a ${\sigma}$ yet! This is the hard part, but the crucial claim is that we can pick ${\sigma = \mathrm{id}}$: that is, ${(M, \mathrm{id})}$ is an incentive compatible direct mechanism.

Thus we need to check for all ${x}$ and ${y}$ that

$\displaystyle u_i(x) \ge x \cdot q_i(y) - p_i(y)$

We just do ${x > y}$ since the other case is analogous; then the inequality we want to prove rearranges as

\displaystyle \begin{aligned} \iff u_i(x) - u_i(y) &\ge (x-y) \cdot q_i(y) \\ \iff \int_y^x q_i(t) \; dt &\ge \int_y^x q_i(y) \; dt \end{aligned}

Since ${q_i}$ is nondecreasing, this is immediate. $\Box$
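The payment formula in this proof can be checked numerically. Here is a small Python sketch of my own (not from the original text): for a toy nondecreasing allocation ${q}$, the rule ${p(x) = x \cdot q(x) - \int_0^x q(t) \; dt}$ makes truthful reporting optimal.

```python
# Numerical sanity check of the payment rule from the proof above.
# The allocation q below is a toy choice (my assumption), not canonical.

def q(t):
    # a nondecreasing allocation probability on [0, 1000]
    return min(t / 1000.0, 0.8)

def integral_q(x, steps=2000):
    # midpoint-rule approximation of the integral of q from 0 to x
    if x <= 0:
        return 0.0
    h = x / steps
    return sum(q((k + 0.5) * h) for k in range(steps)) * h

def p(y):
    # payment rule predicted by Revenue Equivalence
    return y * q(y) - integral_q(y)

def utility(x, y):
    # expected utility of a player with value x who reports y
    return x * q(y) - p(y)

# truthful reporting should (approximately) maximize utility
grid = [10 * k for k in range(101)]
for x in [100, 400, 750]:
    best = max(grid, key=lambda y: utility(x, y))
    assert utility(x, x) >= utility(x, best) - 1e-6
```

The asserts pass for the same reason as in the proof: ${u(x) - u(y) = \int_y^x q(t) \; dt \ge (x-y) q(y)}$ whenever ${q}$ is nondecreasing.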

In particular, we can now state:

Corollary 17 (Individually Rational)

A direct mechanism ${(M, \sigma)}$ is voluntary if and only if

$\displaystyle u_i(0) \ge 0 \iff p_i(0) \le 0$

for every ${i}$.

Proof: Since ${u_i' = q_i \ge 0}$, the function ${u_i}$ is nonnegative everywhere if and only if ${u_i(0) \ge 0}$. $\Box$

## 7. Optimal auction

Since we’re talking about optimal auctions, we now restrict our attention to auctions in which ${p_i(0) = 0}$ for every ${i}$ (hence the auction is voluntary).

Now, we want to find the amount of money we expect to extract from player ${i}$. Let's compute the expected revenue given a function ${q_i}$ and distribution ${\mu_i}$. Of course, we know that

$\displaystyle \mathbf E_{x_i}\left[ p_i(x_i) \right] = \int_{t=0}^{1000} p_i(t) \; d\mu_i$

but we also know by revenue equivalence that we want to instead use the function ${q_i}$, rather than the function ${p_i}$. So let’s instead use our theorem for the integral to compute it:

$\displaystyle \mathbf E_{x_i}\left[ p_i(x_i) \right] = p_i(0) + \mathbf E_{x_i}\left[ x_i \cdot q_i(x_i) \right] - \mathbf E_{x_i}\left[ \int_0^{x_i} q_i(t) \; dt \right]$

Applying the definition of expected value, switching the order of integration, and using ${p_i(0) = 0}$,

\displaystyle \begin{aligned} \mathbf E_{x_i}\left[ p_i(x_i) \right] &= p_i(0) + \int_{t=0}^{1000} t \cdot q_i(t) \; d\mu_i - \int_{x=0}^{1000} \left( \int_0^{x} q_i(t) \; dt \right) d\mu_i \\ &= \int_{t=0}^{1000} t \cdot q_i(t) \; d\mu_i - \int_{t=0}^{1000} q_i(t) \left( \int_{x=t}^{1000} d\mu_i \right) dt \\ &= \int_{t=0}^{1000} t \cdot q_i(t) \; d\mu_i - \int_{t=0}^{1000} q_i(t) \cdot \mu_i([t, 1000]) \; dt \\ &= \int_{t=0}^{1000} t \cdot q_i(t) \; d\mu_i - \int_{t=0}^{1000} q_i(t) \cdot \frac{1-\mu_i([0, t])} {\frac{d\mu_i}{dt}(t)} \; d\mu_i \\ &= \int_{t=0}^{1000} q_i(t) \left( t - \frac{1-\mu_i([0,t])} {\frac{d\mu_i}{dt}(t)} \right) d\mu_i \end{aligned}

Thus, we define for the player ${i}$ their virtual valuation to be the thing that we just obtained in this computation:

$\displaystyle \psi_i(t) = t - \frac{1-\mu_i([0,t])}{\frac{d\mu_i}{dt}(t)}.$

Thus

$\displaystyle \mathbf E_{x_i}\left[ p_i(x_i) \right] = \int_{t=0}^{1000} q_i(t) \cdot \psi_i(t) \; d\mu_i. \ \ \ \ \ (1)$

The virtual valuation ${\psi_i(t)}$ can be thought of as the expected value of the amount of money we extract if we give the object to player ${i}$ when she has type ${t}$. Note it has the very important property of not depending on ${q_i}$.
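To make this concrete, here is a quick Python computation (mine, not the author's) of the virtual valuation when ${\mu_i}$ is uniform on ${[0,1000]}$; it matches the closed form ${2t - 1000}$ derived later in Corollary 20.

```python
# Virtual valuation psi(t) = t - (1 - F(t)) / f(t) for Uniform[0, 1000],
# where F(t) = mu([0, t]) is the CDF and f its density.

def psi_uniform(t):
    F = t / 1000.0       # CDF of Uniform[0, 1000]
    f = 1.0 / 1000.0     # density
    return t - (1.0 - F) / f

# agrees with the closed form 2t - 1000 (so psi vanishes at t = 500)
for t in [0, 250, 500, 750, 1000]:
    assert abs(psi_uniform(t) - (2 * t - 1000)) < 1e-9
```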

Now, let’s observe that

\displaystyle \begin{aligned} \sum_{i=1}^N \mathbf E_{x_i} \left[ p_i(x_i) \right] &= \sum_{i=1}^N \int_{t=0}^{1000} q_i(t) \psi_i(t) \; d\mu_i \\ &= \sum_{i=1}^N \int_{x_1=0}^{1000} \dots \int_{x_N=0}^{1000} Q_i(\vec x) \psi_i(x_i) \; d\mu_1 d\mu_2 \dots d\mu_N \\ &= \int_{x_1=0}^{1000} \dots \int_{x_N=0}^{1000} \sum_{i=1}^N \left( Q_i(\vec x) \psi_i(x_i) \right) \; d\mu_1 d\mu_2 \dots d\mu_N. \end{aligned}

Now, our goal is to select the function ${Q}$ to maximize this. Consider a particular point ${\vec x = (x_1, \dots, x_N)}$. Then we're trying to pick ${Q_1(\vec x)}$, ${Q_2(\vec x)}$, …, ${Q_N(\vec x)}$ to maximize the value of

$\displaystyle Q_1(\vec x) \psi_1(x_1) + Q_2(\vec x) \psi_2(x_2) + \dots + Q_N(\vec x) \psi_N(x_N)$

subject to ${\sum_i Q_i(\vec x) \le 1}$. The ${\psi_i}$'s here depend only on the distribution ${\mu}$. This is a linear optimization problem over a simplex, so the solution is clear: put all the weight on the largest ${\psi_i(x_i)}$, provided it is positive. In other words, if ${k = \mathrm{argmax}_i \psi_i(x_i)}$ and ${\psi_k(x_k) > 0}$, then set ${Q_k(\vec x) = 1}$ and all the other coefficients to zero, breaking ties arbitrarily. To be explicit, we set

$\displaystyle Q_k(\vec x) = \begin{cases} 1 & \psi_k(x_k) > 0 \text{ is maximal (ties broken arbitrarily)} \\ 0 & \text{else}. \end{cases}$

So we should think of this as a second-price auction with discriminatory reserve prices. Moreover, the “second-price” payment is done based on ${\psi_i}$, so rather than “pay second highest bid” we instead have “pay smallest amount needed to win”, i.e. the smallest ${y}$ such that ${\psi_k(y) \ge \psi_j(x_j)}$ for all ${j \neq k}$.
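To illustrate, here is a short Python sketch of my own of this allocation and payment rule, assuming each ${\psi_i}$ is strictly increasing (the regularity condition defined next) so that the payment threshold can be inverted by bisection.

```python
# Sketch of the optimal allocation/payment rule described above.
# Assumes each psi_i is strictly increasing on [0, 1000] (regularity).

def run_optimal_auction(values, psis, lo=0.0, hi=1000.0):
    """values[i] is player i's report; psis[i] its virtual valuation."""
    scores = [psi(x) for psi, x in zip(psis, values)]
    k = max(range(len(values)), key=lambda i: scores[i])
    if scores[k] <= 0:
        return None, 0.0   # no positive virtual valuation: item unsold
    # winner pays the smallest y with psi_k(y) >= max(0, best rival score)
    threshold = max([0.0] + [s for i, s in enumerate(scores) if i != k])
    a, b = lo, hi
    for _ in range(60):    # bisection, valid since psi_k is increasing
        mid = (a + b) / 2
        if psis[k](mid) >= threshold:
            b = mid
        else:
            a = mid
    return k, b

# Two symmetric uniform players: this reduces to a second-price auction
# with reserve 500, since psi(x) = 2x - 1000.
psi = lambda x: 2 * x - 1000
winner, pay = run_optimal_auction([700.0, 600.0], [psi, psi])
# winner is player 0, paying (about) the second bid, 600
```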

Unfortunately, since the ${\mu_i}$ are arbitrary, the resulting ${Q_i}$ might be strange enough that the game fails to have a reasonable strategy ${\sigma}$; in other words, we worry the maximum might not be achievable. The easiest thing to do is write down a condition ruling this out:

Definition 18

We say ${\mu = \{\mu_i\}_{i=1}^N}$ is regular if the virtual valuations ${\psi_i : [0,1000] \rightarrow \mathbb R}$ are strictly increasing for all ${i}$.

Theorem 19 (Regularity implies optimal auction is achievable)

Assume regularity of ${\mu}$. Consider a second-price auction with discriminatory reserve prices: the reserve price for player ${i}$ is the smallest ${x}$ such that ${\psi_i(x) > 0}$, and the winner ${k}$ pays the smallest amount needed to win.

This is an incentive compatible mechanism which maximizes the expected revenue of the auctioneer.

Proof: The ${Q}$ described in the theorem is the one we mentioned earlier. The hypothesis defines ${P}$ as follows:

• If ${\psi_k(x_k) > 0}$ is maximal, then that player wins and pays the smallest amount ${y}$ such that ${\psi_k(y)}$ still exceeds all other ${\psi_j(x_j)}$.
• Otherwise, the item is not sold.

The fact that this mechanism ${M = (P,Q,\mu)}$ is incentive compatible is more or less the same as before (bidding truthfully is a weakly dominant strategy). Moreover we already argued above that this allocation ${Q}$ maximizes revenue. $\Box$

You can and should think of this as a “reserve price second price auction” except with virtual valuations instead. The winner is the player with the highest virtual valuation, who is then allowed to retroactively change their bid so long as they still win.

To see this in action:

Corollary 20 (Optimal symmetric uniform auction)

Consider an auction in which ${n}$ players have uniform values in ${[0,1000]}$. Then the optimal auction is a second-price auction with reserve price ${500}$. The expected earning of the auctioneer is

$\displaystyle 1000 \cdot \frac{n - 1 + \left( \frac{1}{2} \right)^n}{n+1}.$

Proof: We compute each ${\psi_i}$ as

$\displaystyle \psi_i(x) = x - \frac{1 - \mu_i([0,x])}{\frac{d\mu_i}{dt}(x)} = x - \frac{1-x/1000}{1/1000} = 2x - 1000.$

Hence, we usually want to award the item to the player with the largest virtual valuation (i.e. the highest bidder), setting the reserve price at ${500}$ for everyone since ${\psi_i(500) = 0}$. By (1) the expected payment from a player equals

$\displaystyle \frac{1}{1000} \int_{x=0}^{1000} \psi_i(x) \cdot q_i(x) \; dx = \frac{1}{1000}\int_{x=500}^{1000} (2x-1000) \cdot (x/1000)^{n-1} \; dx$

Simplifying and multiplying by ${n}$ gives the answer. $\Box$
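Since closed forms like this are easy to botch, here is a quick Monte Carlo check in Python (my own addition, not part of the proof): simulate the second-price auction with reserve ${500}$ and compare against the displayed formula.

```python
# Monte Carlo check of the expected revenue from Corollary 20:
# a second-price auction with reserve 500 and n uniform [0, 1000] bidders.

import random

def simulate_revenue(n, trials=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = sorted(rng.uniform(0, 1000) for _ in range(n))
        if bids[-1] >= 500:  # reserve met: winner pays max(reserve, 2nd bid)
            total += max(500.0, bids[-2] if n > 1 else 0.0)
    return total / trials

n = 3
closed_form = 1000 * (n - 1 + 0.5 ** n) / (n + 1)   # = 531.25 for n = 3
estimate = simulate_revenue(n)
assert abs(estimate - closed_form) < 5.0            # within sampling noise
```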

More generally, in an asymmetric situation the optimal reserve prices are discriminatory and vary from player to player. As a nice exercise to get used to this:

Exercise 21

Find the optimal auction mechanism if two players are bidding, with one player having value distributed uniformly in ${[0,500]}$ and the other player having value distributed uniformly in ${[0,1000]}$.

## 8. Against revelation

Here’s a pedagogical point: almost all sources state early on the so-called “revelation principle”.

Proposition 22 (Revelation principle)

Let ${(M, \sigma)}$ be a direct mechanism. Then there exists a mechanism ${M^\ast}$ such that

• ${(M^\ast, \mathrm{id})}$ is an incentive compatible direct mechanism, and
• For any ${i}$, ${p_i^\ast = p_i}$, ${q_i^\ast = q_i}$, ${u_i^\ast = u_i}$; i.e. at the equilibrium point, the expected payments and allocations don't change.

Proof: Given the original mechanism ${(M, \sigma)}$, the new mechanism ${M^\ast}$ is played as follows:

• Each player ${i}$ has an advisor to whom they tell their value ${x_i}$.
• The advisor figures out the optimal bid ${\sigma_i(x_i)}$ in ${M}$, and submits this bid on the player's behalf.
• The game ${M}$ is played with the advisor’s bids, and the payments / allocations are given to corresponding players.

Clearly the player should be truthful to their advisor. $\Box$

Most sources go on to say that this makes their life easier, because now they can instead say “it suffices to study incentive compatible mechanisms”. For example, one can use this to

• Replace “direct mechanism” with “incentive-compatible direct mechanism”.
• Replace “there exists a direct mechanism ${(M, \sigma)}$” with “there exists a ${(P,Q)}$ which is incentive compatible”.

However, I personally think this is awful. Here is why.

The proof of Revelation is almost tautological. Philosophically, it says that you should define the functions ${q_i}$, ${p_i}$, ${u_i}$ in terms of the equilibrium ${\sigma}$. Authors who restrict to incentive compatible mechanisms are hiding this fact behind Revelation: now it is no longer clear to the reader whether ${u_i}$ takes a bid or a value as input.

Put another way, the concepts of bid and value should be kept separate. That's why I use ${x_i}$ only for values, ${b_i}$ only for bids, and build the equilibrium ${\sigma}$ into ${u_i}$, ${p_i}$, ${q_i}$; so ${u_i(x_i)}$ is the expected utility of bidding ${\sigma_i(x_i)}$ in the equilibrium. Revelation does the exact opposite: it lets authors promiscuously mix the concepts of values and bids, and pushes ${\sigma}$ out of the picture altogether.

You can use a wide range of wild, cultivated or supermarket greens in this recipe. Consider nettles, beet tops, turnip tops, spinach, or watercress in place of chard. The combination is also up to you so choose the ones you like most.

— Y. Ottolenghi. Plenty More

In this post I'll describe how I come up with geometry proposals for olympiad-style contests. In particular, I'll go into detail about how I created the following two problems, which were the first olympiad problems I got onto a contest. I don't claim this is the only way to write such problems; it just happens to be the approach I use, and it has consistently gotten me reasonably good results.

[USA December TST for 56th IMO] Let ${ABC}$ be a triangle with incenter ${I}$ whose incircle is tangent to ${\overline{BC}}$, ${\overline{CA}}$, ${\overline{AB}}$ at ${D}$, ${E}$, ${F}$, respectively. Denote by ${M}$ the midpoint of ${\overline{BC}}$ and let ${P}$ be a point in the interior of ${\triangle ABC}$ so that ${MD = MP}$ and ${\angle PAB = \angle PAC}$. Let ${Q}$ be a point on the incircle such that ${\angle AQD = 90^{\circ}}$. Prove that either ${\angle PQE = 90^{\circ}}$ or ${\angle PQF = 90^{\circ}}$.

[Taiwan TST Quiz for 56th IMO] In scalene triangle ${ABC}$ with incenter ${I}$, the incircle is tangent to sides ${CA}$ and ${AB}$ at points ${E}$ and ${F}$. The tangents to the circumcircle of ${\triangle AEF}$ at ${E}$ and ${F}$ meet at ${S}$. Lines ${EF}$ and ${BC}$ intersect at ${T}$. Prove that the circle with diameter ${ST}$ is orthogonal to the nine-point circle of ${\triangle BIC}$.

## 1. General Procedure

Here are the main ingredients you’ll need.

• The ability to consistently solve medium to hard olympiad geometry problems. The intuition you have from being a contestant proves valuable when you go about looking for things.
• In particular, a good eye: in an accurate diagram, you should be able to notice if three points look collinear or if four points are concyclic, and so on. Fortunately, this is something you’ll hopefully have just from having done enough olympiad problems.
• Geogebra, or some other software that will let you quickly draw and edit diagrams.

With that in mind, here’s the gist of what you do.

1. Start with a configuration of your choice; something that has a bit of nontrivial structure in it, and add something more to it. For example, you might draw a triangle with its incircle and then add in the excircle tangency point, and the circle centered at the midpoint of ${BC}$ passing through both points (taking advantage of the fact that the two tangency points are equidistant from ${B}$ and ${C}$).
2. Start playing around, adding in points and so on to see if anything interesting happens. You might be guided by some actual geometry constructions: for example, if you know that the starting configuration has a harmonic bundle in it, you might project this bundle to obtain the new points to play with.
3. Keep going with this until you find something unexpected: three points are collinear, four points are cyclic, or so on. Perturb the diagram to make sure your conjecture looks like it’s true in all cases.
4. Figure out why this coincidence happened. This will probably add more points to your figure, since you often need to construct more auxiliary points to prove the conjecture that you have found.
5. Repeat the previous two steps to your satisfaction.
6. Once you are happy with what you have, you have a nontrivial statement and probably several things that are equivalent to it. Pick the one that is most elegant (or hardest), and erase auxiliary points you added that are not needed for the problem statement.
7. Look for other ways to reduce the number of points even further, by finding other equivalent formulations that have fewer points.

Or shorter yet: build up, then tear down.

None of this makes sense written this abstractly, so now let me walk you through the two problems I wrote.

## 2. The December TST Problem

In this narrative, the point names might be a little strange at first, because (to make the story follow-able) I used the point names that ended up in the final problem, rather than ones I initially gave. Please bear with me!

I began by drawing a triangle ${ABC}$ (always a good start…) and its incircle, tangent to side ${BC}$ at ${D}$. Then, I added in the excircle touch point ${T}$, and drew in the circle with diameter ${DT}$, which was centered at the midpoint ${M}$. This was a coy way of using the fact that ${MD = MT}$; I wanted to see whether it would give me anything interesting.

So, I now had the following picture.

Now I had two circles passing through the point ${D}$, so I added in ${Q}$, their second intersection. But really, this point ${Q}$ can be thought of another way. If we let ${DS}$ be the diameter of the incircle, then as ${DT}$ is the other diameter, ${Q}$ is actually just the foot of the altitude from ${D}$ to line ${ST}$.

But recall that ${A}$, ${S}$, ${T}$ are collinear! (Again, this is why it’s helpful to be familiar with “standard” contest configurations; you see these kind of things immediately.) So ${Q}$ in fact lies on line ${AT}$.

This was pretty cool, though not yet interesting enough to be a contest problem. So I looked for more things that might be true.

I don’t remember what I tried next; it didn’t do anything interesting. But I do remember the thing I tried after that: I drew in the angle bisector, line ${AI}$. And then, I noticed a big coincidence: the first intersection of ${AI}$ with the circle with diameter ${DT}$ seemed to lie on line ${DE}$! I was initially confused by this; it didn’t seem like it could possibly be true due to symmetry reasons. But in my diagram, it was indeed correct. A moment later, I realized the reason why this was plausible: in fact, the second intersection of line ${AI}$ with the circle was on line ${DF}$.

Now, I could not quickly see why this was true. So I started trying to prove it; I failed at first, but I did manage to show (via angle chasing) that

$\displaystyle D, P, E \text{ collinear} \iff \angle PQE = 90^\circ.$

So, at least I had an interesting equivalent statement.

After another half hour of trying to prove my conjecture, I finally realized what was happening. The point ${P}$ was the one attached to a particular lemma: the ${A}$-bisector, ${B}$-midline, and ${C}$ touch-chord are concurrent, and from this ${MD = MP}$ just follows by some similar triangles. So, drawing in the point ${N}$ (the midpoint of ${AB}$), I had the full configuration which gave the answer to my conjecture.

Finally, I had to clean up the mess that I had made. How could I do this? Well, the points ${N}$, ${S}$ could be eliminated easily enough. And we could re-define ${Q}$ to be a point on the incircle such that ${\angle AQD = 90^\circ}$. This actually eliminated the green circle and point ${T}$ altogether, provided we defined ${P}$ by just saying that it was on the angle bisector, and that ${MD = MP}$. (So while the circle was still implicit in the condition ${MD = MP}$, it was no longer explicitly part of the problem.)

Finally, we could even remove the line through ${D}$, ${P}$ and ${E}$; we ask the contestant to prove ${\angle PQE = 90^\circ}$.

And that was it!

## 3. The Taiwan TST Problem

In fact, the starting point of this problem was the same lemma which provided the key to the previous solution: the circle with diameter ${BC}$ intersects the ${B}$ and ${C}$ bisectors on the ${A}$ touch chord. Thus, we had the following diagram.

The main idea I had was to look at the points ${D}$, ${X}$, ${Y}$ in conjunction with each other. Specifically, this was the orthic triangle of ${\triangle BIC}$, a situation which I had remembered from working on Iran TST 2009, Problem 9. So, I decided to see what would happen if I drew in the nine-point circle of ${\triangle BIC}$. Naturally, this induces the midpoint ${M}$ of ${BC}$.

At this point, notice (or recall!) that line ${AM}$ is concurrent with lines ${DI}$ and ${EF}$.

So the nine-point circle of the problem is very tied down to the triangle ${BIC}$. Now, since I was in the mood for something projective, I constructed the point ${T}$, the intersection of lines ${EF}$ and ${BC}$. In fact, what I was trying to do was take perspectivity through ${I}$. From this we actually deduce that ${(T,K;X,Y)}$ is a harmonic bundle.

Now, what could I do with this picture? I played around looking for some coincidences, but none immediately presented themselves. But I was enticed by the point ${T}$, which was somehow related to the cyclic complete quadrilateral ${XYMD}$. So, I went ahead and constructed the pole of ${T}$ to the nine-point circle, letting it hit line ${BC}$ at ${L}$. This was aimed at “completing” the picture of a cyclic quadrilateral and the pole of an intersection of two sides. In particular, ${(T,L;D,M)}$ was harmonic too.

I spent a long time thinking about how I could make this into a problem. I unfortunately don't remember exactly what things I tried, other than the fact that I was taking a lot of perspectivities. In particular, the “busiest” point in the picture is ${K}$, so it makes sense to try and take perspectivities through it. Especially enticing was the harmonic bundle

$\displaystyle \left( \overline{KT}, \overline{KL}; \overline{KD}, \overline{KM} \right) = -1.$

How could I use this to get a nice result?

Finally, after about half an hour, I got the right idea. We could take this bundle and intersect it with the ray ${AI}$! Now, letting ${N}$ be the midpoint of ${EF}$, we find that three of the points in the harmonic bundle we obtain are ${A}$, ${I}$, and ${N}$; let ${S}$ be the fourth point, which is the intersection of line ${KL}$ with ${AI}$. Then by hypothesis, we ought to have ${(A,I;N,S) = -1}$. But from this we know exactly what the point ${S}$ is. Just look at the circumcircle of triangle ${AEF}$: as this has diameter ${AI}$, we see that ${S}$ is the intersection of the tangents at ${E}$ and ${F}$.

Consequently, we know that the point ${S}$, defined very naturally in terms of the original picture, lies on the polar of ${T}$ to the nine-point circle. By simply asking the contestant to prove this, we thus eliminate all the points ${K}$, ${M}$, ${D}$, ${N}$, ${I}$, ${X}$, and ${Y}$ completely from the picture, leaving only the nine-point circle. Finally, instead of directly asking the contestant to show that ${T}$ lies on the polar of ${S}$, one can rephrase the problem as saying “the circle with diameter ${ST}$ is orthogonal to the nine-point circle of ${\triangle BIC}$”, concealing all the work that went into the creation of the problem.

Fantastic.

# The Mixtilinear Incircle

This blog post corresponds to my newest olympiad handout on mixtilinear incircles.

My favorite circle associated to a triangle is the ${A}$-mixtilinear incircle. While it rarely shows up on olympiads, it is one of the richest configurations I have seen, with many unexpected coincidences, and I would be overjoyed if it became fashionable within the coming years.

Here’s the picture:

The points ${D}$ and ${E}$ are the contact points of the incircle and ${A}$-excircle on the side ${BC}$. Points ${M_A}$, ${M_B}$, ${M_C}$ are the midpoints of the arcs.

As a challenge to my recent USAMO class (I taught at A* Summer Camp this year), I asked them to find as many “coincidences” in the picture as they could (just to illustrate the richness of the configuration). I invite you to do the same with the picture above.

The results of this exercise were somewhat surprising. Firstly, I found out that students without significant olympiad experience can’t “see” cyclic quadrilaterals in a picture. Through lots of training I’ve gained the ability to notice, with some accuracy, when four points in a diagram are concyclic. This has taken me a long way both in setting problems and solving them. (Aside: I wonder if it might be possible to train this skill by e.g. designing an “eyeballing” game with real olympiad problems. I would totally like to make this happen.)

The other two things that happened: one, I discovered one new property while preparing the handout, and two, a student found yet another property which I hadn’t known to be true before. In any case, I ended up covering the board in plenty of ink.

Here’s the list of properties I have.

1. First, the classic: by Pascal’s Theorem on ${TM_CCABM_B}$, we find that points ${B_1}$, ${I}$, ${C_1}$ are collinear; hence the contact chord of the ${A}$-mixtilinear incircle passes through the incenter. The special case of this problem with ${AB = AC}$ appeared in IMO 1978.
• Then, by Pascal on ${BCM_CTM_AA}$, we discover that lines ${BC}$, ${B_1C_1}$, and ${TM_A}$ are also concurrent.
• This also lets us establish (by angle chasing) that ${BB_1IT}$ and ${CC_1IT}$ are concyclic. In addition, lines ${BM_B}$ and ${CM_C}$ are tangents to these circumcircles at ${I}$ (again by angle chasing).
2. An Iran 2002 problem asks to show that ray ${TI}$ passes through the point diametrically opposite ${M_A}$ on the circumcircle. This is solved by noticing that ${TA}$ is a symmedian of the triangle ${TB_1C_1}$ and (by the previous fact) that ${TI}$ is a median. This is the key lemma in Taiwan TST 2014, Problem 3, which is one of my favorite problems (a nice result by Cosmin Pohoatza).
3. Lines ${AT}$ and ${AE}$ are isogonal. This was essentially EGMO 2012, Problem 5, and the “morally correct” solution is to do an inversion at ${A}$ followed by a reflection along the ${\angle A}$-bisector (sometimes we call this a “${\sqrt{bc}}$ inversion”).
• As a consequence of this, one can also show that lines ${TA}$ and ${TD}$ are isogonal (with respect to ${\angle BTC}$).
• One can also deduce from this that the circumcircle of ${\triangle TDM_A}$ passes through the intersection of ${BC}$ and ${AM_A}$.
4. Lines ${AD}$ and ${TM_A}$ meet on the mixtilinear incircle. (Homothety!)
5. Moreover, line ${AT}$ passes through the exsimilicenter of the incircle and circumcircle, by, say, Monge d’Alembert. Said another way, the mentioned exsimilicenter is the isogonal conjugate of the Nagel point.

To put that all into one picture: