Epistemic status: highly dubious. I found almost no literature doing anything quite like what follows, which unsettles me because it makes it likely that I’m overcomplicating things significantly.
Recently I was working on an elegant problem, which was the original Problem 6 for the 2015 International Math Olympiad. It reads as follows:
[IMO Shortlist 2015 Problem C6] Let $S$ be a nonempty set of positive integers. We say that a positive integer $n$ is clean if it has a unique representation as a sum of an odd number of distinct elements from $S$. Prove that there exist infinitely many positive integers that are not clean.
Proceeding by contradiction, one can prove (try it!) that in fact all sufficiently large integers have exactly one representation as a sum of an even subset of $S$ as well. Then, the problem reduces to the following:
Show that if $s_1 < s_2 < s_3 < \cdots$ is an increasing sequence of positive integers and $P(x)$ is a nonzero polynomial, then we cannot have
$$\prod_{j=1}^{\infty} \left( 1 - x^{s_j} \right) = P(x)$$
as formal series.
To see this, note that the coefficient of $x^n$ in $\prod_j (1 - x^{s_j})$ counts even subsets of $S$ summing to $n$ minus odd subsets summing to $n$, so all sufficiently large $n$ have coefficient $1 - 1 = 0$. Now, the intuitive idea is obvious: the root $x = 1$ appears with finite multiplicity in $P$, so we can put $P(x) = (1-x)^k Q(x)$ where $Q(1) \neq 0$, and then we get that $1 - x$ divides the left-hand side $\prod_j (1 - x^{s_j})$ too many times, right?
Well, there are some obvious issues with this “proof”: for example, consider the equality
$$1 = (1-x)(1+x)(1+x^2)(1+x^4)(1+x^8)\cdots.$$
The right-hand side is “divisible” by $1 - x$, but the left-hand side is not (as a polynomial).
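To make the telescoping concrete, here is a quick Python sanity check (my own illustration, not part of the original discussion): the partial products of $(1-x)(1+x)(1+x^2)(1+x^4)\cdots$ collapse to $1 - x^{2^k}$, whose coefficients converge to those of the constant series $1$.

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def partial_product(k):
    """(1 - x)(1 + x)(1 + x^2)...(1 + x^(2^(k-1))) as a coefficient list."""
    prod = [1, -1]  # 1 - x
    for j in range(k):
        factor = [0] * (2 ** j + 1)
        factor[0], factor[-1] = 1, 1  # 1 + x^(2^j)
        prod = poly_mul(prod, factor)
    return prod

for k in range(1, 6):
    p = partial_product(k)
    # Everything cancels except 1 - x^(2^k).
    assert p[0] == 1 and p[2 ** k] == -1 and sum(abs(c) for c in p) == 2
```

So each fixed coefficient eventually stabilizes at the corresponding coefficient of $1$, which is exactly the sense in which the infinite product equals $1$ as a formal series.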
But we still want to use the idea of plugging in $x = 1$, so what is the right thing to do? It turns out that this is a complete minefield, and there are a lot of very subtle distinctions that seem to not be explicitly mentioned in many places. I think I have a complete answer now, but it’s long enough to warrant this entire blog post.
Here’s the short version: there are actually two distinct notions of “generating function”, namely a “formal series” and a “functional series”. They use exactly the same notation but are two different types of objects, and this ends up being the source of lots of errors, because formal series do not allow substituting values for $x$, while functional series do.
Spoiler: we’ll need the asymptotic growth $p(n) = e^{O(\sqrt n)}$ of the partition function $p(n)$.
2. Formal Series ≠ Functional Series
I’m assuming you’ve all heard the definition of $\sum_n c_n x^n$. It turns out, unfortunately, that this isn’t everything: there are actually two types of objects at play here. They are usually called formal power series and power series, but for this post I will use the more descriptive names formal series and functional series. I’ll do everything over $\mathbb{C}$, but one can of course use $\mathbb{R}$ instead.
The formal series is easier to describe:
A formal series $F$ is an infinite sequence $(c_0, c_1, c_2, \dots)$ of complex numbers. We often denote it by $\sum_n c_n x^n$. The set of formal series is denoted $\mathbb{C}[[x]]$.
This is the “algebraic” viewpoint: it’s a sequence of coefficients. Note that there is no worry about convergence issues or “plugging in $x$”.
On the other hand, a functional series is more involved, because it has to support substitution of values of $x$ and worry about convergence issues. So here are the necessary pieces of data:
A functional series $f$ is a function $f \colon U \to \mathbb{C}$, where $U$ is an open subset of $\mathbb{C}$ containing $0$, such that there exist complex numbers $c_0, c_1, c_2, \dots$ with
$$f(z) = \lim_{N \to \infty} \sum_{n=0}^{N} c_n z^n \qquad \text{for every } z \in U.$$
Thus formal and functional series, despite having the same notation, have different types: a formal series is a sequence, while a functional series is a function that happens to be expressible as an infinite sum within its domain.
Of course, from every functional series $f$ we can extract its coefficients $c_n$ and make them into a formal series $\sum_n c_n x^n$. So, for lack of better notation:
If $F$ is a formal series, and $f$ is a functional series whose coefficients equal those of $F$, then we write $F \simeq f$.
3. Finite operations
Now that we have formal and functional series, we can define sums. Since these are different types of objects, we will have to run definitions in parallel and then ideally check that they respect $\simeq$.
For formal series:
Let $F = (c_0, c_1, \dots)$ and $G = (d_0, d_1, \dots)$ be formal series. Then we set
$$F + G := \left( c_n + d_n \right)_{n \ge 0} \qquad\text{and}\qquad F \cdot G := \left( \sum_{j+k=n} c_j d_k \right)_{n \ge 0}.$$
This makes $\mathbb{C}[[x]]$ into a commutative ring, with additive identity $(0, 0, 0, \dots)$ and multiplicative identity $(1, 0, 0, \dots)$.
We also define the derivative by $F' := \left( (n+1) c_{n+1} \right)_{n \ge 0} = (c_1, 2c_2, 3c_3, \dots)$.
It’s probably more intuitive to write these definitions as
$$\sum_n c_n x^n + \sum_n d_n x^n = \sum_n (c_n + d_n) x^n, \qquad \left( \sum_n c_n x^n \right)\left( \sum_n d_n x^n \right) = \sum_n \left( \sum_{j+k=n} c_j d_k \right) x^n,$$
$$\left( \sum_n c_n x^n \right)' = \sum_n (n+1) c_{n+1} x^n,$$
and in what follows I’ll start to use $\sum_n c_n x^n$ more. But officially, all definitions for formal series are in terms of the coefficients alone; the presence of $x$ serves as motivation only.
Exercise: Show that if $F = \sum_n c_n x^n$ is a formal series, then it has a multiplicative inverse if and only if $c_0 \neq 0$.
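Because every operation above touches only finitely many coefficients, formal series are easy to model on a computer by truncating at a fixed order $N$. Here is a minimal Python sketch (my own; the function names are made up), including the inverse from the exercise, which exists precisely when the constant term is nonzero:

```python
from fractions import Fraction

N = 10  # keep only the coefficients c_0, ..., c_{N-1}

def f_add(F, G):
    return [a + b for a, b in zip(F, G)]

def f_mul(F, G):
    # (F * G)_n = sum_{j+k=n} c_j d_k, a finite sum for each n.
    return [sum(F[j] * G[n - j] for j in range(n + 1)) for n in range(N)]

def f_deriv(F):
    return [(n + 1) * F[n + 1] for n in range(N - 1)] + [0]

def f_inv(F):
    # Solve F * G = 1 coefficient by coefficient; needs F[0] != 0.
    assert F[0] != 0
    G = [Fraction(1, 1) / F[0]] + [Fraction(0)] * (N - 1)
    for n in range(1, N):
        G[n] = -sum(F[j] * G[n - j] for j in range(1, n + 1)) / F[0]
    return G

# 1/(1 - x) = 1 + x + x^2 + ... as formal series.
geom = f_inv([1, -1] + [0] * (N - 2))
assert geom == [Fraction(1)] * N
```

Note that no value is ever substituted for $x$: the recurrence in `f_inv` is pure coefficient bookkeeping, which is the whole point of the formal viewpoint.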
On the other hand, with functional series, the above operations are even simpler: we simply add, multiply, or differentiate the underlying functions pointwise (shrinking the domain $U$ if necessary).
Now, for these finite operations, everything works as you expect: if $F \simeq f$ and $G \simeq g$, then $F + G \simeq f + g$, $F \cdot G \simeq f \cdot g$, and $F' \simeq f'$.
So far so good: everything works as expected, as long as we’re doing finite operations. But once we step beyond that, things begin to go haywire.
4. Limits
We need to start considering limits of formal series $F_1, F_2, \dots$ and of functional series $f_1, f_2, \dots$, since we are trying to make progress towards infinite sums and products. Once we do this, things start to burn.
Let $F = \sum_n c_n x^n$ and $G = \sum_n d_n x^n$ be formal series, and define the distance between them by
$$d(F, G) := 2^{-\min\left\{ n \,:\, c_n \neq d_n \right\}}$$
(with $d(F, G) = 0$ if $F = G$). This function $d$ makes $\mathbb{C}[[x]]$ into a metric space, so we can discuss limits in this space. Actually, it is a normed vector space, with the norm obtained by setting $\|F\| := d(F, 0)$ above.
Thus, $\lim_{k \to \infty} F_k = F$ if and only if each coefficient of $F_k$ eventually stabilizes as $k \to \infty$. For example, as formal series we have that $x$, $x^2$, $x^3$, … converges to $0$, which we write as
$$\lim_{k \to \infty} x^k = 0.$$
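The metric is easy to compute on truncations: it only asks for the first index at which two coefficient sequences disagree. A small Python sketch (my own illustration):

```python
def f_dist(F, G, N=64):
    """d(F, G) = 2^(-min{n : c_n != d_n}), checked on the first N coefficients.
    Series are represented by coefficient functions n -> c_n."""
    for n in range(N):
        if F(n) != G(n):
            return 2.0 ** (-n)
    return 0.0

def x_pow(k):
    # The formal series x^k: coefficient 1 at n = k, else 0.
    return lambda n: 1 if n == k else 0

zero = lambda n: 0

# d(x^k, 0) = 2^(-k), so x^k -> 0 in this metric.
assert [f_dist(x_pow(k), zero) for k in range(5)] == [1.0, 0.5, 0.25, 0.125, 0.0625]
```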
As for functional series, since they are functions on the same open set $U$, we can use pointwise convergence or the stronger uniform convergence; we’ll say explicitly which one we’re doing.
Example 10 (Limits don’t work at all)
In what follows, $F_k \simeq f_k$ for every $k$.
- Here is an example showing that even if $\lim_k F_k = F$ exists, the functions $f_k$ may not converge even pointwise. Indeed, just take $F_k = x^k$ as before, and let $U = \{ z : |z| < 2 \}$.
- Here is an example showing that even if $f_k \to f$ uniformly, $\lim_k F_k$ may not exist. Take the constant series $F_k = \frac{(-1)^k}{k}$. Then $f_k \to 0$ uniformly, but $\lim_k F_k$ doesn’t exist, because the constant term never stabilizes (even though it converges to $0$ in the usual sense).
- The following example from this math.SE answer by Robert Israel shows that it’s possible that $F = \lim_k F_k$ exists, and $f_k \to f$ pointwise, and still $F \not\simeq f$. Let $U$ be the open unit disk, and for each $k$ let $A_k$ be the annulus $\{ z : \frac1k \le |z| \le 1 - \frac1k \}$ with a thin radial slit removed; the slit shrinks and drifts with $k$, so that $\mathbb{C} \setminus A_k$ is connected while every fixed $z \in U \setminus \{0\}$ lies in $A_k$ for all large $k$. By Runge’s theorem there’s a polynomial $p_k(z)$ such that
$$\left| z^k p_k(z) - z \right| < \frac1k \qquad \text{for all } z \in A_k.$$
Then
$$f_k(z) = z^k p_k(z)$$
is the desired counterexample (with $F_k$ being the sequence of coefficients from $f_k$). Indeed by construction $\lim_k F_k = 0$, since the coefficients of $1, x, \dots, x^{k-1}$ in $F_k$ vanish for each $k$. Alas, $f_k(z) \to z$ for every $z \in U$, so $f_k$ converges pointwise to the identity function.
To be fair, we do have the following saving grace:
Theorem 11 (Uniform limits preserve $\simeq$): Suppose $F_k \simeq f_k$ for every $k$, where the $f_k$ are defined on a common open set $U$. If $F = \lim_k F_k$ exists as a formal series and $f_k \to f$ uniformly, then $F \simeq f$.
Proof: Here is a proof, copied from this math.SE answer by Joey Zhou. WLOG $F = 0$, and write $f(z) = \sum_n c_n z^n$ and $f_k(z) = \sum_n c_{n,k} z^n$. It suffices to show that $c_n = 0$ for all $n$. Choose any $r > 0$ such that the closed disk of radius $r$ lies inside $U$. By Cauchy’s integral formula, we have
$$c_n = \frac{1}{2\pi i} \oint_{|z| = r} \frac{f(z)}{z^{n+1}} \, dz = \lim_{k \to \infty} \frac{1}{2\pi i} \oint_{|z| = r} \frac{f_k(z)}{z^{n+1}} \, dz = \lim_{k \to \infty} c_{n,k}$$
since $f_k$ converges uniformly to $f$ on $\{ |z| = r \}$. Hence, $c_n = \lim_k c_{n,k}$. Since $\lim_k F_k = 0$, we have $c_{n,k} = 0$ for all sufficiently large $k$, and the result follows. $\blacksquare$
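The exchange of limit and contour integral is also pleasant to see numerically. Here is a small Python sketch (my own illustration, not part of the quoted proof) that recovers Taylor coefficients from the circle average that Cauchy’s formula reduces to:

```python
import cmath

def coeff_via_cauchy(f, n, r=0.5, M=4096):
    """Approximate c_n = (1/(2 pi i)) * oint_{|z|=r} f(z)/z^(n+1) dz.
    Parametrizing z = r e^(i theta) turns the integral into the average
    of f(z) * z^(-n) over M equally spaced points on the circle."""
    total = 0j
    for m in range(M):
        z = r * cmath.exp(2j * cmath.pi * m / M)
        total += f(z) / z ** n
    return total / M

# f(z) = 1/(1 - z) has every Taylor coefficient equal to 1.
f = lambda z: 1 / (1 - z)
for n in range(5):
    assert abs(coeff_via_cauchy(f, n) - 1) < 1e-9
```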
The take-away from this section is that limits are relatively poorly behaved.
5. Infinite sums and products
Naturally, infinite sums and products are defined by taking the limit of partial sums and partial products. The following example (from math.SE again) shows the nuances of this behavior.
Example 12 (On $\sum_{n \ge 0} \left( \frac{1+x}{2} \right)^n$)
The sum $\sum_{n \ge 0} \left( \frac{1+x}{2} \right)^n$ does not make sense as a formal series: the $n$th term contributes $2^{-n}$ to the constant coefficient, so the constant term of the partial sums changes at every step.
But this sum does converge (uniformly, even) to a functional series on $U = \left\{ z : |z| < \frac12 \right\}$, namely to $\frac{2}{1-z}$.
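Both halves of this phenomenon are easy to watch numerically with the concrete series $\sum_{n \ge 0} \left(\frac{1+x}{2}\right)^n$ (a Python sketch of my own): the constant coefficients of the partial sums keep moving forever, yet the functions themselves converge geometrically.

```python
def partial_sum_constant_term(N):
    # The n-th term ((1+x)/2)^n contributes 2^(-n) to the constant
    # coefficient, so the partial sums' constant term changes at every step.
    return sum(0.5 ** n for n in range(N + 1))

terms = [partial_sum_constant_term(N) for N in range(10)]
assert all(a != b for a, b in zip(terms, terms[1:]))  # never stabilizes

# Pointwise, though, the sum is geometric with ratio (1+z)/2 and
# converges to 2/(1-z) whenever |1+z| < 2.
z = 0.1
val = sum(((1 + z) / 2) ** n for n in range(300))
assert abs(val - 2 / (1 - z)) < 1e-12
```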
Now the upshot is that one example of a convergent formal sum is the expression $\sum_n c_n x^n$ itself: it is the formal limit of its partial sums $c_0 + c_1 x + \dots + c_N x^N$. This means we can use standard “radius of convergence” arguments to transfer a formal series into a functional one.
Theorem 13: Let $F = \sum_n c_n x^n$ be a formal series whose radius of convergence $R = \left( \limsup_n |c_n|^{1/n} \right)^{-1}$ is positive. Then there is a functional series $f$ on $U = \{ z : |z| < R \}$ with $F \simeq f$.
Proof: Let $f(z) = \sum_n c_n z^n$, and let $f_N$ and $F_N$ be the corresponding partial sums of $f$ and $F$. Then by the Cauchy–Hadamard theorem, we have $f_N \to f$ uniformly on (compact subsets of) $\{ z : |z| < R \}$. Also, $F_N \to F$ by construction, so the previous theorem applies. $\blacksquare$
This works less well with products: for example we have
$$1 = (1-x)(1+x)(1+x^2)(1+x^4)\cdots$$
as formal series, but we can’t “plug in $x = 1$”, for example: every partial product on the right-hand side vanishes at $x = 1$, while the left-hand side does not.
6. Finishing the original problem
We finally return to the original problem: we wish to show that the equality
$$\prod_{j=1}^{\infty} \left( 1 - x^{s_j} \right) = P(x)$$
cannot hold as formal series. We know that tacitly, this just means
$$\lim_{N \to \infty} \prod_{j=1}^{N} \left( 1 - x^{s_j} \right) = P(x)$$
as formal series.
Here is a solution obtained by only considering coefficients, presented by Qiaochu Yuan in this MathOverflow question.
Both sides have constant coefficient $1$, so we may invert them; thus it suffices to show we cannot have
$$\frac{1}{P(x)} = \prod_{j=1}^{\infty} \frac{1}{1 - x^{s_j}}$$
as formal power series.
The coefficients on the LHS have asymptotic growth on the order of a polynomial times an exponential (governed by the roots of $P$).
On the other hand, the coefficients of the RHS can be shown to have growth both strictly larger than any polynomial (by truncating the product) and strictly smaller than any exponential (by comparing to the growth rate in the case where $s_j = j$, which gives the partition function $p(n)$ mentioned before). So the two rates of growth can’t match. $\blacksquare$
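Both growth claims are pleasant to sanity-check numerically. Here is a small Python sketch (mine, not part of Qiaochu’s argument) that computes the RHS coefficients as restricted partition counts and tests them against the classical bound $p(n) < e^{\pi\sqrt{2n/3}}$:

```python
import math

def partition_counts(parts, N):
    """Coefficients of prod_j 1/(1 - x^(s_j)) up to x^N: the number of ways
    to write n as a sum of elements of `parts`, repetitions allowed."""
    c = [0] * (N + 1)
    c[0] = 1
    for s in parts:
        for n in range(s, N + 1):
            c[n] += c[n - s]
    return c

N = 200
p = partition_counts(range(1, N + 1), N)  # the full partition function p(n)
assert p[:6] == [1, 1, 2, 3, 5, 7]

# Smaller than any exponential: p(n) < exp(pi * sqrt(2n/3)).
assert all(p[n] < math.exp(math.pi * math.sqrt(2 * n / 3)) for n in range(1, N + 1))

# Larger than any polynomial: p(n) already dwarfs n^5 by n = 200.
assert p[N] > N ** 5
```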