Things Fourier

For some reason several classes at MIT this year involve Fourier analysis. I was always confused about this as a high schooler, because no one ever gave me the “orthonormal basis” explanation, so here goes. As a bonus, I also prove a form of Arrow’s Impossibility Theorem using binary Fourier analysis, and then talk about the fancier generalizations using Pontryagin duality and the Peter-Weyl theorem.

In what follows, we let {\mathbb T = \mathbb R/\mathbb Z} denote the “circle group”, thought of as the additive group of “real numbers modulo {1}”. There is a canonical map {e : \mathbb T \rightarrow \mathbb C} sending {\mathbb T} to the complex unit circle, given by {e(\theta) = \exp(2\pi i \theta)}.

Disclaimer: I will deliberately be sloppy with convergence issues, in part because I don’t fully understand them myself, and in part because I don’t care.

1. Synopsis

Suppose we have a domain {Z} and are interested in functions {f : Z \rightarrow \mathbb C}. Naturally, the set of such functions forms a complex vector space. We like to equip this space with a positive definite inner product. The idea of Fourier analysis is to then select an orthonormal basis for this space of functions, say {(e_\xi)_{\xi}}, which we call the characters; the indices {\xi} are called frequencies. In that case, since we have a basis, every function {f : Z \rightarrow \mathbb C} becomes a sum

\displaystyle  f(x) = \sum_{\xi} \widehat f(\xi) e_\xi

where {\widehat f(\xi)} are complex coefficients of the basis; appropriately we call {\widehat f} the Fourier coefficients. The variable {x \in Z} is referred to as the physical variable. This is generally good because the characters are deliberately chosen to be nice “symmetric” functions, like sine or cosine waves or other periodic functions. Thus we decompose an arbitrarily complicated function into a sum of nice ones.

For convenience, we record a few facts about orthonormal bases.

Proposition 1 (Facts about orthonormal bases)

Let {V} be a complex Hilbert space with inner form {\left< -,-\right>} and suppose {x = \sum_\xi a_\xi e_\xi} and {y = \sum_\xi b_\xi e_\xi} where {e_\xi} are an orthonormal basis. Then

\displaystyle  \begin{aligned} \left< x,x \right> &= \sum_\xi |a_\xi|^2 \\ a_\xi &= \left< x, e_\xi \right> \\ \left< x,y \right> &= \sum_\xi a_\xi \overline{b_\xi}. \end{aligned}
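These three facts are easy to sanity-check numerically. Here is a small sketch using numpy (the basis below is a random unitary basis of {\mathbb C^5} obtained from a QR decomposition; the specific matrices are illustrative, not from the text):

```python
import numpy as np

# A random orthonormal basis of C^5: the columns of Q from a QR decomposition.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
Q, _ = np.linalg.qr(M)          # columns of Q play the role of the e_xi

x = rng.normal(size=5) + 1j * rng.normal(size=5)
y = rng.normal(size=5) + 1j * rng.normal(size=5)

# Coefficients a_xi = <x, e_xi>; here <u,v> = sum u_i conj(v_i).
a = Q.conj().T @ x
b = Q.conj().T @ y

assert np.isclose(np.vdot(x, x), np.sum(np.abs(a) ** 2))   # <x,x> = sum |a_xi|^2
assert np.isclose(np.vdot(y, x), np.sum(a * b.conj()))     # <x,y> = sum a_xi conj(b_xi)
assert np.allclose(x, Q @ a)                               # x = sum a_xi e_xi
```

(Note `np.vdot(u, v)` conjugates its first argument, so `np.vdot(y, x)` computes {\left< x,y\right>} in our convention.)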

2. Common Examples

2.1. Binary Fourier analysis on {\{\pm1\}^n}

Let {Z = \{\pm 1\}^n} for some positive integer {n}, so we are considering functions {f(x_1, \dots, x_n)} of binary arguments {x_i \in \{\pm 1\}}. Then the functions {Z \rightarrow \mathbb C} form a {2^n}-dimensional vector space {\mathbb C^Z}, and we endow it with the inner form

\displaystyle  \left< f,g \right> = \frac{1}{2^n} \sum_{x \in Z} f(x) \overline{g(x)}.

In particular,

\displaystyle  \left< f,f \right> = \frac{1}{2^n} \sum_{x \in Z} \left\lvert f(x) \right\rvert^2

is the average of the squares; this establishes also that {\left< -,-\right>} is positive definite.

In that case, the multilinear monomials form a basis of {\mathbb C^Z}, namely the polynomials

\displaystyle  \chi_S(x_1, \dots, x_n) = \prod_{s \in S} x_s.

Thus our frequency set is actually the collection of subsets {S \subseteq \{1, \dots, n\}}, and we have a decomposition

\displaystyle  f = \sum_{S \subseteq \{1, \dots, n\}} \widehat f(S) \chi_S.

Example 2 (An example of binary Fourier analysis)

Let {n = 2}. Then binary functions {\{ \pm 1\}^2 \rightarrow \mathbb C} have a basis given by the four polynomials

\displaystyle  1, \quad x_1, \quad x_2, \quad x_1x_2.

For example, consider the function {f} which is {1} at {(1,1)} and {0} elsewhere. Then we can put

\displaystyle  f(x_1, x_2) = \frac{x_1+1}{2} \cdot \frac{x_2+1}{2} = \frac14 \left( 1 + x_1 + x_2 + x_1x_2 \right).

So the Fourier coefficients are {\widehat f(S) = \frac 14} for each of the four {S}‘s.
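This example can be verified mechanically. Here is a minimal sketch in Python (the helper names `chi` and `fourier_coeff` are mine, chosen to mirror {\chi_S} and {\widehat f(S) = \left< f, \chi_S \right>}):

```python
from itertools import product

def chi(S, x):
    # chi_S(x) = product of x_s over s in S (S given as a tuple of indices)
    p = 1
    for s in S:
        p *= x[s]
    return p

def fourier_coeff(f, S, n):
    # f_hat(S) = <f, chi_S> = (1/2^n) sum_x f(x) chi_S(x); chi_S is real-valued
    pts = list(product([1, -1], repeat=n))
    return sum(f(x) * chi(S, x) for x in pts) / len(pts)

# The function that is 1 at (1,1) and 0 elsewhere:
f = lambda x: 1 if x == (1, 1) else 0

# All four Fourier coefficients equal 1/4, as computed in Example 2.
for S in [(), (0,), (1,), (0, 1)]:
    assert fourier_coeff(f, S, 2) == 0.25
```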

This notion is useful in particular for binary functions {f : \{\pm1\}^n \rightarrow \{\pm1\}}; for these functions (and products thereof), we always have {\left< f,f \right> = 1}.

It is worth noting that the frequency {\varnothing} plays a special role:

Exercise 3

Show that

\displaystyle  \widehat f(\varnothing) = \frac{1}{|Z|} \sum_{x \in Z} f(x).

2.2. Fourier analysis on finite groups {Z}

This is the Fourier analysis used in this post and this post. Here, we have a finite abelian group {Z}, and consider functions {Z \rightarrow \mathbb C}; this is a {|Z|}-dimensional vector space. The inner product is the same as before:

\displaystyle  \left< f,g \right> = \frac{1}{|Z|} \sum_{x \in Z} f(x) \overline{g}(x).

Now here is how we generate the characters. We equip {Z} with a non-degenerate symmetric bilinear form

\displaystyle  Z \times Z \xrightarrow{\cdot} \mathbb T \qquad (\xi, x) \mapsto \xi \cdot x.

Experts may already recognize this as a choice of isomorphism between {Z} and its Pontryagin dual. This time the characters are given by

\displaystyle  \left( e_\xi \right)_{\xi \in Z} \qquad \text{where} \qquad e_\xi(x) = e(\xi \cdot x).

In this way, the set of frequencies is also {Z}, but the {\xi \in Z} play very different roles from the “physical” {x \in Z}. (It is not too hard to check these indeed form an orthonormal basis of the function space {\mathbb C^Z}, since we assumed that {\cdot} is non-degenerate.)

Example 4 (Cube roots of unity filter)

Suppose {Z = \mathbb Z/3\mathbb Z}, with the inner form given by {\xi \cdot x = (\xi x)/3}. Let {\omega = \exp(\frac 23 \pi i)} be a primitive cube root of unity. Note that

\displaystyle  e_\xi(x) = \begin{cases} 1 & \xi = 0 \\ \omega^x & \xi = 1 \\ \omega^{2x} & \xi = 2. \end{cases}

Then given {f : Z \rightarrow \mathbb C} with {f(0) = a}, {f(1) = b}, {f(2) = c}, we obtain

\displaystyle  f(x) = \frac{a+b+c}{3} \cdot 1 + \frac{a + \omega^2 b + \omega c}{3} \cdot \omega^x + \frac{a + \omega b + \omega^2 c}{3} \cdot \omega^{2x}.

In this way we derive that the transforms are

\displaystyle  \begin{aligned} \widehat f(0) &= \frac{a+b+c}{3} \\ \widehat f(1) &= \frac{a+\omega^2 b+ \omega c}{3} \\ \widehat f(2) &= \frac{a+\omega b+\omega^2c}{3}. \end{aligned}
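These formulas are exactly a (normalized) discrete Fourier transform, so one can check them against a library DFT. A quick sketch with numpy (the sample values of {a}, {b}, {c} are arbitrary; note that `np.fft.fft` is unnormalized, so we divide by {|Z| = 3}):

```python
import numpy as np

a, b, c = 2.0, -1.0, 5.0
f = np.array([a, b, c], dtype=complex)
w = np.exp(2j * np.pi / 3)          # primitive cube root of unity

# The transforms from Example 4:
f_hat = np.array([(a + b + c) / 3,
                  (a + w**2 * b + w * c) / 3,
                  (a + w * b + w**2 * c) / 3])

# They agree with numpy's DFT divided by |Z| = 3.
assert np.allclose(f_hat, np.fft.fft(f) / 3)

# Inversion: f(x) = sum_k f_hat(k) w^{kx}.
x = np.arange(3)
recon = sum(f_hat[k] * w ** (k * x) for k in range(3))
assert np.allclose(recon, f)
```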

Exercise 5

Show that

\displaystyle  \widehat f(0) = \frac{1}{|Z|} \sum_{x \in Z} f(x).

Olympiad contestants may recognize the previous example as a “roots of unity filter”, which is exactly the point. For concreteness, suppose one wants to compute

\displaystyle  \binom{1000}{0} + \binom{1000}{3} + \dots + \binom{1000}{999}.

In that case, we can consider the function

\displaystyle  w : \mathbb Z/3 \rightarrow \mathbb C

such that {w(0) = 1} but {w(1) = w(2) = 0}. By abuse of notation we will also think of {w} as a function {w : \mathbb Z \twoheadrightarrow \mathbb Z/3 \rightarrow \mathbb C}. Then the sum in question is

\displaystyle  \begin{aligned} \sum_n \binom{1000}{n} w(n) &= \sum_n \binom{1000}{n} \sum_{k=0,1,2} \widehat w(k) \omega^{kn} \\ &= \sum_{k=0,1,2} \widehat w(k) \sum_n \binom{1000}{n} \omega^{kn} \\ &= \sum_{k=0,1,2} \widehat w(k) (1+\omega^k)^{1000}. \end{aligned}

In our situation, we have {\widehat w(0) = \widehat w(1) = \widehat w(2) = \frac13}, and we have evaluated the desired sum. More generally, we can take any periodic weight {w} and use Fourier analysis in order to interchange the order of summation.
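As a worked check of the filter (a sketch; the closed form in the second assertion uses {(1+\omega)^{1000} + (1+\omega^2)^{1000} = 2\cos(1000\pi/3) = -1}, so the filter predicts the exact value {(2^{1000}-1)/3}):

```python
from cmath import exp, pi
from math import comb

# Direct sum: C(1000,0) + C(1000,3) + ... + C(1000,999), exact integer arithmetic.
direct = sum(comb(1000, n) for n in range(0, 1001, 3))

# Roots-of-unity filter: (1/3) sum_{k=0,1,2} (1 + w^k)^1000 with w = exp(2 pi i/3).
w = exp(2j * pi / 3)
filtered = sum((1 + w ** k) ** 1000 for k in range(3)) / 3
assert abs(filtered.real - direct) / direct < 1e-9   # floating-point agreement

# Exact closed form predicted by the filter:
assert direct == (2 ** 1000 - 1) // 3
```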

Example 6 (Binary Fourier analysis)

Suppose {Z = \{\pm 1\}^n}, viewed as an abelian group under pointwise multiplication hence isomorphic to {(\mathbb Z/2\mathbb Z)^{\oplus n}}. Write {\xi = (\xi_1, \dots, \xi_n)} and {x = (x_1, \dots, x_n)} in exponent coordinates as {\xi_i = (-1)^{b_i}} and {x_i = (-1)^{a_i}} with {a_i, b_i \in \{0,1\}}. We pick the dot product defined by

\displaystyle  \xi \cdot x = \frac{1}{2} \sum_i a_i b_i \in \mathbb T,

so that {e(\xi \cdot x) = (-1)^{\sum_i a_i b_i}}. (The exponent coordinates are needed for bilinearity: the group operation is multiplication of the {x_i}, i.e. addition of the {a_i} modulo {2}.)

We claim this coincides with the first example we gave. Indeed, let {S \subseteq \{1, \dots, n\}} and let {\xi \in \{\pm1\}^n} be the vector which is {-1} at positions in {S}, and {+1} at positions not in {S}. Then the character {\chi_S} from the previous example coincides with the character {e_\xi} in the new notation. In particular, {\widehat f(S) = \widehat f(\xi)}.

Thus Fourier analysis on a finite group {Z} subsumes binary Fourier analysis.
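The claimed coincidence of characters can be checked by brute force. A small sketch (here I spell out the exponent-coordinate convention {x_i = (-1)^{a_i}}, {\xi_i = (-1)^{b_i}} explicitly in the code, with {n = 3} as an arbitrary choice):

```python
from itertools import product
from cmath import exp, pi

n = 3
for xi in product([1, -1], repeat=n):
    S = [i for i in range(n) if xi[i] == -1]      # S = positions where xi is -1
    for x in product([1, -1], repeat=n):
        a = [(1 - v) // 2 for v in x]             # x_i  = (-1)^{a_i}
        b = [(1 - v) // 2 for v in xi]            # xi_i = (-1)^{b_i}
        dot = sum(ai * bi for ai, bi in zip(a, b)) / 2   # xi . x in T = R/Z
        e_xi = exp(2j * pi * dot)                 # character e_xi(x) = e(xi . x)
        chi_S = 1
        for s in S:
            chi_S *= x[s]                         # chi_S(x) = prod_{s in S} x_s
        assert abs(e_xi - chi_S) < 1e-12
```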

2.3. Fourier series for functions in {L^2([-\pi, \pi])}

Now we consider the space {L^2([-\pi, \pi])} of square-integrable functions {[-\pi, \pi] \rightarrow \mathbb C}, with inner form

\displaystyle  \left< f,g \right> = \frac{1}{2\pi} \int_{[-\pi, \pi]} f(x) \overline{g(x)}.

Sadly, this is not a finite-dimensional vector space, but fortunately it is a Hilbert space so we are still fine. In this case, an orthonormal basis must allow infinite linear combinations, as long as the sum of squares is finite.

Now, it turns out in this case that

\displaystyle  (e_n)_{n \in \mathbb Z} \qquad\text{where}\qquad e_n(x) = \exp(inx)

is an orthonormal basis for {L^2([-\pi, \pi])}. Thus this time the frequency set {\mathbb Z} is infinite. So every function {f \in L^2([-\pi, \pi])} decomposes as

\displaystyle  f(x) = \sum_n \widehat f(n) \exp(inx)

for suitable coefficients {\widehat f(n)}.

This is a little worse than our finite examples: instead of a finite sum on the right-hand side, we actually have an infinite sum. This is because our set of frequencies is now {\mathbb Z}, which isn’t finite. In this case the {\widehat f} need not be finitely supported, but do satisfy {\sum_n |\widehat f(n)|^2 < \infty}.

Since the frequency set is indexed by {n \in \mathbb Z}, we call this decomposition a Fourier series.

Exercise 7

Show once again

\displaystyle  \widehat f(0) = \frac{1}{2\pi} \int_{[-\pi, \pi]} f(x).

Often we require that the function {f} satisfies {f(-\pi) = f(\pi)}, so that {f} becomes a periodic function, and we can think of it as {f : \mathbb T \rightarrow \mathbb C}.
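As a quick numerical sketch (the test function {f(x) = \exp(2ix)} is my own choice, not from the text): the coefficients {\widehat f(n) = \frac{1}{2\pi}\int f(x)\exp(-inx)\,dx} can be computed by the rectangle rule on a uniform grid, which is essentially exact for smooth periodic integrands.

```python
import numpy as np

N = 4096
x = -np.pi + 2 * np.pi * np.arange(N) / N      # uniform grid on [-pi, pi)
f = np.exp(2j * x)                              # f(x) = exp(2ix), so f_hat(2) = 1

def coeff(n):
    # rectangle-rule approximation of (1/2pi) * integral f(x) exp(-inx) dx
    return np.mean(f * np.exp(-1j * n * x))

assert abs(coeff(2) - 1) < 1e-10               # the only nonzero coefficient
assert all(abs(coeff(n)) < 1e-10 for n in (-1, 0, 1, 3))
```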

2.4. Summary

We summarize our various flavors of Fourier analysis in the following table.

\displaystyle  \begin{array}{llll} \hline \text{Type} & \text{Physical var} & \text{Frequency var} & \text{Basis functions} \\ \hline \textbf{Binary} & \{\pm1\}^n & \text{Subsets } S \subseteq \left\{ 1, \dots, n \right\} & \prod_{s \in S} x_s \\ \textbf{Finite group} & Z & \xi \in Z \text{ (choice of } \cdot\text{)} & e(\xi \cdot x) \\ \textbf{Fourier series} & \mathbb T \text{ or } [-\pi, \pi] & n \in \mathbb Z & \exp(inx) \\ \end{array}

In fact, we will soon see that all these examples are subsumed by Pontryagin duality for compact groups {G}.

3. Parseval and friends

The notion of an orthonormal basis makes several “big-name” results in Fourier analysis quite lucid. Basically, we can take every result from Proposition 1, translate it into the context of our Fourier analysis, and get a big-name result.

Corollary 8 (Parseval theorem)

Let {f : Z \rightarrow \mathbb C}, where {Z} is a finite abelian group. Then

\displaystyle  \sum_\xi |\widehat f(\xi)|^2 = \frac{1}{|Z|} \sum_{x \in Z} |f(x)|^2.

Similarly, if {f : [-\pi, \pi] \rightarrow \mathbb C} is square-integrable then its Fourier series satisfies

\displaystyle  \sum_n |\widehat f(n)|^2 = \frac{1}{2\pi} \int_{[-\pi, \pi]} |f(x)|^2.

Proof: Recall that {\left< f,f\right>} is equal to the sum {\sum_\xi |a_\xi|^2} of the squared coefficient magnitudes. \Box

Corollary 9 (Formulas for {\widehat f})

Let {f : Z \rightarrow \mathbb C}, where {Z} is a finite abelian group. Then

\displaystyle  \widehat f(\xi) = \frac{1}{|Z|} \sum_{x \in Z} f(x) \overline{e_\xi(x)}.

Similarly, if {f : [-\pi, \pi] \rightarrow \mathbb C} is square-integrable then its Fourier series is given by

\displaystyle  \widehat f(n) = \frac{1}{2\pi} \int_{[-\pi, \pi]} f(x) \exp(-inx).

Proof: Recall that in an orthonormal basis {(e_\xi)_\xi}, the coefficient of {e_\xi} in {f} is {\left< f, e_\xi\right>}. \Box
Note in particular what happens if we select {\xi = 0} in the above!

Corollary 10 (Plancherel theorem)

Let {f : Z \rightarrow \mathbb C}, where {Z} is a finite abelian group. Then

\displaystyle  \left< f,g \right> = \sum_{\xi \in Z} \widehat f(\xi) \overline{\widehat g(\xi)}.

Similarly, if {f : [-\pi, \pi] \rightarrow \mathbb C} is square-integrable then

\displaystyle  \left< f,g \right> = \sum_n \widehat f(n) \overline{\widehat g(n)}.

Proof: Guess! \Box
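These identities are easy to check numerically for a finite group. A sketch on {Z = \mathbb Z/7\mathbb Z}, using the convention {\widehat f(k) = \frac{1}{|Z|}\sum_x f(x)\,\overline{e_k(x)}}, i.e. the unnormalized DFT divided by {|Z|} (random test vectors are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 7
f = rng.normal(size=N) + 1j * rng.normal(size=N)
g = rng.normal(size=N) + 1j * rng.normal(size=N)

f_hat = np.fft.fft(f) / N       # normalized transform on Z/7Z
g_hat = np.fft.fft(g) / N

inner = lambda u, v: np.mean(u * v.conj())    # <u,v> = (1/|Z|) sum u conj(v)

# Parseval: sum |f_hat|^2 equals the average of |f|^2.
assert np.isclose(np.sum(np.abs(f_hat) ** 2), inner(f, f).real)
# Plancherel: <f,g> = sum f_hat conj(g_hat).
assert np.isclose(inner(f, g), np.sum(f_hat * g_hat.conj()))
```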

4. (Optional) Arrow’s Impossibility Theorem

As an application, we now prove a form of Arrow’s theorem. Consider {n} voters voting among {3} candidates {A}, {B}, {C}. Each voter specifies a tuple {v_i = (x_i, y_i, z_i) \in \{\pm1\}^3} as follows:

  • {x_i = 1} if voter {i} ranks {A} ahead of {B}, and {x_i = -1} otherwise.
  • {y_i = 1} if voter {i} ranks {B} ahead of {C}, and {y_i = -1} otherwise.
  • {z_i = 1} if voter {i} ranks {C} ahead of {A}, and {z_i = -1} otherwise.

Tacitly, we only consider {3! = 6} possibilities for {v_i}: we forbid “paradoxical” votes of the form {x_i = y_i = z_i} by assuming that people’s votes are consistent (meaning the preferences are transitive).

Then, we can consider a voting mechanism

\displaystyle  \begin{aligned} f : \{\pm1\}^n &\rightarrow \{\pm1\} \\ g : \{\pm1\}^n &\rightarrow \{\pm1\} \\ h : \{\pm1\}^n &\rightarrow \{\pm1\} \end{aligned}

such that {f(x_\bullet)} is the global preference of {A} vs. {B}, {g(y_\bullet)} is the global preference of {B} vs. {C}, and {h(z_\bullet)} is the global preference of {C} vs. {A}. We’d like to avoid situations where the global preference {(f(x_\bullet), g(y_\bullet), h(z_\bullet))} is itself paradoxical.

In fact, we will prove the following theorem:

Theorem 11 (Arrow Impossibility Theorem)

Assume that {(f,g,h)} always avoids paradoxical outcomes, and assume {\mathbf E f = \mathbf E g = \mathbf E h = 0}. Then {(f,g,h)} is either a dictatorship or anti-dictatorship: there exists a “dictator” {k} such that

\displaystyle  f(x_\bullet) = \pm x_k, \qquad g(y_\bullet) = \pm y_k, \qquad h(z_\bullet) = \pm z_k

where all three signs coincide.

The “independence of irrelevant alternatives” is reflected in the setup itself: the global preference of {A} vs. {B} depends only on the {x_\bullet}, and similarly for the other two pairs. The assumption {\mathbf E f = \mathbf E g = \mathbf E h = 0} provides symmetry (and e.g. excludes the possibility that {f}, {g}, {h} are constant functions which ignore voter input). Unlike the usual Arrow theorem, we do not assume that {f(+1, \dots, +1) = +1} (hence the possibility of anti-dictatorship).

To this end, we actually prove the following result:

Lemma 12

Assume the {n} voters vote independently at random among the {3! = 6} possibilities. The probability of a paradoxical outcome is exactly

\displaystyle  \frac14 + \frac14 \sum_{S \subseteq \{1, \dots, n\}} \left( -\frac13 \right)^{\left\lvert S \right\rvert} \left( \widehat f(S) \widehat g(S) + \widehat g(S) \widehat h(S) + \widehat h(S) \widehat f(S) \right) .

Proof: Define the Boolean function {D : \{\pm 1\}^3 \rightarrow \mathbb R} by

\displaystyle  D(a,b,c) = ab + bc + ca = \begin{cases} 3 & a,b,c \text{ all equal} \\ -1 & a,b,c \text{ not all equal}. \end{cases}

Thus paradoxical outcomes arise exactly when {D(f(x_\bullet), g(y_\bullet), h(z_\bullet)) = 3}. Now, expanding {f}, {g}, {h} into Fourier series and using linearity of expectation, we compute for randomly selected {x_\bullet}, {y_\bullet}, {z_\bullet} that

\displaystyle  \begin{aligned} \mathbf E D(f(x_\bullet), g(y_\bullet), h(z_\bullet)) &= \sum_S \sum_T \Big( \widehat f(S) \widehat g(T) \, \mathbf E\left[ \chi_S(x_\bullet)\chi_T(y_\bullet) \right] + \widehat g(S) \widehat h(T) \, \mathbf E\left[ \chi_S(y_\bullet)\chi_T(z_\bullet) \right] \\ &\qquad\qquad + \widehat h(S) \widehat f(T) \, \mathbf E\left[ \chi_S(z_\bullet)\chi_T(x_\bullet) \right] \Big). \end{aligned}

Now we observe that (by symmetry it suffices to analyze the first expectation; the other two behave identically):

  • If {S \neq T}, then {\mathbf E \chi_S(x_\bullet) \chi_T(y_\bullet) = 0}, since if say {s \in S}, {s \notin T} then {x_s} affects the parity of the product with 50% either way, and is independent of any other variables in the product.
  • On the other hand, suppose {S = T}. Then

    \displaystyle  \chi_S(x_\bullet) \chi_T(y_\bullet) = \prod_{s \in S} x_sy_s.

    Note that {x_sy_s} is equal to {1} with probability {\frac13} and {-1} with probability {\frac23} (since {(x_s, y_s, z_s)} is uniform from {3!=6} choices, which we can enumerate). From this an inductive calculation on {|S|} gives that

    \displaystyle  \prod_{s \in S} x_sy_s = \begin{cases} +1 & \text{ with probability } \frac{1}{2}(1+(-1/3)^{|S|}) \\ -1 & \text{ with probability } \frac{1}{2}(1-(-1/3)^{|S|}). \end{cases}

    Thus

    \displaystyle  \mathbf E \left( \prod_{s \in S} x_sy_s \right) = \left( -\frac13 \right)^{|S|}.

Piecing this all together, we now have that

\displaystyle  \mathbf E D(f(x_\bullet), g(y_\bullet), h(z_\bullet)) = \sum_{S \subseteq \{1, \dots, n\}} \left( \widehat f(S) \widehat g(S) + \widehat g(S) \widehat h(S) + \widehat h(S) \widehat f(S) \right) \left( -\frac13 \right)^{|S|}.

Then, we obtain that

\displaystyle  \begin{aligned} &\mathbf E \frac14 \left( 1 + D(f(x_\bullet), g(y_\bullet), h(z_\bullet)) \right) \\ =& \frac14 + \frac14\sum_S \left( \widehat f(S) \widehat g(S) + \widehat g(S) \widehat h(S) + \widehat h(S) \widehat f(S) \right) \left( -\frac13 \right)^{|S|}. \end{aligned}

Comparing this with the definition of {D} gives the desired result. \Box

Now for the proof of the main theorem. Since paradoxical outcomes never occur, the probability in Lemma 12 equals zero, which rearranges to

\displaystyle  1 = \sum_{S \subseteq \{1, \dots, n\}} -\left( -\frac13 \right)^{\left\lvert S \right\rvert} \left( \widehat f(S) \widehat g(S) + \widehat g(S) \widehat h(S) + \widehat h(S) \widehat f(S) \right).

But now we can just use weak inequalities. We have {\widehat f(\varnothing) = \mathbf E f = 0} and similarly for {\widehat g} and {\widehat h}, so we may restrict attention to {|S| \ge 1}. We then use the famous inequality {|ab+bc+ca| \le a^2+b^2+c^2} (which holds for all real numbers) to deduce that

\displaystyle  \begin{aligned} 1 &= \sum_{|S| \ge 1} -\left( -\frac13 \right)^{\left\lvert S \right\rvert} \left( \widehat f(S) \widehat g(S) + \widehat g(S) \widehat h(S) + \widehat h(S) \widehat f(S) \right) \\ &\le \sum_{|S| \ge 1} \left( \frac13 \right)^{\left\lvert S \right\rvert} \left( \widehat f(S)^2 + \widehat g(S)^2 + \widehat h(S)^2 \right) \\ &\le \sum_{|S| \ge 1} \left( \frac13 \right)^1 \left( \widehat f(S)^2 + \widehat g(S)^2 + \widehat h(S)^2 \right) \\ &= \frac13 (1+1+1) = 1. \end{aligned}

with the last step by Parseval. So all inequalities must be sharp, and in particular {\widehat f}, {\widehat g}, {\widehat h} are supported on one-element sets, i.e. they are linear in inputs. As {f}, {g}, {h} are {\pm 1} valued, each {f}, {g}, {h} is itself either a dictator or anti-dictator function. Since {(f,g,h)} is always consistent, this implies the final result.
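Lemma 12 itself can be sanity-checked by brute force for a small voting rule. The sketch below takes {n = 3} voters with majority rule {f = g = h = \mathrm{Maj}_3} (my choice of example, not from the text); since {\widehat{\mathrm{Maj}_3}} is {\frac12} on singletons and {-\frac12} on {\{1,2,3\}}, the lemma predicts a paradox probability of {\frac14 + \frac14\left(-\frac34 - \frac1{36}\right) = \frac1{18}}, which matches exact enumeration of all {6^3 = 216} profiles.

```python
from itertools import permutations, product

def vote_tuple(ranking):
    # ranking is a string permutation of 'ABC'; return (x, y, z) as in the text
    x = 1 if ranking.index('A') < ranking.index('B') else -1
    y = 1 if ranking.index('B') < ranking.index('C') else -1
    z = 1 if ranking.index('C') < ranking.index('A') else -1
    return x, y, z

maj = lambda a, b, c: 1 if a + b + c > 0 else -1   # majority of three +-1 votes

rankings = [''.join(p) for p in permutations('ABC')]
paradoxes = total = 0
for profile in product(rankings, repeat=3):        # all 6^3 consistent profiles
    xs, ys, zs = zip(*(vote_tuple(r) for r in profile))
    F, G, H = maj(*xs), maj(*ys), maj(*zs)
    total += 1
    paradoxes += (F == G == H)                     # global preferences cycle

assert total == 216 and paradoxes == 12            # probability exactly 1/18
```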

5. Pontryagin duality

In fact all the examples we have covered can be subsumed as special cases of Pontryagin duality, where we replace the domain with a general group {G}. In what follows, we assume {G} is a locally compact abelian (LCA) group, which just means that:

  • {G} is an abelian topological group,
  • the topology on {G} is Hausdorff, and
  • the topology on {G} is locally compact: every point of {G} has a compact neighborhood.

Notice that our previous examples fall into this category:

Example 13 (Examples of locally compact abelian groups)

  • Any finite abelian group {Z} with the discrete topology is LCA (and compact).
  • The circle group {\mathbb T} is LCA and also in fact compact.
  • The real numbers {\mathbb R} are an example of an LCA group which is not compact.

5.1. The Pontryagin dual

The key definition is:

Definition 14

Let {G} be an LCA group. Then its Pontryagin dual is the abelian group

\displaystyle  \widehat G \overset{\mathrm{def}}{=} \left\{ \text{continuous group homomorphisms } \xi : G \rightarrow \mathbb T \right\}.

The maps {\xi} are called characters. By equipping it with the compact-open topology, we make {\widehat G} into an LCA group as well.

Example 15 (Examples of Pontryagin duals)

  • {\widehat{\mathbb Z} \cong \mathbb T}.
  • {\widehat{\mathbb T} \cong \mathbb Z}. The characters are given by {\theta \mapsto n\theta} for {n \in \mathbb Z}.
  • {\widehat{\mathbb R} \cong \mathbb R}. This is because a nonzero continuous homomorphism {\mathbb R \rightarrow S^1} is determined by the fiber above {1 \in S^1}. (Covering projections, anyone?)
  • {\widehat{\mathbb Z/n\mathbb Z} \cong \mathbb Z/n\mathbb Z}, characters {\xi} being determined by the image {\xi(1) \in \mathbb T}.
  • {\widehat{G \times H} \cong \widehat G \times \widehat H}.
  • If {Z} is a finite abelian group, then the previous two examples (and the structure theorem for abelian groups) imply that {\widehat{Z} \cong Z}, though not canonically. You may now recognize that the bilinear form {\cdot : Z \times Z \rightarrow \mathbb T} is exactly a choice of isomorphism {Z \rightarrow \widehat Z}.
  • For any group {G}, the dual of {\widehat G} is canonically isomorphic to {G}, id est there is a natural isomorphism

    \displaystyle  G \cong \widehat{\widehat G} \qquad \text{by} \qquad x \mapsto \left( \xi \mapsto \xi(x) \right).

    This is the Pontryagin duality theorem. (It is analogous to the isomorphism {(V^\vee)^\vee \cong V} for finite-dimensional vector spaces {V}.)

5.2. The orthonormal basis in the compact case

Now assume {G} is LCA but also compact, and thus has a unique Haar measure {\mu} such that {\mu(G) = 1}; this lets us integrate over {G}. Let {L^2(G)} be the space of square-integrable functions to {\mathbb C}, i.e.

\displaystyle  L^2(G) = \left\{ f : G \rightarrow \mathbb C \quad\text{such that}\quad \int_G |f|^2 \; d\mu < \infty \right\}.

Thus we can equip it with the inner form

\displaystyle  \left< f,g \right> = \int_G f\overline{g} \; d\mu.

In that case, we get all the results we wanted before:

Theorem 16 (Characters of {\widehat G} form an orthonormal basis)

Assume {G} is LCA and compact. Then {\widehat G} is discrete, and the characters

\displaystyle  (e_\xi)_{\xi \in \widehat G} \qquad\text{by}\qquad e_\xi(x) = e(\xi(x)) = \exp(2\pi i \xi(x))

form an orthonormal basis of {L^2(G)}. Thus for each {f \in L^2(G)} we have

\displaystyle  f = \sum_{\xi \in \widehat G} \widehat f(\xi) e_\xi

where

\displaystyle  \widehat f(\xi) = \left< f, e_\xi \right> = \int_G f(x) \exp(-2\pi i \xi(x)) \; d\mu.

The sum {\sum_{\xi \in \widehat G}} makes sense since {\widehat G} is discrete. In particular,

  • Letting {G = Z} gives “Fourier transform on finite groups”.
  • The special case {G = \mathbb Z/n\mathbb Z} has its own Wikipedia page.
  • Letting {G = \mathbb T} gives the “Fourier series” earlier.

5.3. The Fourier transform in the non-compact case

If {G} is LCA but not compact, then Theorem 16 becomes false. On the other hand, it is still possible to define a transform, but one needs to be a little more careful. The generic example to keep in mind in what follows is {G = \mathbb R}.

In what follows, we fix a Haar measure {\mu} for {G}. (This {\mu} is now unique only up to scaling: since {\mu(G) = \infty}, we can no longer normalize it by requiring {\mu(G) = 1}.)

One considers this time the space {L^1(G)} of absolutely integrable functions. Then one directly defines the Fourier transform of {f \in L^1(G)} to be

\displaystyle  \widehat f(\xi) = \int_G f \overline{e_\xi} \; d\mu

imitating the previous definitions in the absence of an inner product. This {\widehat f} may not be {L^1}, but it is at least bounded. Then we manage to at least salvage:

Theorem 17 (Fourier inversion on {L^1(G)})

Take an LCA group {G} and fix a Haar measure {\mu} on it. One can select a unique dual measure {\widehat \mu} on {\widehat G} such that whenever {f \in L^1(G)} and {\widehat f \in L^1(\widehat G)}, the “Fourier inversion formula”

\displaystyle  f(x) = \int_{\widehat G} \widehat f(\xi) e_\xi(x) \; d\widehat\mu.

holds almost everywhere. It holds everywhere if {f} is continuous.

Notice the extra nuance of having to select measures, because it is no longer the case that {G} has a single distinguished measure.
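For {G = \mathbb R} with Lebesgue measure, the Gaussian {f(x) = \exp(-\pi x^2)} is its own transform under the convention {\widehat f(\xi) = \int f(x)\, e(-\xi x)\, dx}; this is easy to verify numerically. A sketch (the truncation to {[-10, 10]} and the grid spacing are my numerical choices):

```python
import numpy as np

# The Gaussian exp(-pi x^2) is self-dual: f_hat(xi) = exp(-pi xi^2).
x = np.linspace(-10, 10, 20001)     # the tails beyond |x| = 10 are negligible
dx = x[1] - x[0]
f = np.exp(-np.pi * x ** 2)

def f_hat(xi):
    # Riemann-sum approximation of integral f(x) exp(-2 pi i xi x) dx
    return np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx

for xi in (0.0, 0.5, 1.3):
    assert abs(f_hat(xi) - np.exp(-np.pi * xi ** 2)) < 1e-8
```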

Despite the fact that the {e_\xi} no longer form an orthonormal basis, the transformed function {\widehat f : \widehat G \rightarrow \mathbb C} is still often useful. In particular, the transform has special names for a few special {G}, recorded in the summary table below.

5.4. Summary

In summary,

  • Given any LCA group {G}, we can transform sufficiently nice functions on {G} into functions on {\widehat G}.
  • If {G} is compact, then we have the nicest situation possible: {L^2(G)} is an inner product space with {\left< f,g \right> = \int_G f \overline{g} \; d\mu}, and the {e_\xi} form an orthonormal basis as {\xi} ranges over {\widehat G}.
  • If {G} is not compact, then we no longer get an orthonormal basis or even an inner product space, but it is still possible to define the transform

    \displaystyle  \widehat f : \widehat G \rightarrow \mathbb C

    for {f \in L^1(G)}. If {\widehat f} is also in {L^1(\widehat G)} we still get a “Fourier inversion formula” expressing {f} in terms of {\widehat f}.

We summarize our various flavors of Fourier analysis for various {G} in the following. In the first half {G} is compact, in the second half {G} is not.

\displaystyle  \begin{array}{llll} \hline \text{Name} & \text{Domain }G & \text{Dual }\widehat G & \text{Characters} \\ \hline \textbf{Binary Fourier analysis} & \{\pm1\}^n & S \subseteq \left\{ 1, \dots, n \right\} & \prod_{s \in S} x_s \\ \textbf{Fourier transform on finite groups} & Z & \xi \in \widehat Z \cong Z & e(\xi \cdot x) \\ \textbf{Discrete Fourier transform} & \mathbb Z/n\mathbb Z & \xi \in \mathbb Z/n\mathbb Z & e(\xi x / n) \\ \textbf{Fourier series} & \mathbb T \cong [-\pi, \pi] & n \in \mathbb Z & \exp(inx) \\ \hline \textbf{Continuous Fourier transform} & \mathbb R & \xi \in \mathbb R & e(\xi x) \\ \textbf{Discrete time Fourier transform} & \mathbb Z & \xi \in \mathbb T \cong [-\pi, \pi] & \exp(i \xi n) \\ \end{array}

You might notice that the various names are awful. This is part of the reason I got confused as a high school student: every type of transform above has its own Wikipedia article. If it were up to me, we would just use the term “{G}-Fourier transform”, and that would make everyone’s lives a lot easier.

6. Peter-Weyl

In fact, if {G} is a Lie group, even if {G} is not abelian we can still give an orthonormal basis of {L^2(G)} (the square-integrable functions on {G}). It turns out in this case the characters are attached to complex irreducible representations of {G} (and in what follows all representations are complex).

The result is given by the Peter-Weyl theorem. First, we need the following result:

Lemma 18 (Compact Lie groups have unitary reps)

Any finite-dimensional (complex) representation {V} of a compact Lie group {G} is unitary, meaning it can be equipped with a {G}-invariant inner form. Consequently, {V} is completely reducible: it splits into the direct sum of irreducible representations of {G}.

Proof: Suppose {B : V \times V \rightarrow \mathbb C} is any inner product. Equip {G} with a right-invariant Haar measure {dg}. Then we can equip it with an “averaged” inner form

\displaystyle  \widetilde B(v,w) = \int_G B(gv, gw) \; dg.

Then {\widetilde B} is the desired {G}-invariant inner form. Now, the fact that {V} is completely reducible follows from the fact that given a subrepresentation of {V}, its orthogonal complement is also a subrepresentation. \Box

The Peter-Weyl theorem then asserts that the finite-dimensional irreducible unitary representations essentially give an orthonormal basis for {L^2(G)}, in the following sense. Let {V = (V, \rho)} be such a representation of {G}, and fix an orthonormal basis of {e_1}, \dots, {e_d} for {V} (where {d = \dim V}). The {(i,j)}th matrix coefficient for {V} is then given by

\displaystyle  G \xrightarrow{\rho} \mathop{\mathrm{GL}}(V) \xrightarrow{\pi_{ij}} \mathbb C

where {\pi_{ij}} is the projection onto the {(i,j)}th entry of the matrix. We abbreviate {\pi_{ij} \circ \rho} to {\rho_{ij}}. Then the theorem is:

Theorem 19 (Peter-Weyl)

Let {G} be a compact Lie group. Let {\Sigma} denote the (pairwise non-isomorphic) irreducible finite-dimensional unitary representations of {G}. Then

\displaystyle  \left\{ \sqrt{\dim V} \rho_{ij} \; \Big\vert \; (V, \rho) \in \Sigma, \text{ and } 1 \le i,j \le \dim V \right\}

is an orthonormal basis of {L^2(G)}.

Strictly, I should say {\Sigma} is a set of representatives of the isomorphism classes of irreducible unitary representations, one for each isomorphism class.

In the special case where {G} is abelian, all irreducible representations are one-dimensional. A one-dimensional representation of {G} is a map {G \rightarrow \mathop{\mathrm{GL}}(\mathbb C) \cong \mathbb C^\times}, and the unitary condition implies its image lies in {S^1 \cong \mathbb T}; that is, it is an element of {\widehat G}.
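Theorem 19 can be verified by hand for the smallest nonabelian example, {G = S_3} (finite, hence compact). The sketch below builds the {2}-dimensional irrep as the action on the plane {x+y+z=0} (this concrete model is my choice); together with the trivial and sign characters, the scaled matrix coefficients {\sqrt{2}\,\rho_{ij}} give {1 + 1 + 4 = 6} orthonormal functions, a basis of {L^2(S_3)}.

```python
import numpy as np
from itertools import permutations

perms = list(permutations(range(3)))          # the 6 elements of S_3

def perm_matrix(p):
    M = np.zeros((3, 3))
    for i, pi in enumerate(p):
        M[pi, i] = 1                           # M sends e_i to e_{p(i)}
    return M

def sign(p):
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s                         # count inversions
    return s

# Orthonormal basis of the invariant plane x + y + z = 0 inside R^3.
A = np.array([[1.0, -1.0, 0.0], [1.0, 1.0, -2.0]])
B = A.T / np.linalg.norm(A, axis=1)            # shape (3, 2), orthonormal columns
rho = {p: B.T @ perm_matrix(p) @ B for p in perms}   # the 2-dim irrep, as 2x2 matrices

# The six functions S_3 -> C: trivial, sign, and sqrt(2) * rho_ij.
funcs = [np.ones(6), np.array([sign(p) for p in perms], dtype=float)]
for i in range(2):
    for j in range(2):
        funcs.append(np.sqrt(2) * np.array([rho[p][i, j] for p in perms]))

# Orthonormality under <u,v> = (1/|G|) sum_g u(g) conj(v(g)).
Gram = np.array([[np.mean(u * np.conj(v)) for v in funcs] for u in funcs])
assert np.allclose(Gram, np.eye(6), atol=1e-12)
```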


Tannakian Reconstruction

These notes are from the February 23, 2016 lecture of 18.757, Representations of Lie Algebras, taught by Laura Rider.

Fix a field {k} and let {G} be a finite group. In this post we will show that one can reconstruct the group {G} from the monoidal category of {k[G]}-modules (i.e. its {G}-representations).

1. Hopf algebras

We won’t do anything with Hopf algebras per se, but it will be convenient to have the language.

Recall that an associative {k}-algebra is a {k}-vector space {A} equipped with a multiplication map {m : A \otimes A \rightarrow A} and a unit {i : k \hookrightarrow A}, satisfying certain axioms.

Then a {k}-coalgebra is a {k}-vector space {A} equipped with maps

\displaystyle \Delta : A \rightarrow A \otimes A \qquad \varepsilon : A \rightarrow k

called comultiplication and counit respectively, which satisfy the dual axioms. See https://en.wikipedia.org/wiki/Coalgebra.

Now a Hopf algebra {A} is a bialgebra over {k} (simultaneously an algebra and a coalgebra, compatibly) plus a so-called antipode {S : A \rightarrow A}. We require that the diagram

rep-tan-1

commutes.

Given a Hopf algebra {A}, a group-like element of {A} is an element of

\displaystyle G = \left\{ x \in A \mid \Delta(x) = x \otimes x \text{ and } \varepsilon(x) = 1 \right\}.

(The condition {\varepsilon(x) = 1} just rules out {x = 0}.)

Exercise 1

Show that {G} is a group with multiplication {m} and inversion {S}.

Now for the main example:

Example 2 (Group algebra is Hopf algebra)

The group algebra {k[G]} is a Hopf algebra with

  • {m}, {i} as expected.
  • {\varepsilon} the counit is the trivial representation.
  • {\Delta} comes form {g \mapsto g \otimes g} extended linearly.
  • {S} takes {g \mapsto g^{-1}} extended linearly.

 

Theorem 3

The group-like elements are precisely the basis elements {1_k \cdot g \in k[G]}.

 

Proof: Assume {v = \sum_{g \in G} a_g g} is grouplike. Then by assumption we should have

\displaystyle \sum_{g \in G} a_g (g \otimes g) = \Delta(v) = \sum_{g \in G} \sum_{h \in G} a_ga_h (g \otimes h).

Comparing each coefficient, we get that

\displaystyle a_ga_h = \begin{cases} a_g & g = h \\ 0 & \text{otherwise}. \end{cases}

Since {v \neq 0}, this can only occur if exactly one {a_g} equals {1} and the remaining coefficients are all zero. \Box
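Theorem 3 is easy to check computationally for a small group. A sketch for {G = \mathbb Z/3} over {\mathbb Q} (representing {\sum a_g g} by its coefficient vector, with the convention that grouplike elements are nonzero; the coefficient grid searched is of course just a finite sample):

```python
from itertools import product

# An element of Q[Z/3] is a coefficient vector (a_0, a_1, a_2);
# Delta(sum a_g g) = sum a_g (g x g), while v x v = sum a_g a_h (g x h).
def is_grouplike(a):
    n = len(a)
    # compare coefficients of g x h on both sides: a_g a_h vs (a_g if g == h else 0)
    coeffs_match = all(
        abs(a[g] * a[h] - (a[g] if g == h else 0)) < 1e-12
        for g in range(n) for h in range(n)
    )
    nonzero = any(abs(c) > 1e-12 for c in a)
    return coeffs_match and nonzero

grouplikes = [a for a in product([-1, 0, 1, 2], repeat=3) if is_grouplike(a)]
# The only grouplike elements found are the basis vectors 1*g:
assert sorted(grouplikes) == [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```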

 

2. Monoidal functors

Recall that a monoidal category (or tensor category) is a category {\mathscr C} equipped with a functor {\otimes : \mathscr C \times \mathscr C \rightarrow \mathscr C} which has an identity {I} and satisfies certain coherence conditions. For example, for any {A,B,C \in \mathscr C} we should have a natural isomorphism

\displaystyle A \otimes (B \otimes C) \xrightarrow{a_{A,B,C}} (A \otimes B) \otimes C.

The generic example is of course suggested by the notation: vector spaces over {k}, abelian groups, or more generally modules/algebras over a ring {R}.

Now take two monoidal categories {(\mathscr C, \otimes_\mathscr C)} and {(\mathscr D, \otimes_\mathscr D)}. Then a monoidal functor {F : \mathscr C \rightarrow \mathscr D} is a functor for which we additionally select a natural isomorphism

\displaystyle F(A \otimes B) \xrightarrow{t_{A,B}} F(A) \otimes F(B).

We then require that the diagram

rep-tan-2

commutes, plus some additional compatibility conditions with the identities of the {\otimes}‘s (see Wikipedia for the list).

We also have a notion of a natural transformation of two functors {t : F \rightarrow G}; this is just making the squares

rep-tan-3

commute. Now, suppose {F : \mathscr C \rightarrow \mathscr C} is a monoidal functor. Then an automorphism of {F} is a natural transformation {t : F \rightarrow F} which is invertible, i.e. a natural isomorphism.

3. Application to {k[G]}

With this language, we now reach the main point of the post. Consider the category of {k[G]} modules endowed with the monoidal {\otimes} (which is just the tensor over {k}, with the usual group representation). We want to reconstruct {G} from this category.

Let {U} be the forgetful functor

\displaystyle U : \mathsf{Mod}_{k[G]} \rightarrow \mathsf{Vect}_k.

It’s easy to see this is in fact a monoidal functor. Now let {\text{Aut }^{\otimes}(U)} be the set of monoidal automorphisms of {U}.

The key claim is the following:

Theorem 4 ({G} is isomorphic to {\text{Aut }^\otimes(U)})

Consider the map

\displaystyle i : G \rightarrow \text{Aut }^\otimes(U) \quad\text{by}\quad g \mapsto T^g.

Here, the natural transformation {T^g} is defined by the components

\displaystyle T^g_{(V,\phi)} : U(V, \phi) \rightarrow U(V, \phi) = V \quad\text{by}\quad v \mapsto \phi(g) v.

Then {i} is an isomorphism of groups.

In particular, using only {\otimes} structure this exhibits an isomorphism {G \cong \text{Aut }^\otimes(U)}. Consequently this solves the problem proposed at the beginning of the lecture.

Proof: It’s easy to see {i} is a group homomorphism.

To see it’s injective, we show that for {g \neq 1_G} the transformation {T^g} is not the identity automorphism; i.e. we need to find some representation {(V, \phi)} on which {g} acts nontrivially. Now just take the regular representation, which is faithful!

The hard part is showing that it’s surjective. For this we want to reduce it to the regular representation.

Lemma 5

Any {T \in \text{Aut }^\otimes(U)} is completely determined by {T_{k[G]}(1_{k[G]}) \in k[G]}.

 

Proof: Let {(V, \phi)} be a representation of {G}. Then for all {v \in V}, we have a unique morphism of representations

\displaystyle f_v : k[G] \rightarrow (V, \phi) \quad\text{by}\quad 1_{k[G]} \mapsto v.

If we apply the forgetful functor to this, we have a diagram

[Diagram rep-tan-4: naturality square for {T} applied to {U(f_v)}]

Chasing {1_{k[G]}} around this square gives {T_{(V,\phi)}(v) = U(f_v)\left( T_{k[G]}(1_{k[G]}) \right)}, so {T} is indeed determined by {T_{k[G]}(1_{k[G]})}. \Box
Next, we claim

Lemma 6

{T_{k[G]}(1_{k[G]})} is a grouplike element of {k[G]}.

 

Proof: Draw the diagram

[Diagram rep-tan-5]

and note that it implies

\displaystyle \Delta(T_{k[G]}(1_{k[G]})) = T_{k[G]}(1_{k[G]}) \otimes T_{k[G]}(1_{k[G]}).

\Box
This implies surjectivity, by our earlier observation that grouplike elements in {k[G]} are exactly the elements of {G}. \Box

 

Representation Theory, Part 4: The Finite Regular Representation

Good luck to everyone taking the January TST for the IMO 2015 tomorrow!

Now that we have products of irreducibles under our belt, I’ll talk about the finite regular representation and use it to derive the following two results about irreducibles.

  1. The number of (isomorphism classes of) irreducibles {\rho_\alpha} is equal to the number of conjugacy classes of {G}.
  2. We have { \left\lvert G \right\rvert = \sum_\alpha \left( \dim \rho_\alpha \right)^2 }.

These will actually follow as corollaries from the complete decomposition of the finite regular representation.

In what follows {k} is an algebraically closed field, {G} is a finite group, and the characteristic of {k} does not divide {\left\lvert G \right\rvert}. As a reminder, here are the representations we’ve already seen in the order we met them, plus two new ones we’ll introduce properly below.

\displaystyle  	\begin{array}{|l|lll|} 		\hline 		\text{Representation} & \text{Group} & \text{Space} & \text{Action} \\ \hline 		\rho & G & V & g \cdot_\rho v \\ 		\text{Fun}(X) & G & \text{Fun}(X) & (g \cdot f)(x) = f(g^{-1} \cdot x) \\ 		\text{triv}_G & G & k & g \cdot a = a \\ 		\rho_1 \oplus \rho_2 & G & V_1 \oplus V_2 & g \cdot (v_1 + v_2) = (g \cdot_{\rho_1} v_1) + (g \cdot_{\rho_2} v_2) \\ 		\rho_1 \boxtimes \rho_2 & G_1 \times G_2 & V_1 \otimes V_2 & (g_1, g_2) \cdot (v_1 \otimes v_2) \\ 		&&& = (g_1 \cdot_{\rho_1} v_1) \otimes (g_2 \cdot_{\rho_2} v_2) \\ 		\text{Res}^G_H(\rho) & H & V & h \cdot v = h \cdot_\rho v\\ 		\rho_1 \otimes \rho_2 & G & V_1 \otimes V_2 & g \cdot (v_1 \otimes v_2) = (g \cdot_{\rho_1} v_1) \otimes (g \cdot_{\rho_2} v_2) \\ 		\text{Reg}(G) & G \times G & \text{Fun}(G) & \left( (g_1, g_2) \cdot f \right)(g) = f(g_2^{-1} g g_1) \\ 		\rho^\vee & G & V^\vee & (g \cdot \xi)(v) = \xi(g^{-1} \cdot_\rho v) \\ 		\hline 	\end{array}

1. The Regular Representation

Recall that {\mathrm{Fun}(G)} is the vector space of functions from {G} to {k}, with addition being defined canonically. It has a basis of functions {\delta_g} for each {g \in G}, where

\displaystyle  	\delta_g(x) 	= 	\begin{cases} 		1 & x = g \\ 		0 & \text{otherwise} 	\end{cases}

for every {x \in G}. (Throughout this post, I’ll be trying to use {x} to denote inputs to a function from {G} to {k}.)

Definition Let {G} be a finite group. Then the finite regular representation {\mathrm{Reg}(G)} is a representation of {G \times G} on the vector space {\mathrm{Fun}(G)}, with the following action for each {f \in \mathrm{Fun}(G)} and {(g_1, g_2) \in G \times G}:

\displaystyle  \left( ( g_1, g_2 ) \cdot f \right)(x) \overset{\text{def}}{=} f(g_2^{-1} x g_1).

Note that this is a representation of the product {G \times G}, not {G}! (As an aside, you can also define this representation for infinite groups {G} by replacing {\mathrm{Fun}(G)} with {\mathrm{Fun}_c(G)}, the functions which are nonzero at only finitely many {g \in G}.)

In any case, we can now make {\mathrm{Reg}(G)} into a representation of {G} by this restriction, giving {\mathrm{Res}_G^{G \times G} \left( \mathrm{Reg}(G) \right)}, which I will abbreviate as just {\mathrm{Reg}^\ast(G)} throughout this post (this is not a standard notation). The action for this is

\displaystyle  	(g \cdot_{\mathrm{Reg}^\ast(G)} f)(x) 	\overset{\text{def}}{=} 	\left( (g, g) \cdot_{\mathrm{Reg}(G)} f \right)(x) 	= f\left( g^{-1} x g \right).

Exercise Consider the invariant subspace of {\mathrm{Reg}^\ast(G)}, which is

\displaystyle  \left( \mathrm{Reg}^\ast(G) \right)^G 		= 		\left\{ f : G \rightarrow k \mid f(g^{-1} x g) = f(x) \; \forall x,g \in G \right\}.

Prove that the dimension of this space is equal to the number of conjugacy classes of {G}. (Look at the {\delta_g} basis.)

Recall that in general, the invariant subspace {\rho^G} is defined as

\displaystyle  \rho^G \overset{\text{def}}{=} \left\{ v \in V \mid g \cdot_\rho v = v \; \forall g \in G \right\}.
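A standard way to compute {\dim \rho^G} numerically is via the averaging projector {P = \frac{1}{|G|} \sum_g \rho(g)}, which projects onto {\rho^G}, so {\mathrm{tr}\, P = \dim \rho^G}. Here is a small sketch of my own (not from the original post) doing this for {\mathrm{Reg}^\ast(S_3)}; the answer is {3}, the number of conjugacy classes of {S_3}, confirming the exercise:

```python
import numpy as np
from itertools import permutations

# S3 as permutation tuples, acting on Fun(S3) by conjugation.
G = list(permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}

def compose(p, q):           # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    r = [0] * 3
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def rho(g):
    """Matrix of f -> (x -> f(g^{-1} x g)) in the delta basis.
    Since (g . delta_x)(y) = delta_x(g^{-1} y g), delta_x is sent to delta_{g x g^{-1}}."""
    M = np.zeros((6, 6))
    for x in G:
        M[idx[compose(compose(g, x), inverse(g))], idx[x]] = 1
    return M

# Averaging projector onto the invariant subspace; its trace is dim rho^G.
P = sum(rho(g) for g in G) / len(G)
print(round(np.trace(P)))  # 3 = number of conjugacy classes of S3
```

Note that {\mathrm{tr}\,\rho(g)} counts the {x} with {g^{-1}xg = x}, so the trace of {P} is {\frac{1}{|G|}\sum_g |Z(g)|}, which is the number of conjugacy classes by Burnside's lemma.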

2. Dual Representations

Before I can state the main theorem of this post, I need to define the dual representation.

Recall that given a vector space {V}, we define the dual space {V^\vee} by

\displaystyle  V^\vee \overset{\text{def}}{=} \mathrm{Hom}(V,k)

i.e. it is the set of linear maps from {V} to {k}. If {V} is finite-dimensional, we can think of this as follows: if {V} consists of the column vectors of length {m}, then {V^\vee} is the row vectors of length {m}, which can be multiplied onto elements of {V}. (This analogy breaks down for {V} infinite dimensional.) Recall that if {V} is finite-dimensional then there is a canonical isomorphism {V \simeq (V^\vee)^\vee} by the map {v \mapsto \mathrm{ev}_v}, where {\mathrm{ev}_v : V^\vee \rightarrow k} sends {\xi \mapsto \xi(v)}.

Now we can define the dual representation in a similar way.

Definition Let {\rho = (V, \cdot_\rho)} be a {G}-representation. Then we define the dual representation {\rho^\vee} by

\displaystyle  \rho^\vee = \left( V^\vee, \cdot_{\rho^\vee} \right) 		\quad\text{where}\quad 		\left( g \cdot_{\rho^\vee} \xi \right)(v) 		= \xi \left( g^{-1} \cdot_\rho v \right).

Lemma 1 If {\rho} is finite-dimensional then {(\rho^\vee)^\vee \simeq \rho} by the same isomorphism.

Proof: We want to check that the isomorphism {V = (V^\vee)^\vee} by {v \mapsto \mathrm{ev}_v} respects the action of {G}. That’s equivalent to checking

\displaystyle  \mathrm{ev}_{g \cdot_\rho v} = g \cdot_{(\rho^\vee)^\vee} \mathrm{ev}_v.

But

\displaystyle  		\mathrm{ev}_{g \cdot v}(\xi) 		= \xi(g \cdot_\rho v)

and

\displaystyle  		\left( g \cdot_{(\rho^\vee)^\vee} \mathrm{ev}_v \right)(\xi) 		= 		\mathrm{ev}_v(g^{-1} \cdot_{\rho^\vee} \xi) 		= \left( g^{-1} \cdot_{\rho^\vee} \xi \right)(v) 		= \xi(g \cdot_\rho v).

So the functions are indeed equal. \Box

Along with that lemma, we also have the following property.

Lemma 2 For any finite-dimensional {\rho_1}, {\rho_2} we have {\mathrm{Hom}_G(\rho_1, \rho_2) \simeq \mathrm{Hom}_G(\rho_1 \otimes \rho_2^\vee, \text{triv}_G)}.

Proof: Let {\rho_1 = (V_1, \cdot_{\rho_1})} and {\rho_2 = (V_2, \cdot_{\rho_2})}. We already know that we have an isomorphism of vector spaces

\displaystyle  		\mathrm{Hom}_{\textbf{Vect}}(V_1, V_2) 		\simeq \mathrm{Hom}_{\textbf{Vect}} (V_1 \otimes V_2^\vee, k)

by sending each {T \in \mathrm{Hom}_{\textbf{Vect}}(V_1, V_2)} to the map {T' \in \mathrm{Hom}_{\textbf{Vect}} (V_1 \otimes V_2^\vee, k)} which has {T'(v \otimes \xi) = \xi(T(v))}. So the point is to check that {T} respects the {G}-action if and only if {T'} does. This is just a computation. \Box

You can deduce as a corollary the following.

Exercise Use the lemma to show {\mathrm{Hom}_G(\rho, \text{triv}_G) \simeq \mathrm{Hom}_G(\text{triv}_G, \rho^\vee)}.

Finally, we want to talk about when {\rho^\vee} is irreducible. The main result is the following.

Lemma 3 Consider a representation {\rho}, not necessarily finite-dimensional. If {\rho^\vee} is irreducible then so is {\rho}.

When {\rho} is finite dimensional we have {(\rho^\vee)^\vee \simeq \rho}, and so it is true for finite-dimensional irreducible {\rho} that {\rho^\vee} is also irreducible. Interestingly, this result fails for infinite-dimensional spaces as this math.SE thread shows.

Proof: Let {\rho = (V, \cdot_\rho)}. Let {W} be a {\rho}-invariant subspace of {V}. Then consider

\displaystyle  W^\perp = \left\{ \xi \in V^\vee : \xi(w) = 0 \; \forall w \in W \right\}.

This is a {\rho^\vee}-invariant subspace of {V^\vee}, so since {\rho^\vee} is irreducible, either {W^\perp = V^\vee} or {W^\perp = \{0\}}. You can check that these imply {W=0} and {W=V}, respectively. \Box

3. Main Result

Now that we know about the product of representations and dual modules, we can state the main result of this post: the complete decomposition of {\mathrm{Reg}(G)}.

Theorem 4 We have an isomorphism

\displaystyle  		\mathrm{Reg}(G) \simeq 		\bigoplus_{\alpha} \rho_\alpha \boxtimes \rho_\alpha^\vee.

Before we can begin the proof of the theorem we need one more lemma.

Lemma 5 Let {\pi} be a representation of {G \times G}. Then there is an isomorphism

\displaystyle  \mathrm{Hom}_{G \times G}(\pi, \mathrm{Reg}(G)) 		\simeq \mathrm{Hom}_G(\mathrm{Res}^{G \times G}_G(\pi), \text{triv}_G).

Proof: Let {\pi = (V, \cdot_\pi)}. Given a map {T : V \rightarrow \mathrm{Fun}(G)} which respects the {G \times G} action, we send it to the map {\xi_T : V \rightarrow k} with {\xi_T(v) = T(v)(1)}. Conversely, given a map {\xi : V \rightarrow k} which respects the {G} action, we send it to the map {T_\xi : V \rightarrow \mathrm{Fun}(G)} so that {T_\xi(v)(x) = \xi\left( (x,1) \cdot v \right)}.

Some very boring calculations show that the two maps are mutually inverse and respect the action. We’ll just do one of them here: let us show that {\xi_T(v)} respects the {G} action given that {T} respects the {G \times G} action. We want to prove

\displaystyle  \xi_T\left( (g,g) \cdot_\pi v \right) 	= g \cdot_\text{triv} \xi_T(v) = \xi_T(v).

Using the definition of {\xi_T}

\displaystyle  		\begin{aligned} 		\xi_T\left( (g,g) \cdot_\pi v \right) 		&= T\left( (g,g) \cdot_\pi v \right)(1) \\ 		&= \left( (g,g) \cdot_{\mathrm{Fun}(G)} T(v) \right)(1) \\ 		&= T(v)\left( g^{-1} 1 g \right) = T(v)(1) = \xi_T(v). 		\end{aligned}

The remaining computations are left to a very diligent reader. \Box

Now let’s prove the main theorem!

Proof: We have that {\mathrm{Reg}(G)} is the direct sum of finite-dimensional irreducibles {\rho_\alpha \boxtimes \rho_\beta}, meaning

\displaystyle  		\mathrm{Reg}(G) = 		\bigoplus_{\alpha, \beta} 		\left( \rho_\alpha \boxtimes \rho_\beta \right) 		\otimes 		\mathrm{Hom}_{G \times G}\left( \rho_\alpha \boxtimes \rho_\beta, \mathrm{Reg}(G) \right).

But using our lemmas, we have that

\displaystyle  		\mathrm{Hom}_{G \times G}\left( \rho_\alpha \boxtimes \rho_\beta, \mathrm{Reg}(G) \right) 		\simeq 		\mathrm{Hom}_G(\rho_\alpha \otimes \rho_\beta, \text{triv}_G) 		\simeq \mathrm{Hom}_G(\rho_\alpha, \rho_\beta^\vee).

We know that {\rho_\beta^\vee} is also irreducible, since {\rho_\beta} is (and we’re in a finite-dimensional situation). So

\displaystyle  		\mathrm{Hom}_G\left( \rho_\alpha, \rho_\beta^\vee \right) 		\simeq 		\begin{cases} 			k & \rho_\beta^\vee = \rho_\alpha \\ 			\{0\} & \text{otherwise}. 		\end{cases}

Thus we deduce

\displaystyle  \mathrm{Reg}(G) 		\simeq \bigoplus_{\alpha} 		\left( \rho_\alpha \boxtimes \rho_\alpha^\vee \right) 		\otimes k 		\simeq \bigoplus_{\alpha} 		\left( \rho_\alpha \boxtimes \rho_\alpha^\vee \right)

and we’re done. \Box

4. Corollaries

Recall that {\mathrm{Fun}(G)}, the space underlying {\mathrm{Reg}(G)}, has a basis of size {\left\lvert G \right\rvert}. Hence by comparing dimensions across the isomorphism, we obtain the following corollary.

Theorem 6 We have {\left\lvert G \right\rvert = \sum_\alpha \left( \dim \rho_\alpha \right)^2}.

Moreover, by restriction to {G} we can obtain the corollary

\displaystyle  	\mathrm{Reg}^\ast(G) 	\simeq \bigoplus_\alpha \mathrm{Res}_{G}^{G \times G} \left( \rho_\alpha \boxtimes \rho_\alpha^\vee \right) 	= \bigoplus_\alpha \rho_\alpha \otimes \rho_\alpha^\vee.

Now let us look at the {G}-invariant spaces in this decomposition. We claim that

\displaystyle  \left( \rho_\alpha \otimes \rho_\alpha^\vee \right)^G \simeq k.

Indeed, Proposition 1 in Part 1 tells us that we have a bijection of vector spaces

\displaystyle  \left( \rho_\alpha \otimes \rho_\alpha^\vee \right)^G \simeq 	\mathrm{Hom}_G(\text{triv}_G, \rho_\alpha \otimes \rho_\alpha^\vee).

Then we can write

\displaystyle  \begin{aligned} 	\mathrm{Hom}_G(\text{triv}_G, \rho_\alpha \otimes \rho_\alpha^\vee) 	&\simeq 	\mathrm{Hom}_G\left(\text{triv}_G, \left( \rho_\alpha^\vee \otimes \rho_\alpha \right)^\vee \right) \\ 	&\simeq \mathrm{Hom}_G\left(\rho_\alpha^\vee \otimes \rho_\alpha, \text{triv}_G \right) \\ 	&\simeq \mathrm{Hom}_G\left(\rho_\alpha \otimes \rho_\alpha^\vee, \text{triv}_G \right) \\ 	&\simeq \mathrm{Hom}_G\left(\rho_\alpha, \rho_\alpha \right) \\ 	&\simeq k \end{aligned}

by the lemma, where we have also used Schur’s Lemma at the last step. So that means the dimension of the invariant space {(\mathrm{Reg}^\ast (G))^G} is just the number of irreducibles.

But we already showed that the invariant space of {(\mathrm{Reg}^\ast (G))^G} has dimension equal to the conjugacy classes of {G}. Thus we conclude the second result.

Theorem 7 The number of conjugacy classes of {G} equals the number of irreducible representations of {G}.
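Both counting theorems are easy to check numerically for a small group. Here is a sketch of my own for {S_3} (the irreducible dimensions {1, 1, 2} for the trivial, sign, and standard representations are a standard fact, taken as given here):

```python
from itertools import permutations

# S3 as permutation tuples.
G = list(permutations(range(3)))

def compose(p, q):           # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    r = [0] * 3
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

# Conjugacy classes, computed directly from the definition.
classes = {frozenset(compose(compose(g, x), inverse(g)) for g in G) for x in G}

# Standard fact: the irreducibles of S3 are trivial, sign, and the 2-dim standard rep.
dims = [1, 1, 2]

print(len(classes))              # 3 conjugacy classes (Theorem 7: equals 3 irreducibles)
print(sum(d * d for d in dims))  # 6 = |S3|            (Theorem 6)
```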

Hooray!

Time permitting I might talk about the irreducibles of {S_n} in subsequent posts. No promises here though.

Thanks to Dennis Gaitsgory, who taught me this in his course Math 55a. My notes for Math 55a can be found at my website.

Representation Theory, Part 3: Products of Representations

Happy New Year to all! A quick reminder that {2015 = 5 \cdot 13 \cdot 31}.

This post will set the stage by examining products of two representations. In particular, I’ll characterize all the irreducibles of {G_1 \times G_2} in terms of those for {G_1} and {G_2}. This will set the stage for our discussion of the finite regular representation in Part 4.

In what follows {k} is an algebraically closed field, {G} is a finite group, and the characteristic of {k} does not divide {\left\lvert G \right\rvert}.

1. Products of Representations

First, I need to tell you how to take the product of two representations.

Definition Let {G_1} and {G_2} be groups. Given a {G_1} representation {\rho_1 = (V_1, \cdot_{\rho_1})} and a {G_2} representation {\rho_2 = (V_2, \cdot_{\rho_2})}, we define

\displaystyle  \rho_1 \boxtimes \rho_2 \overset{\text{def}}{=} 	\left( V_1 \otimes V_2, \cdot \right)

as a representation of {G_1 \times G_2} on {V_1 \otimes V_2}. The action is given by

\displaystyle  (g_1, g_2) \cdot (v_1 \otimes v_2) 	= \left( g_1 \cdot_{\rho_1} v_1 \right) \otimes (g_2 \cdot_{\rho_2} v_2).
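In coordinates, if {g_1} acts on {V_1} by a matrix {A} and {g_2} acts on {V_2} by a matrix {B}, then {(g_1, g_2)} acts on {V_1 \otimes V_2} by the Kronecker product {A \otimes B}. A small numpy illustration of my own: the mixed-product identity {\left( A_1 A_2 \right) \otimes \left( B_1 B_2 \right) = (A_1 \otimes B_1)(A_2 \otimes B_2)} is exactly what makes this formula an action.

```python
import numpy as np

# kron(A1 @ A2, B1 @ B2) == kron(A1, B1) @ kron(A2, B2):
# composing the actions of (g1, g2) and (h1, h2) matches acting by the product.
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
B1, B2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

lhs = np.kron(A1 @ A2, B1 @ B2)
rhs = np.kron(A1, B1) @ np.kron(A2, B2)
print(np.allclose(lhs, rhs))  # True
```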

In the special case {G_1 = G_2 = G}, we can also restrict {\rho_1 \boxtimes \rho_2} to a representation of {G}. Note that we can interpret {G} itself as a subgroup of {G \times G} by just looking along the diagonal: there’s an obvious isomorphism

\displaystyle  G \simeq \left\{ (g,g) \mid g \in G \right\}.

So, let me set up the general definition.

Definition Let {\mathcal G} be a group, and let {\mathcal H} be a subgroup of {\mathcal G}. Then for any representation {\rho = (V, \cdot_\rho)} of {\mathcal G}, we let

\displaystyle  \mathrm{Res}^{\mathcal G}_{\mathcal H} (\rho)

denote the representation of {\mathcal H} on {V} by the same action.

This notation might look intimidating, but it’s not really saying anything, and I include the notation just to be pedantic. All we’re doing is taking a representation and restricting which elements of the group are acting on it.

We now apply this to get {\rho_1 \otimes \rho_2} out of {\rho_1 \boxtimes \rho_2}.

Definition Let {\rho_1 = (V_1, \cdot_{\rho_1})} and {\rho_2 = (V_2, \cdot_{\rho_2})} be representations of {G}. Then we define

\displaystyle  \rho_1 \otimes \rho_2 		\overset{\text{def}}{=} 		\mathrm{Res}^{G \times G}_G \left( \rho_1 \boxtimes \rho_2 \right)

meaning {\rho_1 \otimes \rho_2} has vector space {V_1 \otimes V_2} and action {g \cdot (v_1 \otimes v_2) = (g \cdot_{\rho_1} v_1) \otimes (g \cdot_{\rho_2} v_2)}.

This tensor product obeys some nice properties, for example the following.

Lemma 1 Given representations {\rho}, {\rho_1}, {\rho_2} we have

\displaystyle  		\rho \otimes \left( \rho_1 \oplus \rho_2 \right) 		\simeq 		\left( \rho \otimes \rho_1 \right) \oplus \left( \rho \otimes \rho_2 \right).

Proof: There’s an obvious isomorphism between the underlying vector spaces, and that isomorphism respects the action of {G}. \Box

To summarize all the above, here is a table of the representations we’ve seen, in the order we met them.

\displaystyle  	\begin{array}{|l|lll|} 		\hline 		\text{Representation} & \text{Group} & \text{Space} & \text{Action} \\ \hline 		\rho & V & G & g \cdot_\rho v \\ 		\text{Fun}(X) & G & \text{Fun}(X) & (g \cdot f)(x) = f(g^{-1} \cdot x) \\ 		\text{triv}_G & G & k & g \cdot a = a \\ 		\rho_1 \oplus \rho_2 & G & V_1 \oplus V_2 & g \cdot (v_1 + v_2) = (g \cdot_{\rho_1} v_1) + (g \cdot_{\rho_2} v_2) \\ 		\rho_1 \boxtimes \rho_2 & G_1 \times G_2 & V_1 \otimes V_2 & (g_1, g_2) \cdot (v_1 \otimes v_2) \\ 		&&& = (g_1 \cdot_{\rho_1} v_1) \otimes (g_2 \cdot_{\rho_2} v_2) \\ 		\text{Res}^G_H(\rho) & H & V & h \cdot v = h \cdot_\rho v\\ 		\rho_1 \otimes \rho_2 & G & V_1 \otimes V_2 & g \cdot (v_1 \otimes v_2) = (g \cdot_{\rho_1} v_1) \otimes (g \cdot_{\rho_2} v_2) \\ 		\hline 	\end{array}

2. Revisiting Schur and Maschke

Defining a tensor product of representations gives us another way to express {\rho^{\oplus n}}, as follows. By an abuse of notation, given a vector space {k^m} we can define an associated {G}-representation {k^m = (k^m, \cdot_{k^m})} on it by the trivial action, i.e. {g \cdot_{k^m} v = v} for {v \in k^m}. A special case of this is using {k} to represent {\text{triv}_G}. With this abuse of notation, we have the following lemma.

Lemma 2 Let {M} be an {m}-dimensional vector space over {k}. Then {\rho^{\oplus m} \simeq \rho \otimes M}.

Proof: It reduces to checking that {\rho \otimes k \overset{\text{def}}{=} \rho \otimes \text{triv}_G} is isomorphic to {\rho}, which is evident. We can then proceed by induction: {\rho \otimes (k \oplus k^{t-1}) 	\simeq (\rho \otimes k) \oplus (\rho \otimes k^{t-1})}. \Box

So, we can actually rewrite Maschke’s and Schur’s Theorem as one. Instead of

\displaystyle  \rho \simeq \bigoplus_\alpha \rho_\alpha^{\oplus n_\alpha} \quad\text{where}\quad n_\alpha = \dim \mathrm{Hom}_G(\rho, \rho_\alpha)

we now have instead

\displaystyle  	\bigoplus_\alpha \rho_\alpha \otimes \mathrm{Hom}_G(\rho, \rho_\alpha) 	\simeq \rho.

Now we’re going to explicitly write down the isomorphism between these representations. It suffices to write down the isomorphism {\rho_\alpha \otimes \mathrm{Hom}_G(\rho, \rho_\alpha) \rightarrow \rho_\alpha^{\oplus n_\alpha}}, and then take the sum over each of the {\alpha}‘s. But

\displaystyle  \mathrm{Hom}_G(\rho, \rho_\alpha) \simeq \mathrm{Hom}_G(\rho_\alpha^{\oplus n_\alpha}, \rho_\alpha) \simeq \mathrm{Hom}_G(\rho_\alpha, \rho_\alpha)^{\oplus n_\alpha}.

So to write the isomorphism {\rho_\alpha \otimes \mathrm{Hom}_G(\rho_\alpha, \rho_\alpha)^{\oplus n_\alpha} \rightarrow \rho_\alpha^{\oplus n_\alpha}}, we just have to write down the isomorphism {\rho_\alpha \otimes \mathrm{Hom}_G(\rho_\alpha, \rho_\alpha) \rightarrow \rho_\alpha}.

Schur’s Lemma tells us that {\mathrm{Hom}_G(\rho_\alpha, \rho_\alpha) \simeq k}; i.e. every {\xi \in \mathrm{Hom}_G(\rho_\alpha, \rho_\alpha)} is just multiplication by some constant. So this case is easy: the map

\displaystyle  v \otimes \xi \mapsto \xi(v)

works nicely. And since all we’ve done is break over a bunch of direct sums, the isomorphism propagates all the way up, resulting in the following theorem.

Theorem 3 (Maschke and Schur) For any finite-dimensional {\rho}, the homomorphism of {G} representations

\displaystyle  	\bigoplus_\alpha \rho_\alpha \otimes \mathrm{Hom}_G(\rho, \rho_\alpha) 	\rightarrow \rho

given by sending every simple tensor via

\displaystyle  v \otimes \xi \mapsto \xi(v)

is an isomorphism.

Note that it’s much easier to write the map from left to right than vice-versa, even though the inverse map does exist (since it’s an isomorphism). (Tip: as a general rule of thumb, always map out of the direct sum.)

3. Characterizing the {G_1 \times G_2} irreducibles

Now we are in a position to state the main theorem for this post, which shows that the irreducibles we defined above are very well behaved.

Theorem 4 Let {G_1} and {G_2} be finite groups. Then a finite-dimensional representation {\rho} of {G_1 \times G_2} is irreducible if and only if it is of the form

\displaystyle  \rho_1 \boxtimes \rho_2

where {\rho_1} and {\rho_2} are irreducible representations of {G_1} and {G_2}, respectively.

Proof: First, suppose {\rho = (V, \cdot_\rho)} is an irreducible representation of {G_1 \times G_2}. Set

\displaystyle  \rho^1 \overset{\text{def}}{=} \mathrm{Res}^{G_1 \times G_2}_{G_1} (\rho).

Then by Maschke’s Theorem, we may write {\rho^1} as a direct sum of the irreducibles

\displaystyle  	\bigoplus_\alpha \rho_\alpha^1 \otimes \mathrm{Hom}_{G_1} (\rho_\alpha^1, \rho^1) \simeq \rho^1

with the map {v \otimes \xi \mapsto \xi(v)} being the isomorphism. Now we can put a {G_2} representation structure on {\mathrm{Hom}_{G_1} (\rho_\alpha^1, \rho^1)} by

\displaystyle  	(g_2 \cdot f)(v) 	= g_2 \cdot_{\rho} (f(v)).

It is easy to check that this is indeed a {G_2} representation. Thus it makes sense to talk about the {G_1 \times G_2} representation

\displaystyle  	\bigoplus_\alpha \rho_\alpha^1 \boxtimes \mathrm{Hom}_{G_1} (\rho_\alpha^1, \rho^1).

We claim that the isomorphism for {\rho^1} as a {G_1} representation now lifts to an isomorphism of {G_1 \times G_2} representations. That is, we claim that

\displaystyle  	\bigoplus_\alpha \rho_\alpha^1 \boxtimes \mathrm{Hom}_{G_1} (\rho_\alpha^1, \rho^1) \simeq \rho

by the same isomorphism as for {\rho^1}. To see this, we only have to check that the isomorphism {v \otimes \xi \mapsto \xi(v)} commutes with the action of {g_2 \in G_2}. But this is obvious, since {g_2 \cdot (v \otimes \xi) = v \otimes (g_2 \cdot \xi) \mapsto (g_2 \cdot \xi)(v)}.

Thus the isomorphism holds. But {\rho} is irreducible, so there can only be one nontrivial summand. Thus we derive the required decomposition of {\rho}.

Now for the other direction: take {\rho_1} and {\rho_2} irreducible. Suppose {\rho_1 \boxtimes \rho_2} has a nontrivial subrepresentation of the form {\rho_1' \boxtimes \rho_2'}. Viewing as {G_1} representation, we find that {\rho_1'} is a nontrivial subrepresentation of {\rho_1}, and similarly for {\rho_2}. But {\rho_1} is irreducible, hence {\rho_1' \simeq \rho_1}. Similarly {\rho_2' \simeq \rho_2}. So in fact {\rho_1' \boxtimes \rho_2' \simeq \rho_1 \boxtimes \rho_2}. Hence we conclude {\rho_1 \boxtimes \rho_2} is irreducible. \Box

4. Conclusion

In particular, this means that any representation {\rho} of {G \times G} decomposes as

\displaystyle  \rho \simeq \bigoplus_{\alpha, \beta} \rho_\alpha \boxtimes \rho_\beta

and we even have

\displaystyle  \mathrm{Res}_{G}^{G\times G} \rho \simeq \bigoplus_{\alpha, \beta} \rho_\alpha \otimes \rho_\beta.

In the next post I’ll invoke this on the so-called finite regular representation to get the elegant results I promised at the end of Part 2.

Thanks to Dennis Gaitsgory, who taught me this in his course Math 55a. My notes for Math 55a can be found at my website.

Representation Theory, Part 2: Schur’s Lemma

Merry Christmas!

In the previous post I introduced the idea of an irreducible representation and showed that except in fields of low characteristic, these representations decompose completely. In this post I’ll present Schur’s Lemma and talk about what Schur and Maschke tell us about homomorphisms of representations.

1. Motivation

Fix a group {G} now, and consider all isomorphism classes of finite-dimensional irreducible representations of {G}. We’ll denote this set by {\mathrm{Irrep}(G)}. Maschke’s Theorem tells us that any finite-dimensional representation {\rho} can be decomposed as

\displaystyle  	\bigoplus_{\rho_\alpha \in \mathrm{Irrep}(G)} \rho_{\alpha}^{\oplus n_\alpha}

where {n_\alpha} is some nonnegative integer. This raises the question: what is {n_\alpha}? Is it even uniquely determined by {\rho}?

To answer this I first need to compute {\mathrm{Hom}_G(\rho, \pi)} for any two distinct irreducible representations {\rho} and {\pi}. One case is easy.

Lemma 1 Let {\rho} and {\pi} be non-isomorphic irreducible representations (not necessarily finite dimensional). Then there are no nontrivial homomorphisms {\phi : \rho \rightarrow \pi}. In other words, {\mathrm{Hom}_G(\rho, \pi) = \{0\}}.

I haven’t actually told you what it means for representations to be isomorphic, but you can guess — it just means that there’s a homomorphism of {G}-representations between them which is also a bijection of the underlying vector spaces.

Proof: Let {\phi : \rho \rightarrow \pi} be a nonzero homomorphism. We can actually prove the following stronger results.

  • If {\pi} is irreducible then {\phi} is surjective.
  • If {\rho} is irreducible then {\phi} is injective.

Exercise Prove the above two results. (Hint: show that {\text{Im } \phi} and {\ker \phi} give rise to subrepresentations.)

Combining these two results gives the lemma because {\phi} is now a bijection, and hence an isomorphism. \Box

2. Schur’s Lemma

Thus we only have to consider the case {\rho \simeq \pi}. The result which relates these is called Schur’s Lemma, but is important enough that we refer to it as a theorem.

Theorem 2 (Schur’s Lemma) Assume {k} is algebraically closed. Let {\rho} be a finite dimensional irreducible representation. Then {\mathrm{Hom}_{G} (\rho, \rho)} consists precisely of maps of the form {v \mapsto \lambda v}, where {\lambda \in k}; the only possible maps are multiplication by a scalar. In other words,

\displaystyle  \mathrm{Hom}_{G} (\rho, \rho) \simeq k

and {\dim \mathrm{Hom}_G(\rho, \rho) = 1}.

This is NOT in general true without the algebraically closed condition, as the following example shows.

Example Let {k = {\mathbb R}}, let {V = {\mathbb R}^2}, and let {G = {\mathbb Z}_3} act on {V} by rotating every {\vec x \in {\mathbb R}^2} by {120^{\circ}} around the origin, giving a representation {\rho}. Then {\rho} is a counterexample to Schur’s Lemma.

Proof: This representation is irreducible: a nontrivial subrepresentation would be a line through the origin sent to itself, but rotation by {120^{\circ}} fixes no such line.

We can now regard {\rho} as acting on {{\mathbb C}} (identified with {{\mathbb R}^2}) by multiplication by {e^{\frac{2\pi i}{3}}}. Then for any other complex number {\xi}, the {{\mathbb R}}-linear map “multiplication by {\xi}” commutes with the map “multiplication by {e^{\frac{2\pi i}{3}}}”. So in fact

\displaystyle  \mathrm{Hom}_G(\rho, \rho) \simeq {\mathbb C}

which has dimension {2}. \Box
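This counterexample can be checked numerically. A matrix {T} commutes with {R} exactly when {(I \otimes R - R^\top \otimes I)\,\mathrm{vec}(T) = 0} (the standard vectorization trick), so the commutant is the kernel of that matrix. A sketch of my own, showing the commutant of the {120^\circ} rotation over {\mathbb R} is {2}-dimensional:

```python
import numpy as np

theta = 2 * np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# T commutes with R  <=>  (I kron R - R^T kron I) vec(T) = 0.
M = np.kron(np.eye(2), R) - np.kron(R.T, np.eye(2))
nullity = 4 - np.linalg.matrix_rank(M)
print(nullity)  # 2: the real commutant is 2-dimensional, not 1
```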

Now we can give the proof of Schur’s Lemma.

Proof: Clearly any map {v \mapsto \lambda v} respects the {G}-action.

Now consider any {T \in \mathrm{Hom}_G(\rho, \rho)}. Set {\rho = (V, \cdot_\rho)}. Here’s the key: because {k} is algebraically closed, and we’re over a finite dimensional vector space {V}, the map {T} has an eigenvalue {\lambda}. Hence by definition {V} has a subspace {V_\lambda} over which {T} is just multiplication by {\lambda}.

But then {V_\lambda} is a {G}-invariant subspace of {V}! Since {\rho} is irreducible, this can only happen if {V = V_\lambda}. That means {T} is multiplication by {\lambda} for the entire space {V}, as desired. \Box
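To make Schur's Lemma concrete, here is a numerical sketch of my own using the same vectorization trick as the counterexample above. For the {2}-dimensional irreducible (standard) representation of {S_3}, generated by a {120^\circ} rotation and a reflection, the space of matrices commuting with both generators is {1}-dimensional, i.e. only the scalars (the computation is over {\mathbb R}, but the commutant is already cut down to scalars there):

```python
import numpy as np

# Generators of the 2-dim standard representation of S3:
# a 120-degree rotation r and a reflection s.
theta = 2 * np.pi / 3
r = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
s = np.array([[1.0, 0.0], [0.0, -1.0]])

def commute_constraint(A):
    """Matrix whose kernel is {vec(T) : TA = AT}."""
    return np.kron(np.eye(2), A) - np.kron(A.T, np.eye(2))

# Stack the constraints for both generators.
M = np.vstack([commute_constraint(r), commute_constraint(s)])
nullity = 4 - np.linalg.matrix_rank(M)
print(nullity)  # 1: only scalar multiples of the identity commute with both
```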

3. Computing dimensions of homomorphisms

Since we can now compute the dimension of the {\mathrm{Hom}_G} of any two irreducible representations, we can compute the dimension of the {\mathrm{Hom}_G} between any two direct sums of irreducibles, as follows.

Corollary 3 We have

\displaystyle  		\dim \mathrm{Hom}_G 		\left( \bigoplus_\alpha \rho_\alpha^{\oplus n_\alpha}, 		\bigoplus_\beta \rho_\beta^{\oplus m_\beta} \right) 		= \sum_{\alpha} n_\alpha m_\alpha

where the direct sums run over the isomorphism classes of irreducibles.

Proof: The {\mathrm{Hom}} just decomposes over each of the components as

\displaystyle  		\begin{aligned} 		\mathrm{Hom}_G 		\left( \bigoplus_\alpha \rho_\alpha^{\oplus n_\alpha}, 		\bigoplus_\beta \rho_\beta^{\oplus m_\beta} \right) 		&\simeq 		\bigoplus_{\alpha, \beta} 		\mathrm{Hom}_G(\rho_\alpha^{\oplus n_\alpha}, \rho_\beta^{\oplus m_\beta}) \\ 		&\simeq 		\bigoplus_{\alpha, \beta} 		\mathrm{Hom}_G(\rho_\alpha, \rho_\beta)^{\oplus n_\alpha m_\beta}. 		\end{aligned}

Here we’re using the fact that {\mathrm{Hom}_G(\rho_1 \oplus \rho_2, \rho) = \mathrm{Hom}_G(\rho_1, \rho) \oplus \mathrm{Hom}_G(\rho_2, \rho)} (obvious) and its analog. The claim follows from our lemmas now. \Box

As a special case of this, we can quickly derive the following.

Corollary 4 Suppose {\rho = \bigoplus_\alpha \rho_\alpha^{\oplus n_\alpha}} as above. Then for any particular {\beta},

\displaystyle  n_\beta = \dim \mathrm{Hom}_G(\rho, \rho_\beta).

Proof: We have

\displaystyle  \dim \mathrm{Hom}_G(\rho, \rho_\beta) = n_\beta \dim \mathrm{Hom}_G(\rho_\beta, \rho_\beta) = n_\beta

as desired. \Box

This settles the “unique decomposition” in the affirmative. Hurrah!

It might be worth noting that we didn’t actually need Schur’s Lemma if we were solely interested in uniqueness, since without it we would have obtained

\displaystyle  n_\beta = \frac{\dim \mathrm{Hom}_G(\rho, \rho_\beta)}{\dim \mathrm{Hom}_G(\rho_\beta, \rho_\beta)}.

However, the denominator in that expression is rather unsatisfying, don’t you think?

4. Conclusion

In summary, we have shown the following main results for finite dimensional representations of a group {G}.

  • Maschke’s Theorem: If {G} is finite and {\text{char } k} does not divide {\left\lvert G \right\rvert}, then any finite dimensional representation is a direct sum of irreducibles. This decomposition is unique up to isomorphism.
  • Schur’s Lemma: If {k} is algebraically closed, then {\mathrm{Hom}_G(\rho, \rho) \simeq k} for any irreducible {\rho}, while there are no nontrivial homomorphisms between non-isomorphic irreducibles.

In the next post I’ll talk about products of irreducibles, and use them in the fourth post to prove two very elegant results about the irreducibles, as follows.

  1. The number of (isomorphism classes of) irreducibles {\rho_\alpha} is equal to the number of conjugacy classes of {G}.
  2. We have { \left\lvert G \right\rvert = \sum_\alpha \left( \dim \rho_\alpha \right)^2 }.

Thanks to Dennis Gaitsgory, who taught me this in his course Math 55a. My notes for Math 55a can be found at my website.

Representation Theory, Part 1: Irreducibles and Maschke’s Theorem

Good luck to everyone taking the December TST tomorrow!

The goal of this post is to give the reader a taste of representation theory, a la Math 55a. In theory, this post should be accessible to anyone with a knowledge of group actions and abstract vector spaces.

Fix a ground field {k} (for all vector spaces). In this post I will introduce the concept of representations and irreducible representations. Using these basic definitions I will establish Maschke’s Theorem, which tells us that (under suitable hypotheses on {G} and {k}) irreducibles and indecomposables are the same thing.

1. Definition and Examples

Let {G} be a group.

Definition A representation of {G} consists of a pair {\rho = (V, \cdot_\rho)} where {V} is a vector space over {k} and {\cdot_\rho} is a (left) group action of {G} on {V} which is linear in {V}. If {V} is finite-dimensional then the dimension of {\rho} is just the dimension of {V}.

Explicitly the conditions on {\cdot_\rho} are that

\displaystyle \begin{aligned} 1 \cdot_\rho v &= v \\ g_1 \cdot_\rho (g_2 \cdot_\rho v) &= (g_1g_2) \cdot_\rho v \\ g \cdot_\rho (v_1 + v_2) &= g \cdot_\rho v_1 + g \cdot_\rho v_2 \\ g \cdot_\rho (cv) &= c(g \cdot_\rho v). \end{aligned}

Note that another equivalent phrasing is that {\rho} is a homomorphism from {G} to the general linear group {\text{GL}(V)}; however, we will not use this phrasing.

By abuse of notation, we occasionally refer to {\rho} by just its underlying vector space {V} in the case that {\cdot_\rho} is clear from context. We may also abbreviate {g \cdot_\rho v} as just {g \cdot v}.

A simple example of a nontrivial representation is the following.

Example If {V = k^{n}} and {G = S_n}, then an example of a representation {\rho = (V, \cdot_\rho)} is given by

\displaystyle \sigma \cdot_\rho \left<x_1, \dots, x_n\right> = \left<x_{\sigma(1)}, \dots, x_{\sigma(n)}\right>

meaning we permute the basis elements of {V}. We denote this representation by {\mathrm{refl}_n}.
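As a quick concrete sketch of {\mathrm{refl}_n} in Python (the helper name `act` is mine, not from the post; indices run from {0} rather than {1}):

```python
# A permutation sigma of {0, ..., n-1} is stored as a tuple with
# sigma[i] the image of i.  Following the formula in the text,
# sigma sends (x_1, ..., x_n) to (x_{sigma(1)}, ..., x_{sigma(n)}).
def act(sigma, v):
    return tuple(v[sigma[i]] for i in range(len(v)))

sigma = (1, 2, 0)                  # a 3-cycle in S_3
v, w = (10, 20, 30), (1, 2, 3)
assert act(sigma, v) == (20, 30, 10)

# Linearity of the action: acting commutes with vector addition.
vw = tuple(a + b for a, b in zip(v, w))
assert act(sigma, vw) == tuple(a + b for a, b in
                               zip(act(sigma, v), act(sigma, w)))
```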

Let us give another useful example.

Definition Let {X} be a set acted on by {G}. We define the vector space

\displaystyle \mathrm{Fun}(X) \overset{\text{def}}{=} \left\{ \text{maps } X \rightarrow k \right\}

with the standard pointwise addition and scalar multiplication of functions.

Example We define a representation on {\mathrm{Fun}(X)} by the following action: every {f \in \mathrm{Fun}(X)} gets sent to a {g \cdot f \in \mathrm{Fun}(X)} by

\displaystyle (g \cdot_{\mathrm{Fun}(X)} f)(x) = f\left( g^{-1} \cdot_X x \right).

By abuse of notation we will let {\mathrm{Fun}(X)} refer both to the vector space and the corresponding representation.
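The inverse in the definition is exactly what makes this a genuine left action, i.e. what makes {g \cdot (h \cdot f) = (gh) \cdot f} hold; for a small group this can be checked by brute force. A Python sketch (the helpers `act`, `compose`, and `inverse` are my own names):

```python
from itertools import permutations

# X = {0, 1, 2}, G = S_3 acting on X by sigma . x = sigma(x).
# A function f : X -> k is stored as the tuple of its values.
def inverse(sigma):
    inv = [0] * len(sigma)
    for i, s in enumerate(sigma):
        inv[s] = i
    return tuple(inv)

def compose(s, t):                  # (s t)(x) = s(t(x))
    return tuple(s[t[x]] for x in range(len(s)))

def act(sigma, f):                  # (g . f)(x) = f(g^{-1} . x)
    inv = inverse(sigma)
    return tuple(f[inv[x]] for x in range(len(f)))

# Check the left-action axiom g . (h . f) = (gh) . f on all of S_3.
f = (5, 7, 11)
for s in permutations(range(3)):
    for t in permutations(range(3)):
        assert act(s, act(t, f)) == act(compose(s, t), f)
```

Replacing {g^{-1}} by {g} in `act` would break the assertion for non-commuting pairs, which is a good way to see why the inverse is needed.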

Now that we have two nontrivial examples, we also give a trivial example.

Definition Let {G} be a group. We define the trivial representation {\text{triv}_G}, or just {\text{triv}}, as the representation {\text{triv}_G = (k, \cdot_\text{triv})}, where

\displaystyle g \cdot_\text{triv} a = a

for every {a \in k}. In other words, {G} acts trivially on {k}.

2. Homomorphisms of Representations

First, as a good budding algebraist (not really) I should define how these representations talk to each other.

Definition Let {\rho_1 = (V_1, \cdot_{\rho_1})} and {\rho_2 = (V_2, \cdot_{\rho_2})} be representations of the same group {G}. A homomorphism of {G}-representations is a linear map {T : V_1 \rightarrow V_2} which respects the {G}-action: for any {g \in G} and {v \in V_1},

\displaystyle g \cdot_{\rho_2} T(v) = T\left( g \cdot_{\rho_1} v \right).

The set of all these homomorphisms is written {\mathrm{Hom}_G(\rho_1, \rho_2)}, which is itself a vector space over {k}.

(Digression: For those of you that know category theory, you might realize by now that representations correspond to functors from a category {\mathcal G} (corresponding to the group {G}) into {\textbf{Vect}_k} and that homomorphisms of representations are just natural transformations.)

To see an example of this definition in action, we give the following as an exercise.

Proposition 1 Let {\rho = \left( V, \cdot_\rho \right)}. We define the {G}-invariant space {\rho^G \subseteq V} to be

\displaystyle \rho^G \overset{\text{def}}{=} \left\{ v \in V \mid g \cdot_\rho v = v \; \forall g \in G \right\}.

Then there is a natural isomorphism of vector spaces {\mathrm{Hom}_G(\text{triv}_G, \rho) \simeq \rho^G}.

Proof: Let {\rho = (V, \cdot_\rho)}. The set {\mathrm{Hom}_G(\text{triv}_G, \rho)} consists of maps {T : k \rightarrow V} with

\displaystyle g \cdot_\rho T(a) = T(g \cdot_{\text{triv}} a) = T(a)

for every {a \in k}. Since {T : k \rightarrow V} is linear, it is uniquely defined by {T(1)} (since {T(a) = a T(1)} in general). So {g \cdot_\rho T(1) = T(1)}, i.e. {T(1) \in \rho^G}, is necessary and sufficient. Thus the bijection is just {T \mapsto T(1)}. \Box
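For instance, with {\rho = \mathrm{refl}_3} over {\mathbb Q} the fixed space is spanned by {(1,1,1)}, and the corresponding homomorphisms can be verified numerically. A Python sketch (`act` is my own helper for the coordinate-permuting action):

```python
from itertools import permutations

def act(sigma, v):                  # the refl_3 action on k^3
    return tuple(v[sigma[i]] for i in range(3))

# A hom T : triv -> refl_3 is determined by u = T(1); the condition
# g . T(a) = T(g . a) = T(a) says exactly that u is fixed by G.
u = (4, 4, 4)                       # an element of rho^G
T = lambda a: tuple(a * x for x in u)
for g in permutations(range(3)):
    assert act(g, T(5)) == T(5)     # T respects the G-action
```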

This proposition will come up again at the end of Part 4.

3. Subrepresentations, Irreducibles, and Maschke’s Theorem

Now suppose I’ve got a representation {\rho = (V, \cdot_\rho)}.

Definition Suppose we have a subspace {W \subseteq V} which is {\rho}-invariant, meaning that {g \cdot_\rho w \in W} for every {w \in W} and {g \in G}. Then we can construct a representation of {G} on {W} by restricting the action to {W}:

\displaystyle \rho' = \left( W, \cdot_\rho|_W \right).

In that case the resulting {\rho'} is called a subrepresentation of {\rho}.

Every {\rho} has an obvious subrepresentation, namely {\rho} itself, as well as a stupid subrepresentation on the zero-dimensional vector space {\{0\}}. But some representations have more interesting subrepresentations.

Example Consider the representation {\mathrm{refl} = (k^n, \cdot)} of {S_n} on {k^n} defined in the first section. For all {n \ge 2}, {\mathrm{refl}} is not irreducible.

Proof: Consider the subspace {W \subset k^n} given by

\displaystyle W = \left\{ (x_1, x_2, \dots, x_n) \mid x_1 + \dots + x_n = 0 \right\}.

Then {W} is invariant under {\mathrm{refl}}, since permuting the coordinates of a vector does not change their sum. As {W} is nonzero and proper for {n \ge 2}, this gives a nontrivial subrepresentation of {\mathrm{refl}}, which we’ll denote {\mathrm{refl}_0}. \Box
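A quick numerical sanity check of the invariance of {W} for {n = 3} (a Python sketch of my own, with `act` the coordinate-permuting action):

```python
from itertools import permutations

def act(sigma, v):
    return tuple(v[sigma[i]] for i in range(len(v)))

# Permuting coordinates does not change their sum, so the sum-zero
# subspace W is invariant under every element of S_3.
v = (1, 2, -3)                      # in W, since 1 + 2 - 3 = 0
for sigma in permutations(range(3)):
    assert sum(act(sigma, v)) == 0
```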

This motivates the ideas of irreducibles.

Definition A representation {\rho} is irreducible if it has no nontrivial subrepresentations.

Of course the first thing we ask is whether every representation decomposes as a direct sum of irreducible representations. But what does it mean to combine two representations, anyway? It’s just the “natural” definition using the direct sum.

Definition Let {\rho_1 = (W_1, \cdot_{\rho_1})} and {\rho_2 = (W_2, \cdot_{\rho_2})} be representations and suppose we have {V = W_1 \oplus W_2}. Then we define the representation {\rho = \rho_1 \oplus \rho_2} by {\rho = (V, \cdot_\rho)} where

\displaystyle g \cdot_\rho (w_1 + w_2) = (g \cdot_{\rho_1} w_1) + (g \cdot_{\rho_2} w_2).

Just like every integer decomposes into prime factors, we hope that every representation decomposes into irreducibles. But this is too much to hope for.

Example Let {G = S_2}, let {k = \mathbb F_2} be the finite field of order {2} (aka {{\mathbb Z}/2{\mathbb Z}}), and consider {\mathrm{refl} = (k^2, \cdot)}, which is not irreducible. However, we claim that we cannot write {\mathrm{refl} = \rho_1 \oplus \rho_2} for any nontrivial {\rho_1} and {\rho_2}.

Proof: This is a good concrete exercise.

Assume not, and let {V_1} and {V_2} be the underlying vector spaces of {\rho_1} and {\rho_2}. By nontriviality, {\dim V_1 = \dim V_2 = 1}, and in particular we have that as sets, {\left\lvert V_1 \right\rvert = \left\lvert V_2 \right\rvert = 2}. Take the only nonzero elements {(a,b) \in V_1} and {(c,d) \in V_2}. Since {V_1} is invariant under {\mathrm{refl}}, {(b,a) \in V_1}, so {(a,b) = (b,a) \implies (a,b) = (1,1) \in V_1}. Similarly, {(1,1) \in V_2}, which is impossible. \Box
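Since {\mathbb F_2^2} has only four elements, the claim can also be checked exhaustively; here is a short Python sketch (my own, with a hypothetical helper `swap` for the nontrivial element of {S_2}):

```python
from itertools import product

swap = lambda v: (v[1], v[0])       # the nontrivial element of S_2

# Each one-dimensional invariant subspace of F_2^2 is {0, v} for a
# nonzero v with swap(v) = v; we list all such v.
invariant_lines = [v for v in product((0, 1), repeat=2)
                   if v != (0, 0) and swap(v) == v]
assert invariant_lines == [(1, 1)]
# Only one invariant line exists, so refl cannot be the direct sum
# of two nontrivial (necessarily one-dimensional, invariant) pieces.
```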

So we hoped for perhaps too much. However, with seemingly trivial modifications we can make the above example work.

Example In the same example as above, suppose we replace {k} with any field which does not have characteristic {2}. Then {\mathrm{refl}} does decompose.

Proof: Consider the following two subspaces of {V = k^2}:

\displaystyle \begin{aligned} W_1 &= \left\{ \left<a, a\right> \mid a \in k \right\} \\ W_2 &= \left\{ \left<a, -a\right> \mid a \in k \right\}. \end{aligned}

It’s easy to see that {W_1} and {W_2} are both invariant under {\rho}. Moreover, if {\text{char } k \neq 2} then we in fact have

\displaystyle V = W_1 \oplus W_2

because {\left<x,y\right> = \frac{1}{2} \left<x+y, x+y\right> + \frac{1}{2} \left<x-y, y-x\right>} for any {x,y \in k}. So if we let {\rho_1 = (W_1, \cdot_{\rho_1})} be the subrepresentation corresponding to {W_1}, and define {\rho_2} on {W_2} similarly, then we have {\rho = \rho_1 \oplus \rho_2}. \Box
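The splitting formula is easy to test numerically over {\mathbb Q}; a Python sketch (the helper `decompose` is my own name):

```python
from fractions import Fraction as F

# Split <x, y> into its W1 and W2 components using the identity
# <x,y> = (1/2)<x+y, x+y> + (1/2)<x-y, y-x> from the proof.
def decompose(x, y):
    w1 = (F(x + y, 2), F(x + y, 2))
    w2 = (F(x - y, 2), F(y - x, 2))
    return w1, w2

w1, w2 = decompose(3, 7)
assert tuple(a + b for a, b in zip(w1, w2)) == (3, 7)
assert w1[0] == w1[1]               # w1 lies in W1
assert w2[0] == -w2[1]              # w2 lies in W2
```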

Thus the only thing in the way of the counterexample was the fact that {\text{char } k = 2}. And it turns out in general this is the only obstacle, a result called Maschke’s Theorem.

Theorem 2 (Maschke’s Theorem) Suppose that {G} is a finite group, and {\text{char } k} does not divide {\left\lvert G \right\rvert}. Then every finite-dimensional representation decomposes as a direct sum of irreducibles.

Before proceeding to the proof, I’ll draw an analogy with the proof that every positive integer {m} decomposes as a product of primes. There we use strong induction on {m}: if {m} is prime we are done, and if {m} is composite it has a nontrivial divisor {d \mid m}, so we apply the inductive hypothesis to {d} and {m/d} and combine the factorizations. We want to mimic that proof in our proof of Maschke’s Theorem, but we face a new obstacle: we have to show that somehow, we can “divide”.

So why is it that we can divide in certain situations? The idea is that we want to be able to look at an “average” of the form

\displaystyle \frac{1}{\left\lvert G \right\rvert} \sum_{g \in G} g \cdot v

because this average has the nice property of being {G}-invariant. We’ll use this to obtain our proof of Maschke’s Theorem.

Proof: We proceed by induction on the dimension of the representation {\rho}. Let {\rho = (V, \cdot_\rho)} be a representation; if it is irreducible we are done, so assume it has a nontrivial subspace {W} which is {\rho}-invariant. It suffices to prove that there exists a subspace {W' \subset V} such that {W'} is also {\rho}-invariant and {V = W \oplus W'}, because then we can apply the inductive hypothesis to the subrepresentations induced by {W} and {W'}.

Let {\pi : V \rightarrow W} be any projection of {V} onto {W}. We consider the averaging map {T : V \rightarrow V} by

\displaystyle T(v) = \frac{1}{\left\lvert G \right\rvert} \sum_{g \in G} g^{-1} \cdot_\rho \pi(g \cdot_\rho v).

We’ll use the following properties of the map.

Exercise Show that the map {T} has the following three properties.

  • For any {w \in W}, {T(w) = w}.
  • For any {v \in V}, {T(v) \in W}.
  • {T \in \mathrm{Hom}_G(\rho, \rho)}.

The first two properties imply {T \circ T = T}, so {T} is a projection, and as with any projection we must have {V = \ker T \oplus \text{Im } T}. But {\text{Im } T = W}. Moreover, because {T} is a homomorphism of representations, it follows that {\ker T} is {\rho}-invariant. Hence taking {W' = \ker T} completes the proof. \Box
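To see the averaging trick in action, here is a Python sketch (my own helpers, not the author’s code) for {G = S_3} acting on {\mathbb Q^3} by {\mathrm{refl}}, with {W} the sum-zero subspace and {\pi} a deliberately non-equivariant projection onto {W}:

```python
from fractions import Fraction as F
from itertools import permutations

def act(sigma, v):                  # the refl action of S_3 on k^3
    return tuple(v[sigma[i]] for i in range(3))

def inverse(sigma):
    inv = [0] * 3
    for i, s in enumerate(sigma):
        inv[s] = i
    return tuple(inv)

def pi(v):                          # a projection onto W, not equivariant
    return (v[0], v[1], -v[0] - v[1])

def T(v):                           # the averaged map from the proof
    G = list(permutations(range(3)))
    total = (0, 0, 0)
    for g in G:
        t = act(inverse(g), pi(act(g, v)))
        total = tuple(a + b for a, b in zip(total, t))
    return tuple(F(a, len(G)) for a in total)

v = (F(1), F(2), F(6))
assert sum(T(v)) == 0               # T(v) lands in W
w = (F(1), F(1), F(-2))
assert T(w) == w                    # T fixes W pointwise
for g in permutations(range(3)):    # T is a homomorphism of reps
    assert T(act(g, v)) == act(g, T(v))
```

Note that {\pi} alone fails the last check; it is exactly the averaging over {G} that repairs equivariance.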

This completes our proof of Maschke’s Theorem, telling us how representations decompose into irreducibles. Said another way, Maschke’s Theorem tells us that any finite-dimensional representation {\rho} can be decomposed as

\displaystyle \bigoplus_{\rho_\alpha \in \mathrm{Irrep}(G)} \rho_{\alpha}^{\oplus n_\alpha}

where {n_\alpha} is some nonnegative integer, and {\mathrm{Irrep}(G)} is the set of all (isomorphism classes of) irreducible representations.

You may wonder whether the decomposition is unique, and if so what we can say about the “exponents” {n_\alpha}. In the next post I’ll show how to compute the exponents {n_\alpha} (which in particular gives uniqueness).

Thanks to Dennis Gaitsgory, who taught me this in his course Math 55a. Thanks also to the MOPpers at PUMaC 2014 who let me run this by them during a sleepover; several improvements were made to the original draft as a result. My notes for Math 55a can be found at my website. Thanks also to N for pointing out an error in my proof of Maschke’s Theorem.