Uniqueness of Solutions for DiffEq’s

Let {V} be a normed finite-dimensional real vector space and let {U \subseteq V} be an open set. A vector field on {U} is a function {\xi : U \rightarrow V}. (In the words of Gaitsgory: “you should imagine a vector field as a domain, and at every point there is a little vector growing out of it.”)

The idea of a differential equation is as follows. Imagine your vector field specifies a velocity at each point. So you initially place a particle somewhere in {U}, and then let it move freely, guided by the arrows in the vector field. (There are plenty of good pictures online.) Intuitively, for nice {\xi} the resulting trajectory should be unique. This is the main take-away; the proof itself is just for completeness.

This is formalized by the notion of a differential equation:

Definition 1

Let {\gamma : (-\varepsilon, \varepsilon) \rightarrow U} be a differentiable path. We say {\gamma} is a solution to the differential equation defined by {\xi} if for each {t \in (-\varepsilon, \varepsilon)} we have

\displaystyle  \gamma'(t) = \xi(\gamma(t)).

Example 2 (Examples of DE’s)

Let {U = V = \mathbb R}.

  1. Consider the vector field {\xi(x) = 1}. Then the solutions {\gamma} are just {\gamma(t) = t+c}.
  2. Consider the vector field {\xi(x) = x}. Then {\gamma} is a solution exactly when {\gamma'(t) = \gamma(t)}. It’s well-known that the solutions are exactly {\gamma(t) = c\exp(t)}.
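As a quick numerical sanity check (not from the lecture), here is a minimal forward-Euler integrator applied to both examples; the names `euler` and `xi` are ad hoc choices. Starting the second example at {\gamma(0) = 1} should recover {e^t}.

```python
import math

def euler(xi, x0, t_end, n_steps):
    """Approximate gamma(t_end) for gamma'(t) = xi(gamma(t)), gamma(0) = x0,
    by stepping forward with increments h * xi(x)."""
    h = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += h * xi(x)
    return x

# Example 1: xi(x) = 1 gives gamma(t) = t + c; starting at 0 we reach t_end.
print(euler(lambda x: 1.0, 0.0, 2.0, 10))          # very close to 2.0

# Example 2: xi(x) = x gives gamma(t) = c * exp(t); starting at 1 we reach e.
approx = euler(lambda x: x, 1.0, 1.0, 100_000)
print(approx, math.e)                              # agree to several decimals
```

The global error of forward Euler shrinks linearly in the step size, which is why a large `n_steps` is used for the comparison against {e}.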

Of course, you may be used to seeing differential equations which are time-dependent: i.e. something like {\gamma'(t) = t}, for example. In fact, you can hack this to fit in the current model using the idea that time is itself just a dimension. Suppose we want to model {\gamma'(t) = F(\gamma(t), t)}. Then we instead consider

\displaystyle  \xi : V \times \mathbb R \rightarrow V \times \mathbb R \qquad\text{by}\qquad \xi(v, t) = (F(v,t), 1)

and solve the resulting differential equation over {V \times \mathbb R}. This does exactly what we want. Geometrically, this means making time into another dimension and imagining that our particle moves at a “constant speed through time”.
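The time-augmentation trick above can be sketched numerically as well (my own illustration; the function names are made up). We take {F(v,t) = t}, whose exact solution with {\gamma(0) = 0} is {t^2/2}, and integrate the augmented field {\xi(v,t) = (F(v,t), 1)}.

```python
def augmented_euler(F, v0, t_end, n_steps):
    """Integrate gamma'(t) = F(gamma(t), t) by autonomizing: the state is the
    pair (v, t) and the vector field is xi(v, t) = (F(v, t), 1)."""
    h = t_end / n_steps
    v, t = v0, 0.0
    for _ in range(n_steps):
        dv, dt = F(v, t), 1.0      # the particle moves at unit speed through time
        v, t = v + h * dv, t + h * dt
    return v

# gamma'(t) = t with gamma(0) = 0 has exact solution t^2 / 2.
approx = augmented_euler(lambda v, t: t, 0.0, 1.0, 100_000)
print(approx)                      # close to 0.5
```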

The task is then mainly about finding which conditions guarantee that our differential equation behaves nicely. The answer turns out to be:

Definition 3

The vector field {\xi : U \rightarrow V} satisfies the Lipschitz condition if

\displaystyle  \left\lVert \xi(x')-\xi(x'') \right\rVert \le \Lambda \left\lVert x'-x'' \right\rVert

holds for all {x', x'' \in U}, for some fixed constant {\Lambda \ge 0}.

Note that continuously differentiable implies Lipschitz on any compact convex set (such as a closed ball), which is all we will need.
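One can probe the Lipschitz condition numerically by looking at difference quotients {\lVert \xi(x') - \xi(x'') \rVert / \lVert x' - x'' \rVert} (a sketch of mine, not from the lecture). For {\xi(x) = x} the quotient is always {1}, while for {\xi(x) = x^{2/3}} it blows up near the origin, so that field is not Lipschitz on any neighborhood of {0}:

```python
def quotient(xi, a, b):
    """Difference quotient |xi(a) - xi(b)| / |a - b| for a scalar field."""
    return abs(xi(a) - xi(b)) / abs(a - b)

linear = lambda x: x
rough = lambda x: x ** (2 / 3)

print(quotient(linear, 1e-9, 0.0))   # exactly 1.0
print(quotient(rough, 1e-9, 0.0))    # about 1000, and growing as x' -> 0
```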

Theorem 4 (Picard-Lindelöf)

Let {V} be a finite-dimensional real vector space, and let {\xi} be a vector field on a domain {U \subseteq V} which satisfies the Lipschitz condition.

Then for every {x_0 \in U} there exists {\varepsilon > 0} and a path {\gamma : (-\varepsilon,\varepsilon) \rightarrow U} such that {\gamma'(t) = \xi(\gamma(t))} and {\gamma(0) = x_0}. Moreover, if {\gamma_1} and {\gamma_2} are two such solutions and {\gamma_1(t) = \gamma_2(t)} for some {t}, then {\gamma_1 = \gamma_2}.

In fact, Peano’s existence theorem says that if we replace Lipschitz continuity with just continuity, then {\gamma} exists but need not be unique. For example:

Example 5 (Counterexample if {\xi} is not Lipschitz)

Let {U = V = \mathbb R} and consider {\xi(x) = x^{\frac23}}, with {x_0 = 0}. Then {\gamma(t) = 0} and {\gamma(t) = \left( t/3 \right)^3} are both solutions to the differential equation

\displaystyle  \gamma'(t) = \gamma(t)^{\frac 23}.
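Both claimed solutions are easy to verify by hand, but here is a mechanical check as well (my own, with a hypothetical `deriv` helper): we compare a symmetric finite-difference derivative of each path against {\gamma(t)^{2/3}} at a few points {t \ge 0}.

```python
def deriv(gamma, t, h=1e-6):
    """Symmetric finite-difference approximation to gamma'(t)."""
    return (gamma(t + h) - gamma(t - h)) / (2 * h)

zero = lambda t: 0.0                # the constant solution
cubic = lambda t: (t / 3) ** 3      # the nontrivial solution

for t in [0.5, 1.0, 2.0]:
    for gamma in (zero, cubic):
        lhs = deriv(gamma, t)       # gamma'(t)
        rhs = gamma(t) ** (2 / 3)   # xi(gamma(t))
        assert abs(lhs - rhs) < 1e-6
```

Both paths also pass through {x_0 = 0} at {t = 0}, which is exactly the failure of uniqueness.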

Now, for the proof of the main theorem. The main idea is the following result (sometimes called the contraction principle).

Lemma 6 (Banach Fixed-Point Theorem)

Let {(X,d)} be a nonempty complete metric space. Let {f : X \rightarrow X} be a map such that {d(f(x_1), f(x_2)) \le \frac{1}{2} d(x_1, x_2)} for any {x_1, x_2 \in X}. Then {f} has a unique fixed point. (The constant {\frac12} may be replaced by any constant less than {1}.)
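The proof idea is simply to iterate {f} from any starting point; the distances between successive iterates shrink geometrically, so the sequence is Cauchy. A toy illustration on {(\mathbb R, \lvert\cdot\rvert)} (mine, not from the lecture): the map {f(x) = x/2 + 1} contracts distances by exactly {\frac12} and has unique fixed point {2}.

```python
def fixed_point(f, x, tol=1e-12, max_iter=1000):
    """Iterate a contraction f from x until successive iterates agree to tol."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) < tol:
            return fx
        x = fx
    raise RuntimeError("did not converge")

print(fixed_point(lambda x: x / 2 + 1, 0.0))   # converges to 2.0
```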

For the proof of the main theorem, we are given {x_0 \in U}. Let {X} be the metric space of continuous functions from {(-\varepsilon, \varepsilon)} to {\overline{B}(x_0, r)}, the closed ball of radius {r} centered at {x_0}. (Here {r > 0} can be arbitrary, so long as {\overline{B}(x_0, r) \subseteq U}; such an {r} exists since {U} is open.) It turns out that {X} is itself a complete metric space when equipped with the sup metric

\displaystyle  d(f, g) = \sup_{t \in (-\varepsilon, \varepsilon)} \left\lVert f(t)-g(t) \right\rVert.

This is well-defined (the supremum is finite) since {\overline{B}(x_0, r)} is bounded.

We wish to use the Banach theorem on {X}, so we’ll rig a function {\Phi : X \rightarrow X} with the property that its fixed points are solutions to the differential equation. Define it by, for every {\gamma \in X},

\displaystyle  \Phi(\gamma) : t \mapsto x_0 + \int_0^t \xi(\gamma(s)) \; ds.

This function is contrived so that {(\Phi\gamma)(0) = x_0} and {\Phi\gamma} is both continuous and differentiable. By the Fundamental Theorem of Calculus, the derivative is exhibited by

\displaystyle  (\Phi\gamma)'(t) = \left( \int_0^t \xi(\gamma(s)) \; ds \right)' = \xi(\gamma(t)).

In particular, fixed points correspond exactly to solutions to our differential equation.
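The iteration of {\Phi} can actually be carried out by hand in the simplest case. For {\gamma'(t) = \gamma(t)} with {x_0 = 1}, the map is {(\Phi\gamma)(t) = 1 + \int_0^t \gamma(s)\,ds}, and starting from the constant path the iterates are exactly the Taylor partial sums of {e^t}. A small sketch (my own, representing a path by its polynomial coefficients):

```python
import math

def phi(coeffs):
    """Apply (Phi gamma)(t) = 1 + integral_0^t gamma(s) ds to a polynomial
    path given by its coefficient list [a0, a1, a2, ...]."""
    integrated = [c / (k + 1) for k, c in enumerate(coeffs)]  # term-by-term integral
    return [1.0] + integrated                                  # prepend x_0 = 1

gamma = [1.0]                 # start from the constant path gamma(t) = 1
for _ in range(10):
    gamma = phi(gamma)

# After n iterations the coefficients are 1/k! for k <= n, i.e. the
# degree-n Taylor partial sum of e^t.
assert all(abs(c - 1 / math.factorial(k)) < 1e-12
           for k, c in enumerate(gamma))
```

This makes the fixed-point formulation concrete: the unique fixed point of {\Phi} is the path {t \mapsto e^t}, as expected from Example 2.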

A priori this output has signature {\Phi\gamma : (-\varepsilon,\varepsilon) \rightarrow V}, so we need to check that {\Phi\gamma(t) \in \overline{B}(x_0, r)}. We can check that

\displaystyle  \begin{aligned} \left\lVert (\Phi\gamma)(t) - x_0 \right\rVert &=\left\lVert \int_0^t \xi(\gamma(s)) \; ds \right\rVert \\ &\le \left\lvert \int_0^t \left\lVert \xi(\gamma(s)) \right\rVert \; ds \right\rvert \\ &\le \left\lvert t \right\rvert \max_{\left\lvert s \right\rvert \le \left\lvert t \right\rvert} \left\lVert \xi(\gamma(s)) \right\rVert \\ &< \varepsilon \cdot A \end{aligned}

where {A = \max_{x \in \overline{B}(x_0,r)} \left\lVert \xi(x) \right\rVert}; we have {A < \infty} since {\overline{B}(x_0,r)} is compact. Hence by selecting {\varepsilon < r/A}, the above is bounded by {r}, so {\Phi\gamma} indeed maps into {\overline{B}(x_0, r)}. (Note that at this point we have not used the Lipschitz condition, only that {\xi} is continuous.)

It remains to show that {\Phi} is contracting. Write

\displaystyle  \begin{aligned} \left\lVert (\Phi\gamma_1)(t) - (\Phi\gamma_2)(t) \right\rVert &= \left\lVert \int_0^t \left( \xi(\gamma_1(s))-\xi(\gamma_2(s)) \right) ds \right\rVert \\ &\le \left\lvert \int_0^t \left\lVert \xi(\gamma_1(s))-\xi(\gamma_2(s)) \right\rVert ds \right\rvert \\ &\le \left\lvert t \right\rvert \Lambda \sup_{\left\lvert s \right\rvert \le \left\lvert t \right\rvert} \left\lVert \gamma_1(s)-\gamma_2(s) \right\rVert \\ &< \varepsilon\Lambda \, d(\gamma_1, \gamma_2) . \end{aligned}

Hence, shrinking {\varepsilon} further if necessary, we may assume {\varepsilon\Lambda \le \frac{1}{2}}. Since the above holds identically in {t}, this implies

\displaystyle  d(\Phi\gamma_1, \Phi\gamma_2) \le \frac{1}{2} d(\gamma_1, \gamma_2)

as needed.

This is a cleaned-up version of a portion of a lecture from Math 55b in Spring 2015, instructed by Dennis Gaitsgory.
