Appendix#
Throughout these notes, there have been times where a proof or more technical discussion has been omitted due to time limitations. Some of these proofs are included here, for interest. Note that this chapter is non-examinable, with the exception of the statement of Theorem A.3, which has already been stated earlier in the notes (Theorem 5.2).
A.1 Continuous functions are integrable#
Theorem A.1
Let \(f:[a,b]\to \mathbb{R}\) be continuous. Then \(f\) is Riemann integrable.
Proof. Since \(f\) is continuous, it is bounded, by Theorem 3.5, and so we may consider its upper and lower integrals.
Let \(\varepsilon>0\). We aim to show that \(U(f)-L(f)<\varepsilon\). The result will then follow by Proposition 6.1.
Since \(f\) is continuous, for each \(x\in[a,b]\) there exists \(\delta_x>0\) such that
\[|f(x)-f(y)|<\frac{\varepsilon}{b-a}\quad\text{whenever } y\in[a,b] \text{ and } |x-y|<\delta_x.\]
Here, we use the subscript \(x\) to emphasise that \(\delta\) may depend on \(x\).
Claim. \(\delta\) can be chosen independently of \(x\).
Proof of claim. If not, then for each \(n\in\mathbb{N}\) we can choose \(x_n,y_n\in[a,b]\) such that
\[|x_n-y_n|<\frac{1}{n} \tag{1}\]
and
\[|f(x_n)-f(y_n)|\geq\frac{\varepsilon}{b-a}. \tag{2}\]
Now \((x_n)\) is a bounded sequence, and so must have a convergent subsequence. Suppose that \((x_{n_j})\) is a subsequence converging to \(l\). Then by equation (1), we must also have \(y_{n_j}\rightarrow l\). By continuity, \(f(x_{n_j})-f(y_{n_j})\rightarrow 0\), which contradicts (2).
We have shown that there is a single number \(\delta>0\) such that for any \(x,y\in[a,b]\),
\[|x-y|<\delta \implies |f(x)-f(y)|<\frac{\varepsilon}{b-a}.\]
Let \(a=x_0<x_1<x_2<\ldots<x_n=b\) be chosen so that \(x_k-x_{k-1}<\delta\) for \(k=1,2,\ldots,n\) (this is always possible if we take \(n\) large enough), and let \(P=\{x_0,x_1,\ldots,x_n\}\). By continuity, \(f\) attains its maximum and minimum on each interval \([x_{k-1},x_k]\). Let \(c_k,d_k\in[x_{k-1},x_k]\) be such that
\[f(c_k)=\min_{t\in[x_{k-1},x_k]}f(t)\]
and
\[f(d_k)=\max_{t\in[x_{k-1},x_k]}f(t).\]
Note that \(f(c_k)\leq f(d_k)\), and also, since \(0<x_k-x_{k-1}<\delta\), we have \(|c_k-d_k|<\delta\) and hence \(f(d_k)-f(c_k)<\frac{\varepsilon}{b-a}\). The lower and upper sums associated with \(P\) are therefore
\[L(f,P)=\sum_{k=1}^n f(c_k)(x_k-x_{k-1})\quad\text{and}\quad U(f,P)=\sum_{k=1}^n f(d_k)(x_k-x_{k-1}).\]
Hence
\begin{align*} U(f) - L(f) &\leq U(f,P)-L(f,P)\\ &= \sum_{k=1}^n(f(d_k)-f(c_k))(x_k-x_{k-1}) \\ &< \sum_{k=1}^n\frac{\varepsilon}{b-a}(x_k-x_{k-1}) = \varepsilon. \end{align*}
Since \(\varepsilon\) was arbitrary, it follows that \(U(f)=L(f)\), and so \(f\) is Riemann integrable.
A.2 Integration and differentiation under uniform limits#
In Chapter 5, we saw that continuity is preserved under uniform limits. We also saw that, provided certain conditions are met, differentiability is preserved by uniform limits. In this section, we will prove this result, by first proving a similar result about integration.
The following result is called the *uniform limit theorem for integrals*.
Theorem A.2 (Uniform limit theorem for integrals)
Let \(f_n :[a,b]\rightarrow \mathbb{R}\) be a continuous function, for \(n\in\mathbb{N}\). Suppose the sequence \((f_n)\) converges uniformly to a function \(f\). Then
\[\lim_{n\to\infty}\int_a^b f_n(x)\,dx = \int_a^b f(x)\,dx.\]
Proof. By Theorem 5.1, the function \(f\) is continuous, and therefore integrable. Let \(\varepsilon >0\). Since \((f_n)\) converges uniformly to \(f\), there exists \(N\in \mathbb{N}\) such that
\[|f_n(t)-f(t)|<\frac{\varepsilon}{b-a}\]
whenever \(n\geq N\), for all \(t\in [a,b]\).
Hence by Proposition 6.3 and Lemma 6.2, for \(n\geq N\) we have
\[\left|\int_a^b f_n(x)\,dx-\int_a^b f(x)\,dx\right| \leq \int_a^b |f_n(x)-f(x)|\,dx \leq \int_a^b \frac{\varepsilon}{b-a}\,dx = \varepsilon.\]
The result now follows.
Thus we can, *under suitable conditions*, swap limits and integral signs. This is frequently useful.
The result actually still holds if each \(f_n\) is Riemann integrable rather than continuous, but the proof is much more involved, and the above is enough for our purposes.
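As a quick numerical illustration of the theorem, consider the sequence \(f_n(x)=x^n/n\) (an illustrative choice, not from the notes): it converges uniformly to the zero function on \([0,1]\), since \(\sup_{x\in[0,1]}|f_n(x)|=1/n\), so the integrals must tend to \(\int_0^1 0\,dx=0\):

```python
def integrate(f, a, b, steps=10_000):
    # Midpoint rule; accurate enough to see the limiting behaviour.
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# f_n(x) = x^n / n converges uniformly to the zero function on [0, 1],
# so the theorem predicts that the integrals tend to the integral of 0.
for n in (1, 5, 25, 125):
    approx = integrate(lambda x: x ** n / n, 0.0, 1.0)
    print(n, approx)  # exact value is 1/(n(n+1)), which tends to 0
```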
The following example shows that we do need *uniform* convergence to be able to swap limits and integrals.
Example 1
Let \(f_n:[0,1]\to\mathbb{R}\), for \(n\in\mathbb{N}\), be defined by
\[f_n(x)=nx(1-x^2)^n.\]
(i) Show that \((f_n)\) converges pointwise to the zero function.
(ii) Calculate \(\lim_{n\to\infty}\int_0^1 f_n(x)dx.\)
(iii) Does \((f_n)\) converge uniformly?
Solution. (i) Clearly, if \(x=0\) or \(x=1\), then \(f_n(x)=0\) for all \(n\) and so \(f_n(x)\rightarrow 0\). So now suppose \(x\in (0,1)\). Then \(0<1-x^2<1\), so we can find a positive number \(h\) such that \(1-x^2=\frac{1}{1+h}\). Using the binomial expansion for \((1+h)^n\), we see that \((1+h)^n>\frac{n(n-1)}{2}h^2\), so
\[0\leq f_n(x)=\frac{nx}{(1+h)^n}<\frac{nx}{\frac{n(n-1)}{2}h^2}=\frac{2x}{(n-1)h^2}.\]
Since \(\frac{2x}{(n-1)h^2}\to 0\) as \(n\to\infty\), by the sandwich rule, \(f_n(x)\to 0\) as \(n\to \infty\). Thus \((f_n)\) converges pointwise to the zero function.
(ii) Substitute \(u=x^2\), to get
\[\int_0^1 f_n(x)\,dx=\frac{n}{2}\int_0^1(1-u)^n\,du=\frac{n}{2(n+1)}.\]
So
\[\lim_{n\to\infty}\int_0^1 f_n(x)\,dx=\frac{1}{2}.\]
(iii) From the previous parts we see that
\[\lim_{n\to\infty}\int_0^1 f_n(x)\,dx=\frac{1}{2}\neq 0=\int_0^1\lim_{n\to\infty}f_n(x)\,dx.\]
Thus \((f_n)\) cannot converge uniformly (otherwise this example would contradict the uniform limit theorem).
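Both phenomena are easy to observe numerically. The sketch below (assuming the formula \(f_n(x)=nx(1-x^2)^n\) from the example) evaluates \(f_n\) at a fixed point and approximates the integral by the midpoint rule:

```python
def integrate(f, a, b, steps=20_000):
    # Midpoint rule; fine enough to resolve the bump that drifts towards 0.
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

def f(n, x):
    # f_n(x) = n x (1 - x^2)^n, as in the example above.
    return n * x * (1.0 - x * x) ** n

for n in (10, 100, 1000):
    value_at_half = f(n, 0.5)                          # -> 0 (pointwise limit)
    integral = integrate(lambda x: f(n, x), 0.0, 1.0)  # -> 1/2, not 0
    print(n, value_at_half, integral)
```

The mass of \(f_n\) concentrates in an ever-narrower spike near \(0\): each fixed point eventually escapes the spike, but the area under it stays near \(\tfrac12\).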
Our first immediate application is to differentiation. Indeed, it is natural to ask at this point about swapping limits and differentiation.
Theorem A.3 (Uniform limit theorem for differentiation)
Consider differentiable functions \(f_n:[a,b]\rightarrow \mathbb{R}\). Suppose \((f_n)\) converges pointwise to a function \(f\), each \(f_n'\) is continuous and the sequence of derivatives \((f'_n)\) converges uniformly to a function \(g\). Then \(f\) is differentiable, and \(f'=g\).
Proof. By the fundamental theorem of calculus, for each \(x\in [a,b]\)
\[f_n(x)=f_n(a)+\int_a^x f_n'(t)\,dt\]
for each \(n\in \mathbb{N}\).
Since \((f_n)\) has pointwise limit \(f\) on \([a,b]\), and \((f'_n)\) has uniform limit \(g\) on \([a,x]\), using the uniform limit theorem for integrals, this gives
\[f(x)=f(a)+\int_a^x g(t)\,dt\]
for all \(x\in [a,b]\). Since \(g\) is continuous (being a uniform limit of continuous functions), applying the fundamental theorem of calculus, \(f\) is differentiable and
\[f'(x)=g(x)\]
for all \(x\in [a,b]\), as required.
Thus we can, *under suitable conditions*, swap limits and differentiation.
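The uniform convergence of the derivatives really is needed. A standard counterexample (an illustrative choice, not from the notes) is \(f_n(x)=\frac{1}{n}\sin(n^2x)\): it converges uniformly to the zero function, yet the derivatives \(f_n'(x)=n\cos(n^2x)\) do not converge at all:

```python
import math

# f_n(x) = sin(n^2 x) / n converges uniformly to 0 on [0, 1],
# since |f_n(x)| <= 1/n for every x.  But f_n'(x) = n cos(n^2 x)
# is unbounded in n, so (f_n') cannot converge to any function,
# and the limit function 0 tells us nothing about the f_n'.
for n in (1, 10, 100):
    sup_bound = 1.0 / n                # uniform bound on |f_n|
    deriv_at_zero = n * math.cos(0.0)  # f_n'(0) = n
    print(n, sup_bound, deriv_at_zero)
```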
A.3 \(e\) and the exponential function#
Definition 1 (Exponential function)
We define the *exponential function*, \(\exp:\mathbb{R}\to\mathbb{R}\), by
\[\exp(x)=\sum_{n=0}^\infty\frac{x^n}{n!}.\]
In Example 5.7, we proved that this series converges uniformly on bounded intervals, and that \(\exp\) is continuous, infinitely differentiable and satisfies \(\exp'(x)=\exp(x)\) for all \(x\in\mathbb{R}\).
In MAS107 (see Example 3.12 in the MAS107 lecture notes), you saw that the limit
\[\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n\]
exists, and this limit was taken as the definition of the number \(e\).
We would like to prove that \(\exp(x)=e^x\) for all \(x\in\mathbb{R}\). This needs to be done in stages.
Proposition 1
Let \(a,b\in \mathbb{R}\). Then
\[\exp(a+b)=\exp(a)\exp(b).\]
Proof. Let \(c\in \mathbb{R}\). Define \(f:\mathbb{R}\to\mathbb{R}\) by
\[f(x)=\exp(c+x)\exp(-x).\]
Then by the product rule (together with the chain rule and \(\exp'=\exp\)),
\[f'(x)=\exp(c+x)\exp(-x)-\exp(c+x)\exp(-x)=0.\]
So
\[f(x)=A\quad\text{for all }x\in\mathbb{R},\]
where \(A\) is constant. By the series definition, \(\exp (0)=1\), so letting \(x=0\), we get \(A = \exp (c)\). So
\[\exp(c+x)\exp(-x)=\exp(c)\quad\text{for all }x\in\mathbb{R}.\]
Now let \(c=a+b\), \(x=-b\), giving
\[\exp(a)\exp(b)=\exp(a+b),\]
as required.
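Proposition 1 can be sanity-checked numerically straight from the series definition (the truncation at 60 terms is an illustrative choice; for the sample points used, the neglected tail is far below double precision):

```python
import math

def exp_series(x, terms=60):
    # Partial sum of the defining series  exp(x) = sum of x^n / n!.
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # next term: x^(n+1) / (n+1)!
    return total

# Numerical check of exp(a+b) = exp(a)exp(b) at a couple of sample points.
for a, b in ((1.0, 2.0), (-0.5, 3.25)):
    lhs = exp_series(a + b)
    rhs = exp_series(a) * exp_series(b)
    print(a, b, abs(lhs - rhs))  # agrees to floating-point accuracy
```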
Now we relate the exponential function to the limit expression for \(e\).
Proposition 2
Let \(x\in \mathbb{R}\). Then
\[\exp(x)=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n.\]
Proof. We will give the proof for \(x\geq 0\). The case where \(x<0\) can be treated similarly. We write
\[s_n(x)=\sum_{k=0}^n\frac{x^k}{k!}\quad\text{and}\quad t_n(x)=\left(1+\frac{x}{n}\right)^n.\]
One can show that for each \(x\in\mathbb{R}\), \((t_n(x))\) is a monotonic increasing and bounded sequence, so it is convergent.
We want to show that \(\lim_{n\to\infty} t_n(x)=\lim_{n\to\infty} s_n(x)\).
Using the binomial theorem,
\[t_n(x)=\sum_{k=0}^n\binom{n}{k}\frac{x^k}{n^k}=\sum_{k=0}^n\frac{x^k}{k!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right).\]
Thus we see that \(t_n(x)\leq s_n(x)\) for all \(n\) (recall \(x\geq 0\)), and so
\[\lim_{n\to\infty}t_n(x)\leq\lim_{n\to\infty}s_n(x)=\exp(x).\]
On the other hand, we also have, for \(n\geq m\),
\[t_n(x)\geq\sum_{k=0}^m\frac{x^k}{k!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right).\]
So, using the algebra of limits,
\[\lim_{n\to\infty}t_n(x)\geq\sum_{k=0}^m\frac{x^k}{k!}=s_m(x).\]
Now taking the limit as \(m\to\infty\), we have
\[\lim_{n\to\infty}t_n(x)\geq\exp(x).\]
Thus \(\lim_{n\to\infty} t_n(x)=\lim_{n\to\infty} s_n(x)\), as required.
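Numerically, the increase of \(t_n(x)=(1+x/n)^n\) up towards \(\exp(x)\) is easy to observe (the sample point \(x=2\) is an illustrative choice):

```python
import math

x = 2.0  # illustrative sample point
for n in (10, 100, 10_000, 1_000_000):
    t_n = (1.0 + x / n) ** n
    print(n, t_n, math.exp(x) - t_n)  # the gap shrinks towards 0
```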
In particular,
\[e=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n=\exp(1)=\sum_{n=0}^\infty\frac{1}{n!}.\]
Proposition 3
The number \(e\) is irrational and \(2< e\leq 3\).
Proof. We first show that \(2< e\leq 3\). We have
\[e=\exp(1)=\sum_{n=0}^\infty\frac{1}{n!}=1+1+\frac{1}{2!}+\frac{1}{3!}+\cdots.\]
Certainly
\[e>1+1=2.\]
Now \(n! >2^{n-1}\) for \(n\geq 3\), so
\[e<1+1+\frac{1}{2}+\sum_{n=3}^\infty\frac{1}{2^{n-1}}=1+1+\frac{1}{2}+\frac{1}{2}=3,\]
by the formula for the sum of a geometric series.
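The bracketing of \(e\) can be checked numerically from the series (the cut-off at 10 terms is an illustrative choice; the tail is controlled by the geometric bound \(\sum_{n\geq N}1/n!\leq\sum_{n\geq N}1/2^{n-1}=2^{2-N}\) used in the proof):

```python
import math

# Partial sums of the series for e, together with the geometric tail bound.
N = 10
partial, fact = 0.0, 1
for n in range(N):
    partial += 1.0 / fact  # adds 1/n!
    fact *= n + 1
tail_bound = 2.0 ** (2 - N)        # bound on the neglected tail
print(partial, partial + tail_bound)  # e lies between these two numbers
```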
Now we prove \(e\) is irrational. We work by contradiction, so suppose that \(e\) is rational. Since \(e\) is positive, we then have \(e= \frac{a}{b}\) where \(a,b\in \mathbb{N}\). So \(be=a\), and so \(b!e \in \mathbb{N}\).
But
\[b!e=b!\sum_{n=0}^\infty\frac{1}{n!}=\sum_{n=0}^\infty\frac{b!}{n!}.\]
Notice that the terms up to \(b!/b!\) are all integers. Let
\[R=\sum_{n=b+1}^\infty\frac{b!}{n!},\]
so that \(b!e=\left(\sum_{n=0}^{b}\frac{b!}{n!}\right)+R\). Notice that, for \(n>b\),
\[\frac{b!}{n!}=\frac{1}{(b+1)(b+2)\cdots n}\leq\frac{1}{(b+1)^{n-b}}.\]
By the formula for the sum of a geometric series,
\[0<R<\sum_{k=1}^\infty\frac{1}{(b+1)^k}=\frac{1}{b}\leq 1.\]
In particular, \(R\) is not a natural number. Therefore \(b!e\) is not a natural number. But this is a contradiction. So \(e\) is irrational.
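The quantity \(R\) from the proof can be inspected numerically (the sample values of \(b\) are illustrative; the floating-point value of \(e\) is accurate enough here because \(b!\) stays small):

```python
import math

# In the proof, b! e = (integer) + R with 0 < R < 1/b, so b! e can
# never be a natural number.  Illustration for small sample values of b:
for b in (2, 5, 9):
    fact = math.factorial(b)
    integer_part = sum(fact // math.factorial(n) for n in range(b + 1))
    R = fact * math.e - integer_part
    print(b, R, 1.0 / b)  # R lies strictly between 0 and 1/b
```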