Exercise 4.3.15 (i) Show that there is a set \( A \) of reals of cardinality \( \mathfrak{c} \) such that \( A \cap C \) is countable for every closed, nowhere dense set \( C \) . (Such a set \( A \) is called a Lusin set.)
(ii) Show that every Lusin set is a strong measure zero set.
Does \( \mathbf{{CH}} \) hold for coanalytic sets? This cannot be decided in \( \mathbf{{ZFC}} \) . However, in ZFC we can say something about the cardinalities of coanalytic sets: a coanalytic set is either countable or is of cardinality \( {\aleph }_{1} \) or \( \mathfrak{c} \) . We prove these facts now.
Let \( T \) be a well-founded tree on \( \mathbb{N} \) . Recall the definition of the rank function \( {\rho }_{T} : T \rightarrow \mathbf{{ON}} \) given in Chapter 1:
\[
{\rho }_{T}\left( u\right) = \sup \left\{ {{\rho }_{T}\left( v\right) + 1 : u \prec v, v \in T}\right\}, u \in T.
\]
(We take \( \sup \left( \varnothing \right) = 0 \) .) Note that \( {\rho }_{T}\left( u\right) = 0 \) if \( u \) is terminal in \( T \) .
We extend this notion for ill-founded trees too. Let \( T \) be an ill-founded tree and \( s \in {\mathbb{N}}^{ < \mathbb{N}} \) . Define
\[
{\rho }_{T}\left( s\right) = \left\{ \begin{array}{ll} 0 & \text{ if }s \notin T, \\ {\rho }_{{T}_{s}}\left( e\right) & \text{ if }s \in T\& {T}_{s}\text{ is well-founded,} \\ {\omega }_{1} & \text{ otherwise. } \end{array}\right.
\]
Note that \( T \) is well-founded if and only if \( {\rho }_{T}\left( e\right) < {\omega }_{1} \) .
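For finite (hence well-founded) trees, the rank function can be computed directly from the definition. The following sketch is an added illustration, not part of the text; the tree and the function name are invented. Trees are modeled as sets of tuples closed under initial segments.

```python
# Illustration (not from the text): rho_T on a finite tree, with sup(empty) = 0.

def rank(T, u):
    """rho_T(u) = sup{rho_T(v) + 1 : v in T properly extends u}."""
    exts = [v for v in T if len(v) > len(u) and v[:len(u)] == u]
    return max((rank(T, v) + 1 for v in exts), default=0)  # sup(empty) = 0

# The tree {e, (0), (1), (1,0)}; (0) and (1,0) are terminal nodes.
T = {(), (0,), (1,), (1, 0)}
print(rank(T, ()))      # 2
print(rank(T, (0,)))    # 0, since (0) is terminal
```

The root has rank 2 because its longest chain of proper extensions, \( e \prec (1) \prec (1,0) \), has length 2.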
Lemma 4.3.16 Let \( T \) be a tree on \( \mathbb{N} \times \mathbb{N} \) and \( \xi < {\omega }_{1} \) . For every \( s \in {\mathbb{N}}^{ < \mathbb{N}} \) ,
\[
{C}_{s}^{\xi } = \left\{ {\alpha \in {\mathbb{N}}^{\mathbb{N}} : {\rho }_{T\left\lbrack \alpha \right\rbrack }\left( s\right) \leq \xi }\right\}
\]
is Borel.
Proof. We prove the result by induction on \( \xi \) . Note that
\[
{C}_{s}^{0} = \left\{ {\alpha \in {\mathbb{N}}^{\mathbb{N}} : \forall i\left( {\left( {\alpha \mid \left( {\left| s\right| + 1}\right), s\widehat{}i}\right) \notin T}\right) }\right\} .
\]
So, \( {C}_{s}^{0} \) is Borel (in fact closed) for all \( s \) . Since for any countable ordinal \( \xi > 0 \)
\[
{C}_{s}^{\xi } = \mathop{\bigcap }\limits_{i}\mathop{\bigcup }\limits_{{\eta < \xi }}{C}_{{s}^{ \frown }i}^{\eta }
\]
the proof is easily completed by transfinite induction.
Theorem 4.3.17 Every coanalytic set is a union of \( {\aleph }_{1} \) Borel sets.
Proof. Let \( X \) be Polish and \( C \subseteq X \) coanalytic. By the Borel isomorphism theorem (3.3.13), without any loss of generality we may assume that \( X = {\mathbb{N}}^{\mathbb{N}} \) . By 4.1.20, there is a tree \( T \) on \( \mathbb{N} \times \mathbb{N} \) such that
\[
\alpha \in C \Leftrightarrow T\left\lbrack \alpha \right\rbrack \text{is well-founded.}
\]
So,
\[
\alpha \in C \Leftrightarrow {\rho }_{T\left\lbrack \alpha \right\rbrack }\left( e\right) < {\omega }_{1}
\]
Therefore,
\[
C = \mathop{\bigcup }\limits_{{\xi < {\omega }_{1}}}{C}_{e}^{\xi }
\]
where the \( {C}_{e}^{\xi } \) are as in 4.3.16.
The sets \( {C}_{e}^{\xi },\xi < {\omega }_{1} \), defined in the above proof are called the constituents of \( C \) . Since \( \mathbf{{CH}} \) holds for Borel sets, we now have the following result.
Theorem 4.3.18 A coanalytic set is either countable or of cardinality \( {\aleph }_{1} \) or \( \mathfrak{c} \) .
The following question remains: Does \( \mathbf{{CH}} \) hold for coanalytic sets? A related question is: Is there an uncountable coanalytic set that does not contain a perfect set (equivalently, an uncountable Borel set)? Gödel [45] showed that in the universe \( L \) of constructible sets, which is a model of \( \mathbf{{ZFC}} \), there is an uncountable coanalytic set that does not contain a perfect set. (See also [49], p. 529.) On the other hand, under "analytic determinacy" ([53], p. 206) every uncountable coanalytic set contains a perfect set. Hence under this hypothesis every uncountable coanalytic set is of cardinality \( \mathfrak{c} \) . "Analytic determinacy" can be proved from the existence of large cardinals. Thus, the statement "there is an uncountable coanalytic set not containing a perfect set" cannot be decided in ZFC. Any further discussion of this topic is beyond the scope of these notes.
## 4.4 The First Separation Theorem
The separation theorems and the dual results - the reduction theorems - are among the most important results on analytic and coanalytic sets, with far-reaching consequences for Borel sets.
Theorem 4.4.1 (The first separation theorem for analytic sets) Let \( A \) and \( B \) be disjoint analytic subsets of a Polish space \( X \) . Then there is a Borel set \( C \) such that
\[
A \subseteq C\text{ and }B\bigcap C = \varnothing .
\]
\( \left( *\right) \)
(If \( \left( *\right) \) is satisfied, we say that \( C \) separates \( A \) from \( B \) .)
The proof of this theorem is based on the following combinatorial lemma.
Lemma 4.4.2 Suppose \( E = \mathop{\bigcup }\limits_{n}{E}_{n} \) cannot be separated from \( F = \mathop{\bigcup }\limits_{m}{F}_{m} \) by a Borel set. Then there exist \( m, n \) such that \( {E}_{n} \) cannot be separated from \( {F}_{m} \) by a Borel set.
Proof. Suppose for every \( m, n \) there is a Borel set \( {C}_{mn} \) such that
\[
{E}_{n} \subseteq {C}_{mn}\text{ and }{F}_{m}\bigcap {C}_{mn} = \varnothing .
\]
It is fairly easy to check that the Borel set
\[
C = \mathop{\bigcup }\limits_{n}\mathop{\bigcap }\limits_{m}{C}_{mn}
\]
separates \( E \) from \( F \) .
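The set-theoretic identity behind this lemma can be checked on finite sets. The example below is an added illustration with arbitrary invented sets, not part of the proof: given separating sets \( {C}_{mn} \), the combination \( C = \mathop{\bigcup }\limits_{n}\mathop{\bigcap }\limits_{m}{C}_{mn} \) separates the unions.

```python
# Illustration (my own finite sets): if C_mn separates E_n from F_m, then
# C = union over n of (intersection over m of C_mn) separates E from F.

E = [{1, 2}, {3}]          # the pieces E_n
F = [{10, 11}, {12}]       # the pieces F_m

# Any C_mn with E_n subset of C_mn and F_m disjoint from C_mn works;
# here a crude choice that also contains the irrelevant point 99.
C_mn = {(m, n): set(E[n]) | {99} for m in range(len(F)) for n in range(len(E))}

C = set().union(*[
    set.intersection(*[C_mn[(m, n)] for m in range(len(F))])
    for n in range(len(E))
])

E_all = set().union(*E)
F_all = set().union(*F)
print(E_all <= C, C & F_all == set())  # True True
```

If \( e \in {E}_{n} \), then \( e \in {C}_{mn} \) for every \( m \), so \( e \in C \); if \( f \in {F}_{m} \), then \( f \notin {C}_{mn} \) for any \( n \), so \( f \notin C \).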
Proof of 4.4.1. Let \( A \) and \( B \) be two disjoint analytic subsets of \( X \) . Suppose there is no Borel set \( C \) such that
\[
A \subseteq C\text{ and }B\bigcap C = \varnothing .
\]
We shall get a contradiction. Let \( f : {\mathbb{N}}^{\mathbb{N}} \rightarrow A \) and \( g : {\mathbb{N}}^{\mathbb{N}} \rightarrow B \) be continuous surjections. We shall get \( \alpha ,\beta \in {\mathbb{N}}^{\mathbb{N}} \) such that \( f\left( {\sum \left( {\alpha \mid n}\right) }\right) \) cannot be separated from \( g\left( {\sum \left( {\beta \mid n}\right) }\right) \) by a Borel set for any \( n \in \mathbb{N} \) .
We first complete the proof assuming that \( \alpha ,\beta \) satisfying the above properties have been defined. Since \( A \) and \( B \) are disjoint, \( f\left( \alpha \right) \neq g\left( \beta \right) \) . Since \( f \) and \( g \) are continuous, there exist disjoint open sets \( U \) and \( V \) containing \( f\left( \alpha \right) \) and \( g\left( \beta \right) \) respectively. By the continuity of \( f \) and \( g \), there exists an \( n \in \mathbb{N} \) such that \( f\left( {\sum \left( {\alpha \mid n}\right) }\right) \subseteq U \) and \( g\left( {\sum \left( {\beta \mid n}\right) }\right) \subseteq V \) . In particular, \( f\left( {\sum \left( {\alpha \mid n}\right) }\right) \) is separated from \( g\left( {\sum \left( {\beta \mid n}\right) }\right) \) by a Borel set. This is a contradiction.
Definition of \( \alpha ,\beta \) : We proceed by induction.
Since \( A = \bigcup f\left( {\sum \left( n\right) }\right) \) and \( B = \bigcup g\left( {\sum \left( m\right) }\right) \), by 4.4.2 there exist \( \alpha \left( 0\right) \) and \( \beta \left( 0\right) \) such that \( f\left( {\sum \left( {\alpha \left( 0\right) }\right) }\right) \) cannot be separated from \( g\left( {\sum \left( {\beta \left( 0\right) }\right) }\right) \) by a Borel set. Suppose \( \alpha \left( 0\right) ,\alpha \left( 1\right) ,\ldots ,\alpha \left( k\right) \) and \( \beta \left( 0\right) ,\beta \left( 1\right) ,\ldots ,\beta \left( k\right) \) satisfying the above conditions have been defined. Since
\[
f\left( {\sum \left( {\alpha \left( 0\right) ,\alpha \left( 1\right) ,\ldots ,\alpha \left( k\right) }\right) }\right) = \mathop{\bigcup }\limits_{n}f\left( {\sum \left( {\alpha \left( 0\right) ,\alpha \left( 1\right) ,\ldots ,\alpha \left( k\right), n}\right) }\right)
\]
and
\[
g\left( {\sum \left( {\beta \left( 0\right) ,\beta \left( 1\right) ,\ldots ,\beta \left( k\right) }\right) }\right) = \mathop{\bigcup }\limits_{m}g\left( {\sum \left( {\beta \left( 0\right) ,\beta \left( 1\right) ,\ldots ,\beta \left( k\right), m}\right) }\right) ,
\]
by 4.4.2 again we get \( \alpha \left( {k + 1}\right) \) and \( \beta \left( {k + 1}\right) \) with the desired properties. ∎
Theorem 4.4.3 (Souslin) A subset A of a Polish space \( X \) is Borel if and only if it is both analytic and coanalytic; i.e., \( {\mathbf{\Delta }}_{1}^{1}\left( X\right) = {\mathcal{B}}_{X} \) .
Proof. The "only if" part is trivial. Suppose both \( A \) and \( {A}^{c} \) are analytic. Since \( A \) is the only set separating \( A \) from \( {A}^{c} \), the "if" part immediately follows from 4.4.1.
Proposition 4.4.4 Suppose \( {A}_{0},{A}_{1},\ldots \) are pairwise disjoint analytic subsets of a Polish space \( X \) . Then there exist pairwise disjoint Borel sets \( {B}_{0},{B}_{1},\ldots \) such that \( {B}_{n} \supseteq {A}_{n} \) for all \( n \) .
Proof. By 4.4.1, for each \( n \) there is a Borel set \( {C}_{n} \) such that
\[
{A}_{n} \subseteq {C}_{n}\text{ and }{C}_{n} \cap \mathop{\bigcup }\limits_{{m \neq n}}{A}_{m} = \varnothing .
\]
Take
\[
{B}_{n} = {C}_{n} \cap \mathop{\bigcap }\limits_{{m \neq n}}\left( {X \smallsetminus {C}_{m}}\right)
\]
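The disjointification step can be checked on finite sets. The sketch below is an added illustration with invented sets, not part of the proof: removing the other \( {C}_{m} \) from each \( {C}_{n} \) makes the sets pairwise disjoint while keeping \( {A}_{n} \subseteq {B}_{n} \).

```python
# Illustration (my own finite sets) of B_n = C_n minus the union of the
# other C_m: the B_n are pairwise disjoint and still cover the A_n.

A = [{1}, {2}, {3}]
C = [{1, 7}, {2, 7}, {3, 8}]        # C_n contains A_n, C_n misses A_m for m != n

B = [C[n] - set().union(*(C[m] for m in range(3) if m != n))
     for n in range(3)]

print([sorted(b) for b in B])       # [[1], [2], [3, 8]]
print(all(A[n] <= B[n] for n in range(3)))  # True
```

Each \( {A}_{n} \) survives inside \( {B}_{n} \) because \( {A}_{n} \) is disjoint from every \( {C}_{m} \) with \( m \neq n \).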
Theorem 4.4.5 Let \( E \subseteq X \times X \) be an analytic equivalence relation on a Polish space \( X \) . Suppose \( A \) and \( B \) are disjoint analytic subsets of \( X \) . Assume that \( B \) is invariant with respect to \( E \) (i.e., \( B \) is a union of \( E \) - equivalence classes). Then there is an \( E \) -invariant Borel set \( C \) separating \( A \) from \( B \) .
Proof. First we note the following. Let \( D \) be an analytic subset of \( X \) and \( {D}^{ * } \) the smallest invariant set containing \( D \) . Since
\[
{D}^{ * } = {\pi }_{X}\left( {E\bigcap \left( {D \times X}\right) }\right),
\]
\( {D}^{ * } \) is analytic.
Exercise 4.2.5 Show that the discriminant is well-defined. In other words, show that given \( {\omega }_{1},{\omega }_{2},\ldots ,{\omega }_{n} \) and \( {\theta }_{1},{\theta }_{2},\ldots ,{\theta }_{n} \), two integral bases for \( K \), we get the same discriminant for \( K \) .
We can generalize the notion of a discriminant for arbitrary elements of \( K \) . Let \( K/\mathbb{Q} \) be an algebraic number field, a finite extension of \( \mathbb{Q} \) of degree \( n \) . Let \( {\sigma }_{1},{\sigma }_{2},\ldots ,{\sigma }_{n} \) be the embeddings of \( K \) . For \( {a}_{1},{a}_{2},\ldots ,{a}_{n} \in K \) we can define \( {d}_{K/\mathbb{Q}}\left( {{a}_{1},\ldots ,{a}_{n}}\right) = {\left\lbrack \det \left( {\sigma }_{i}\left( {a}_{j}\right) \right) \right\rbrack }^{2} \) .
Exercise 4.2.6 Show that
\[
{d}_{K/\mathbb{Q}}\left( {1, a,\ldots ,{a}^{n - 1}}\right) = \mathop{\prod }\limits_{{i > j}}{\left( {\sigma }_{i}\left( a\right) - {\sigma }_{j}\left( a\right) \right) }^{2}.
\]
We denote \( {d}_{K/\mathbb{Q}}\left( {1, a,\ldots ,{a}^{n - 1}}\right) \) by \( {d}_{K/\mathbb{Q}}\left( a\right) \) .
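As a numerical sanity check (an added example, not from the text), take \( K = \mathbb{Q}\left( \sqrt{5}\right) \) and \( a = \sqrt{5} \); the two embeddings send \( a \) to \( \pm \sqrt{5} \), and both sides of the identity evaluate to 20.

```python
# Illustration (my own example): check the Vandermonde identity for
# K = Q(sqrt(5)), a = sqrt(5), whose embeddings send a to +/- sqrt(5).

import math

s = math.sqrt(5.0)
sigma_a = [s, -s]                      # sigma_1(a), sigma_2(a)

# d_{K/Q}(1, a) = det(sigma_i(a_j))^2 with basis (1, a):
det = 1 * (-s) - 1 * s                 # det of [[1, s], [1, -s]]
d_basis = det ** 2

# Right-hand side: product over i > j of (sigma_i(a) - sigma_j(a))^2.
d_vandermonde = (sigma_a[1] - sigma_a[0]) ** 2

print(round(d_basis), round(d_vandermonde))   # 20 20
```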
Exercise 4.2.7 Suppose that \( {u}_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij}{v}_{j} \) with \( {a}_{ij} \in \mathbb{Q},{v}_{j} \in K \) . Show that \( {d}_{K/\mathbb{Q}}\left( {{u}_{1},{u}_{2},\ldots ,{u}_{n}}\right) = {\left( \det \left( {a}_{ij}\right) \right) }^{2}{d}_{K/\mathbb{Q}}\left( {{v}_{1},{v}_{2},\ldots ,{v}_{n}}\right) . \)
For a module \( M \) with submodule \( N \), we can define the index of \( N \) in \( M \) to be the number of elements in \( M/N \), and denote this by \( \left\lbrack {M : N}\right\rbrack \) . Suppose \( \alpha \) is an algebraic integer of degree \( n \), generating a field \( K \) . We define the index of \( \alpha \) to be the index of \( \mathbb{Z} + \mathbb{Z}\alpha + \cdots + \mathbb{Z}{\alpha }^{n - 1} \) in \( {\mathcal{O}}_{K} \) .
Exercise 4.2.8 Let \( {a}_{1},{a}_{2},\ldots ,{a}_{n} \in {\mathcal{O}}_{K} \) be linearly independent over \( \mathbb{Q} \) . Let \( N = \mathbb{Z}{a}_{1} + \mathbb{Z}{a}_{2} + \cdots + \mathbb{Z}{a}_{n} \) and \( m = \left\lbrack {{\mathcal{O}}_{K} : N}\right\rbrack \) . Prove that
\[
{d}_{K/\mathbb{Q}}\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right) = {m}^{2}{d}_{K}
\]
## 4.3 Examples
Example 4.3.1 Suppose that the minimal polynomial of \( \alpha \) is Eisensteinian with respect to a prime \( p \), i.e., \( \alpha \) is a root of the polynomial
\[
{x}^{n} + {a}_{n - 1}{x}^{n - 1} + \cdots + {a}_{1}x + {a}_{0},
\]
where \( p \mid {a}_{i},0 \leq i \leq n - 1 \) and \( {p}^{2} \nmid {a}_{0} \) . Show that the index of \( \alpha \) is not divisible by \( p \) .
Solution. Let \( M = \mathbb{Z} + \mathbb{Z}\alpha + \cdots + \mathbb{Z}{\alpha }^{n - 1} \) . First observe that since
\[
{\alpha }^{n} + {a}_{n - 1}{\alpha }^{n - 1} + \cdots + {a}_{1}\alpha + {a}_{0} = 0,
\]
then \( {\alpha }^{n}/p \in M \subseteq {\mathcal{O}}_{K} \) . Also, \( \left| {{\mathrm{N}}_{K}\left( \alpha \right) }\right| = \left| {a}_{0}\right| ≢ 0\left( {\;\operatorname{mod}\;{p}^{2}}\right) \) .
We will proceed by contradiction. Suppose \( p \mid \left\lbrack {{\mathcal{O}}_{K} : M}\right\rbrack \) . Then there is an element of order \( p \) in the group \( {\mathcal{O}}_{K}/M \), meaning \( \exists \xi \in {\mathcal{O}}_{K} \) such that \( \xi \notin M \) but \( {p\xi } \in M \) . Then
\[
{p\xi } = {b}_{0} + {b}_{1}\alpha + \cdots + {b}_{n - 1}{\alpha }^{n - 1},
\]
where not all the \( {b}_{i} \) are divisible by \( p \), for otherwise \( \xi \in M \) . Let \( j \) be the least index such that \( p \nmid {b}_{j} \) . Then
\[
\eta = \xi - \left( {\frac{{b}_{0}}{p} + \frac{{b}_{1}}{p}\alpha + \cdots + \frac{{b}_{j - 1}}{p}{\alpha }^{j - 1}}\right)
\]
\[
= \frac{{b}_{j}}{p}{\alpha }^{j} + \frac{{b}_{j + 1}}{p}{\alpha }^{j + 1} + \cdots + \frac{{b}_{n - 1}}{p}{\alpha }^{n - 1}
\]
is in \( {\mathcal{O}}_{K} \), since both \( \xi \) and
\[
\frac{{b}_{0}}{p} + \frac{{b}_{1}}{p}\alpha + \cdots + \frac{{b}_{j - 1}}{p}{\alpha }^{j - 1}
\]
are in \( {\mathcal{O}}_{K} \) .
If \( \eta \in {\mathcal{O}}_{K} \), then of course \( \eta {\alpha }^{n - j - 1} \) is also in \( {\mathcal{O}}_{K} \), and
\[
\eta {\alpha }^{n - j - 1} = \frac{{b}_{j}}{p}{\alpha }^{n - 1} + \frac{{\alpha }^{n}}{p}\left( {{b}_{j + 1} + {b}_{j + 2}\alpha + \cdots + {b}_{n - 1}{\alpha }^{n - j - 2}}\right) .
\]
Since both \( {\alpha }^{n}/p \) and \( \left( {{b}_{j + 1} + {b}_{j + 2}\alpha + \cdots + {b}_{n - 1}{\alpha }^{n - j - 2}}\right) \) are in \( {\mathcal{O}}_{K} \), we conclude that \( \left( {{b}_{j}{\alpha }^{n - 1}}\right) /p \in {\mathcal{O}}_{K} \) .
We know from Lemma 4.1.1 that the norm of an algebraic integer is always a rational integer, so
\[
{\mathrm{N}}_{K}\left( {\frac{{b}_{j}}{p}{\alpha }^{n - 1}}\right) = \frac{{b}_{j}^{n}{\mathrm{\;N}}_{K}{\left( \alpha \right) }^{n - 1}}{{p}^{n}}
\]
\[
= \pm \frac{{b}_{j}^{n}{a}_{0}^{n - 1}}{{p}^{n}}
\]
must be an integer. But \( p \) does not divide \( {b}_{j} \), and \( {p}^{2} \) does not divide \( {a}_{0} \), so this is impossible. This proves that we do not have an element of order \( p \) , and thus \( p \nmid \left\lbrack {{\mathcal{O}}_{K} : M}\right\rbrack \) .
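The valuation count in that last step can be replayed with concrete numbers. The values below are an added illustration of my own choosing (the Eisenstein polynomial \( {x}^{3} - 2 \) at \( p = 2 \)): since \( p \parallel {a}_{0} \) and \( p \nmid {b}_{j} \), the numerator has \( p \)-adic valuation \( n - 1 < n \).

```python
# Illustration (my own numbers): with p | a0, p^2 not dividing a0, and
# p not dividing b_j, the quantity b_j^n * a0^(n-1) / p^n is not an integer,
# since v_p(numerator) = n - 1 < n.

def v_p(m, p):
    """p-adic valuation of a nonzero integer m."""
    k = 0
    while m % p == 0:
        m //= p
        k += 1
    return k

p, n, a0, bj = 2, 3, 2, 5        # f = x^3 - 2 is Eisensteinian at p = 2
num = bj ** n * a0 ** (n - 1)
print(v_p(num, p), num % p ** n == 0)   # 2 False
```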
Exercise 4.3.2 Let \( m \in \mathbb{Z},\alpha \in {\mathcal{O}}_{K} \) . Prove that \( {d}_{K/\mathbb{Q}}\left( {\alpha + m}\right) = {d}_{K/\mathbb{Q}}\left( \alpha \right) \) .
Exercise 4.3.3 Let \( \alpha \) be an algebraic integer, and let \( f\left( x\right) \) be the minimal polynomial of \( \alpha \) . If \( f \) has degree \( n \), show that \( {d}_{K/\mathbb{Q}}\left( \alpha \right) = {\left( -1\right) }^{\left( \begin{matrix} n \\ 2 \end{matrix}\right) }\mathop{\prod }\limits_{{i = 1}}^{n}{f}^{\prime }\left( {\alpha }^{\left( i\right) }\right) \) .
Example 4.3.4 Let \( K = \mathbb{Q}\left( \sqrt{D}\right) \) with \( D \) a squarefree integer. Find an integral basis for \( {\mathcal{O}}_{K} \) .
Solution. An arbitrary element \( \alpha \) of \( K \) is of the form \( \alpha = {r}_{1} + {r}_{2}\sqrt{D} \) with \( {r}_{1},{r}_{2} \in \mathbb{Q} \) . Since \( \left\lbrack {K : \mathbb{Q}}\right\rbrack = 2 \), \( \alpha \) has only one conjugate other than itself: \( {r}_{1} - {r}_{2}\sqrt{D} \) . From Lemma 4.1.1 we know that if \( \alpha \) is an algebraic integer, then \( {\operatorname{Tr}}_{K}\left( \alpha \right) = 2{r}_{1} \) and
\[
{\mathrm{N}}_{K}\left( \alpha \right) = \left( {{r}_{1} + {r}_{2}\sqrt{D}}\right) \left( {{r}_{1} - {r}_{2}\sqrt{D}}\right)
\]
\[
= {r}_{1}^{2} - D{r}_{2}^{2}
\]
are both integers. We note also that since \( \alpha \) satisfies the monic polynomial \( {x}^{2} - 2{r}_{1}x + {r}_{1}^{2} - D{r}_{2}^{2} \), if \( {\operatorname{Tr}}_{K}\left( \alpha \right) \) and \( {\mathrm{N}}_{K}\left( \alpha \right) \) are integers, then \( \alpha \) is an algebraic integer. If \( 2{r}_{1} \in \mathbb{Z} \) where \( {r}_{1} \in \mathbb{Q} \), then the denominator of \( {r}_{1} \) can be at most 2 . We also need \( {r}_{1}^{2} - D{r}_{2}^{2} \) to be an integer, so the denominator of \( {r}_{2} \) can be no more than 2 . Then let \( {r}_{1} = {g}_{1}/2,{r}_{2} = {g}_{2}/2 \), where \( {g}_{1},{g}_{2} \in \mathbb{Z} \) . The second condition amounts to
\[
\frac{{g}_{1}^{2} - D{g}_{2}^{2}}{4} \in \mathbb{Z}
\]
which means that \( {g}_{1}^{2} - D{g}_{2}^{2} \equiv 0\left( {\;\operatorname{mod}\;4}\right) \), or \( {g}_{1}^{2} \equiv D{g}_{2}^{2}\left( {\;\operatorname{mod}\;4}\right) \) .
We will discuss two cases:
Case 1. \( D \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) .
If \( D \equiv 1\left( {\;\operatorname{mod}\;4}\right) \), and \( {g}_{1}^{2} \equiv D{g}_{2}^{2}\left( {\;\operatorname{mod}\;4}\right) \), then \( {g}_{1} \) and \( {g}_{2} \) are either both even or both odd. So if \( \alpha = {r}_{1} + {r}_{2}\sqrt{D} \) is an algebraic integer of \( \mathbb{Q}\left( \sqrt{D}\right) \), then either \( {r}_{1} \) and \( {r}_{2} \) are both integers, or they are both fractions with denominator 2.
We recall from Chapter 3 that if \( 4 \mid \left( {-D + 1}\right) \), then \( \left( {1 + \sqrt{D}}\right) /2 \) is an algebraic integer. This suggests that we use \( 1,\left( {1 + \sqrt{D}}\right) /2 \) as a basis; it is clear from the discussion above that this is in fact an integral basis.
Case 2. \( D \equiv 2,3\left( {\;\operatorname{mod}\;4}\right) \) .
If \( {g}_{1}^{2} \equiv D{g}_{2}^{2}\left( {\;\operatorname{mod}\;4}\right) \), then both \( {g}_{1} \) and \( {g}_{2} \) must be even. Then a basis for \( {\mathcal{O}}_{K} \) is \( 1,\sqrt{D} \) ; again it is clear that this is an integral basis.
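The dividing line between the two cases can be verified mechanically. The check below is an added illustration (names invented): \( \left( {1 + \sqrt{D}}\right) /2 \) is a root of \( {x}^{2} - x + \left( {1 - D}\right) /4 \), which has integer coefficients exactly when \( D \equiv 1\left( {\;\operatorname{mod}\;4}\right) \).

```python
# Illustration (my own helper): (1 + sqrt(D))/2 is an algebraic integer
# iff its minimal polynomial x^2 - x + (1 - D)/4 has integer coefficients,
# i.e. iff D = 1 (mod 4).

def is_integral(D):
    """Does x^2 - x + (1 - D)/4 have integer coefficients?"""
    return (1 - D) % 4 == 0

print([D for D in (-3, 2, 3, 5, 13) if is_integral(D)])  # [-3, 5, 13]
```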
Exercise 4.3.5 If \( D \equiv 1\left( {\;\operatorname{mod}\;4}\right) \), show that every integer of \( \mathbb{Q}\left( \sqrt{D}\right) \) can be written as \( \left( {a + b\sqrt{D}}\right) /2 \) where \( a \equiv b\left( {\;\operatorname{mod}\;2}\right) \) .
Example 4.3.6 Let \( K = \mathbb{Q}\left( \alpha \right) \) where \( \alpha = {r}^{1/3}, r = a{b}^{2} \in \mathbb{Z} \) where \( {ab} \) is squarefree. If \( 3 \mid r \), assume that \( 3 \mid a,3 \nmid b \) . Find an integral basis for \( K \) .
Solution. The minimal polynomial of \( \alpha \) is \( f\left( x\right) = {x}^{3} - r \), and \( \alpha \) ’s conjugates are \( \alpha ,{\omega \alpha } \), and \( {\omega }^{2}\alpha \) where \( \omega \) is a primitive cube root of unity. By Exercise 4.3.3,
\[
{d}_{K/\mathbb{Q}}\left( \alpha \right) = - \mathop{\prod }\limits_{{i = 1}}^{3}{f}^{\prime }\left( {\alpha }^{\left( i\right) }\right) = - {3}^{3}{r}^{2}.
\]
So \( - {3}^{3}{r}^{2} = {m}^{2}{d}_{K} \) where \( m = \left\lbrack {{\mathcal{O}}_{K} : \mathbb{Z} + \mathbb{Z}\alpha + \mathbb{Z}{\alpha }^{2}}\right\rbrack \) . We note that \( f\left( x\right) \) is Eisensteinian for every prime divisor of \( a \) so by Example 4.3.1 if \( p \mid a \) , \( p \nmid m \) . Thus if \( 3 \mid a,{27}{a}^{2} \mid {d}_{K} \), and if \( 3 \nmid a \), then \( 3{a}^{2} \mid {d}_{K} \) .
We now consider \( \beta = {\a
Lemma 3.2. \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( x\right) \) exists in \( \mathfrak{A} \) and depends only on \( x \) and \( \Phi \), not on the choice of \( \left\{ {f}_{n}\right\} \) .
We need
*EXERCISE 3.2. Let \( x \in \mathfrak{A} \), let \( \Omega \) be an open set containing \( \sigma \left( x\right) \), and let \( f \) be a rational function holomorphic in \( \Omega \) .
Choose an open set \( {\Omega }_{1} \) with
\[
\sigma \left( x\right) \subset {\Omega }_{1} \subset {\bar{\Omega }}_{1} \subset \Omega
\]
whose boundary \( \gamma \) is the union of finitely many simple closed polygonal curves. Then
(4)
\[
f\left( x\right) = \frac{1}{2\pi i}{\int }_{\gamma }f\left( t\right) \cdot {\left( t - x\right) }^{-1}{dt}.
\]
Proof of Lemma 3.2. Choose \( \gamma \) as in Exercise 3.2. Then
\[
\begin{Vmatrix}{{f}_{n}\left( x\right) - \frac{1}{2\pi i}{\int }_{\gamma }\frac{\Phi \left( t\right) {dt}}{t - x}}\end{Vmatrix} = \begin{Vmatrix}{\frac{1}{2\pi i}{\int }_{\gamma }\frac{{f}_{n}\left( t\right) - \Phi \left( t\right) }{t - x}{dt}}\end{Vmatrix}
\]
\[
\leq \frac{1}{2\pi }{\int }_{\gamma }\left| {{f}_{n}\left( t\right) - \Phi \left( t\right) }\right| \begin{Vmatrix}{\left( t - x\right) }^{-1}\end{Vmatrix}\left| {dt}\right|
\]
\( \rightarrow 0 \) as \( n \rightarrow \infty \), since \( \begin{Vmatrix}{\left( t - x\right) }^{-1}\end{Vmatrix} \) is bounded on \( \gamma \) while \( {f}_{n} \rightarrow \Phi \) uniformly on \( \gamma \) . Thus
(5)
\[
\mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( x\right) = \frac{1}{2\pi i}{\int }_{\gamma }\frac{\Phi \left( t\right) {dt}}{t - x}.
\]
Now let \( \left\{ {F}_{n}\right\} \) be a sequence in \( H\left( \Omega \right) \) . We write
\[
{F}_{n} \rightarrow F\text{ in }H\left( \Omega \right)
\]
if \( {F}_{n} \) tends to \( F \) uniformly on compact sets in \( \Omega \) .
Theorem 3.3. Let \( \mathfrak{A} \) be a Banach algebra, \( x \in \mathfrak{A} \), and let \( \Omega \) be an open set containing \( \sigma \left( x\right) \) . Then there exists a map \( \tau : H\left( \Omega \right) \rightarrow \mathfrak{A} \) such that the following holds. We write \( F\left( x\right) \) for \( \tau \left( F\right) \) :
(a) \( \tau \) is an algebraic homomorphism.
(b) If \( {F}_{n} \rightarrow F \) in \( H\left( \Omega \right) \), then \( {F}_{n}\left( x\right) \rightarrow F\left( x\right) \) in \( \mathfrak{A} \) .
(c) \( \widehat{F\left( x\right) } = F\left( \widehat{x}\right) \) for all \( F \in H\left( \Omega \right) \) .
(d) If \( F \) is the identity function, \( F\left( x\right) = x \) .
(e) With \( \gamma \) as earlier, if \( F \in H\left( \Omega \right) \) ,
\[
F\left( x\right) = \frac{1}{2\pi i}{\int }_{\gamma }\frac{F\left( t\right) {dt}}{t - x}.
\]
Properties (a),(b), and (d) define \( \tau \) uniquely.
Note. Theorem 3.1 is contained in this result.
Proof. Fix \( F \in H\left( \Omega \right) \) . Choose a sequence of rational functions \( \left\{ {f}_{n}\right\} \in H\left( \Omega \right) \) with \( {f}_{n} \rightarrow F \) in \( H\left( \Omega \right) \) . By Lemma 3.2
(6)
\[
\mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( x\right)
\]
exists in \( \mathfrak{A} \) . We define this limit to be \( F\left( x\right) \) and \( \tau \) to be the map \( F \rightarrow F\left( x\right) \) .
\( \tau \) is evidently a homomorphism when restricted to rational functions. Equation (6) then yields (a). Similarly, (c) holds for rational functions and so by (6) in general. Part (d) follows from (6).
Part (e) coincides with (5). Part (b) comes from (e) by direct computation.
Suppose now that \( {\tau }^{\prime } \) is a map from \( H\left( \Omega \right) \) to \( \mathfrak{A} \) satisfying (a),(b), and (d).
By (a) and (d), \( {\tau }^{\prime } \) and \( \tau \) agree on rational functions. By (b), then \( {\tau }^{\prime } = \tau \) on \( H\left( \Omega \right) \) .
We now consider some consequences of Theorem 3.3 as well as some related questions.
Let \( \mathfrak{A} \) be a Banach algebra. By a nontrivial idempotent \( e \) in \( \mathfrak{A} \) we mean an element \( e \) with \( {e}^{2} = e \), \( e \) not the zero element or the identity. Suppose that \( e \) is such an element. Then \( 1 - e \) is another. \( e \) is not in the radical (why?), so \( \widehat{e} ≢ 0 \) on \( \mathcal{M} \) . Similarly, \( \widehat{1 - e} ≢ 0 \), so \( \widehat{e} ≢ 1 \) . But \( {\widehat{e}}^{2} = \widehat{e} \), so \( \widehat{e} \) takes on only the values 0 and 1 on \( \mathcal{M} \) . It follows that \( \mathcal{M} \) is disconnected.
Question. Does the converse hold? That is, if \( \mathcal{M} \) is disconnected, must \( \mathfrak{A} \) contain a nontrivial idempotent?
At this moment, we can prove only a weaker result.
Corollary. Assume there is an element \( x \) in \( \mathfrak{A} \) such that \( \sigma \left( x\right) \) is disconnected. Then \( \mathfrak{A} \) contains a nontrivial idempotent.
Proof. \( \sigma \left( x\right) = {K}_{1} \cup {K}_{2} \), where \( {K}_{1},{K}_{2} \) are disjoint closed sets. Choose disjoint open sets \( {\Omega }_{1} \) and \( {\Omega }_{2} \) ,
\[
{K}_{1} \subset {\Omega }_{1},\;{K}_{2} \subset {\Omega }_{2}.
\]
Put \( \Omega = {\Omega }_{1} \cup {\Omega }_{2} \) . Define \( F \) on \( \Omega \) by
\[
F = 1\text{ on }{\Omega }_{1},\;F = 0\text{ on }{\Omega }_{2}.
\]
Then \( F \in H\left( \Omega \right) \) . Put
\[
e = F\left( x\right) .
\]
By Theorem 3.3,
\[
{e}^{2} = {F}^{2}\left( x\right) = F\left( x\right) = e
\]
and
\[
\widehat{e} = F\left( \widehat{x}\right) = \left\{ \begin{array}{ll} 1 & \text{ on }{\widehat{x}}^{-1}\left( {K}_{1}\right) , \\ 0 & \text{ on }{\widehat{x}}^{-1}\left( {K}_{2}\right) . \end{array}\right.
\]
Hence \( e \) is a nontrivial idempotent.
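In a matrix algebra the idempotent \( e = F\left( x\right) \) can be computed numerically from formula (e) of Theorem 3.3. The sketch below is an added illustration with an arbitrary \( 2 \times 2 \) matrix of my choosing: integrating the resolvent \( {\left( t - x\right) }^{-1} \) around a contour enclosing only one part of the spectrum produces a nontrivial idempotent.

```python
# Illustration (my own example): the Riesz idempotent e = F(x) in the
# Banach algebra of 2x2 matrices, via the contour integral of Theorem 3.3(e)
# with F = 1 near one spectral component and F = 0 near the other.

import numpy as np

x = np.array([[1.0, 2.0],
              [0.0, 5.0]])            # spectrum {1, 5}, disconnected

# Riemann sum over a circle of radius 1 centered at the eigenvalue 1
# (the contour encloses 1 but not 5).
N = 2000
e = np.zeros((2, 2), dtype=complex)
for k in range(N):
    t = 1.0 + np.exp(2j * np.pi * k / N)
    dt = 2j * np.pi / N * (t - 1.0)   # gamma'(s) ds for t(s) = 1 + e^{is}
    e += np.linalg.solve(t * np.eye(2) - x, np.eye(2)) * dt
e /= 2j * np.pi

print(np.allclose(e @ e, e, atol=1e-6))   # True: e is idempotent
```

Here \( e \) is neither 0 nor the identity, since it projects onto the spectral subspace for the eigenvalue 1 only.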
EXERCISE 3.3. Let \( B \) be a Banach space and \( T \) a bounded linear operator on \( B \) having disconnected spectrum. Then, there exists a bounded linear operator \( E \) on \( B, E \neq 0, E \neq I \), such that \( {E}^{2} = E \) and \( E \) commutes with \( T \) .
EXERCISE 3.4. Let \( \mathfrak{A} \) be a Banach algebra. Assume that \( \mathcal{M} \) is a finite set. Then there exist idempotents \( {e}_{1},{e}_{2},\ldots ,{e}_{n} \in \mathfrak{A} \) with \( {e}_{i} \cdot {e}_{j} = 0 \) if \( i \neq j \) and with \( \mathop{\sum }\limits_{{i = 1}}^{n}{e}_{i} = 1 \) such that the following holds:
Every \( x \) in \( \mathfrak{A} \) admits a representation
\[
x = \mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{e}_{i} + \rho
\]
where the \( {\lambda }_{i} \) are scalars and \( \rho \) is in the radical.
Note. Exercise 3.4 contains the following classical fact: If \( \alpha \) is an \( n \times n \) matrix with complex entries, then there exist commuting matrices \( \beta \) and \( \gamma \) with \( \beta \) nilpotent, \( \gamma \) diagonalizable, and
\[
\alpha = \beta + \gamma
\]
To see this, put \( \mathfrak{A} = \) algebra of all polynomials in \( \alpha \), normed so as to be a Banach algebra, and apply the exercise.
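A concrete instance of the classical fact quoted in the Note (an added illustration, with a matrix of my choosing): a single Jordan block splits as a diagonalizable part plus a nilpotent part, and the two commute. For one Jordan block the diagonal part is a scalar matrix, so commutativity is immediate; the general case needs the full decomposition.

```python
# Illustration (my own example): alpha = beta + gamma with beta nilpotent,
# gamma diagonalizable, and beta, gamma commuting.

import numpy as np

alpha = np.array([[2.0, 1.0],
                  [0.0, 2.0]])        # a Jordan block
gamma = np.diag(np.diag(alpha))       # diagonalizable (here scalar) part
beta = alpha - gamma                  # nilpotent part

print(np.allclose(beta @ beta, 0))              # True: beta^2 = 0
print(np.allclose(beta @ gamma, gamma @ beta))  # True: they commute
```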
We consider another problem. Given a Banach algebra \( \mathfrak{A} \) and an invertible element \( x \in \mathfrak{A} \), when can we find \( y \in \mathfrak{A} \) so that
\[
x = {e}^{y}?
\]
There is a purely topological necessary condition: There must exist \( f \) in \( C\left( \mathcal{M}\right) \) so that
\[
\widehat{x} = {e}^{f}\text{ on }\mathcal{M}.
\]
(Think of an example where this condition is not satisfied.)
We can give a sufficient condition:
Corollary. Assume that \( \sigma \left( x\right) \) is contained in a simply connected region \( \Omega \), where \( 0 \notin \Omega \) . Then there is a \( y \) in \( \mathfrak{A} \) with \( x = {e}^{y} \) .
Proof. Let \( \Phi \) be a single-valued branch of \( \log z \) defined in \( \Omega \) . Put \( y = \Phi \left( x\right) \) .
\[
\mathop{\sum }\limits_{0}^{N}\frac{{\Phi }^{n}}{n!} \rightarrow {e}^{\Phi } = z\text{ in }H\left( \Omega \right) ,\;\text{ as }N \rightarrow \infty .
\]
Hence by Theorem 3.3(b),
\[
\left( {\mathop{\sum }\limits_{0}^{N}\frac{{\Phi }^{n}}{n!}}\right) \left( x\right) \rightarrow x
\]
By (a) the left side equals
\[
\mathop{\sum }\limits_{0}^{N}\frac{{\left( \Phi \left( x\right) \right) }^{n}}{n!} \rightarrow {e}^{y}
\]
Hence \( {e}^{y} = x \) .
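The same construction can be carried out numerically in a matrix algebra. The sketch below is an added illustration with a matrix of my choosing whose spectrum lies in the right half-plane (a simply connected region avoiding 0): build \( y = \Phi \left( x\right) \) from an eigendecomposition and recover \( x \) as partial sums of the exponential series.

```python
# Illustration (my own example): x = e^y with y = Phi(x), Phi a branch of
# log z, for a matrix with spectrum away from 0.

import numpy as np

x = np.array([[4.0, 1.0],
              [0.0, 9.0]])            # spectrum {4, 9}

w, P = np.linalg.eig(x)
y = (P @ np.diag(np.log(w)) @ np.linalg.inv(P)).real   # y = Phi(x)

# e^y via partial sums of the exponential series sum_n y^n / n!.
term = np.eye(2)
expy = np.eye(2)
for n in range(1, 30):
    term = term @ y / n
    expy = expy + term

print(np.allclose(expy, x))           # True: e^y = x
```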
To find complete answers to the questions about existence of idempotents and representation of elements as exponentials, we need some more machinery.
We shall develop this machinery, concerning differential forms and the \( \bar{\partial } \) - operator, in the next three sections. We shall then use the machinery to set up an operational calculus in several variables for Banach algebras, to answer the above questions, and to attack various other problems.
## NOTES
Theorem 3.3 has a long history. See E. Hille and R. S. Phillips, Functional analysis and semi-groups, Am. Math. Soc. Coll. Publ. XXXI, 1957, Chap. V. In the form given here, it is part of Gelfand's theory [Ge]. For the result on idempotents and related results, see Hille and Phillips, loc. cit.
## 4 Differential Forms
Note. The proofs of all lemmas in this section are left as exercises.
The notion of differential form is defined for arbitrary differentiable manifolds. For our purposes, it will suffice to study differential forms on an open subset \( \Omega \) of real Euclidean \( N \) -space \( {\mathbb{R}}^{N} \) . Fix such an \( \Omega \) . Denote by \( {x}_{1},\ldots ,{x}_{N} \) the coordinates in \( {\mathbb{R}}^{N} \) .
Definition 4.1. \( {C}^{\infty }\left( \Omega \right) = \) algebra of all infinitely differentiable complex-valued functions on \( \Omega \) .
We write \( {C}^{\infty } \) for \( {C}^{\infty }\left( \Omega \right) \) .
Definition 4.2. Fix \( x \in \Omega \) . \( {T}_{x} \) is the collection of all maps \( v : {C}^{\infty } \rightarrow \mathbb{C} \) for which
(a) \( v \) is linear.
(b) \( v\left( {f \cdot g}\right) = f\left( x\right) \cdot v\left( g\right) + g\left( x\right) \cdot v\left( f\right) \) for all \( f, g \in {C}^{\infty } \) .
Exercise 4.4.7. Let \( X \) be a compact metric space, and assume that \( {\nu }_{n} \rightarrow \mu \) in the weak*-topology on \( \mathcal{M}\left( X\right) \) . Show that for a Borel set \( B \) with \( \mu \left( {\partial B}\right) = 0 \) ,
\[
\mathop{\lim }\limits_{{n \rightarrow \infty }}{\nu }_{n}\left( B\right) = \mu \left( B\right)
\]
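A concrete instance of the exercise (an added illustration, not from the text): the uniform measures \( {\nu }_{n} \) on \( \left\{ {k/n : 0 \leq k < n}\right\} \) converge weak* to Lebesgue measure \( \mu \) on \( \left\lbrack {0,1}\right\rbrack \), and for \( B = \left\lbrack {0,1/2}\right\rbrack \) we have \( \mu \left( {\partial B}\right) = \mu \left( {\left\{ {0,1/2}\right\} }\right) = 0 \), so \( {\nu }_{n}\left( B\right) \rightarrow 1/2 \).

```python
# Illustration (my own example): nu_n = uniform measure on {k/n} gives
# mass to B = [0, 1/2] tending to mu(B) = 1/2 as n grows.

def nu_n_of_B(n, lo=0.0, hi=0.5):
    """Mass the uniform measure on {k/n : 0 <= k < n} gives to [lo, hi]."""
    return sum(1 for k in range(n) if lo <= k / n <= hi) / n

print(abs(nu_n_of_B(10**6) - 0.5) < 1e-5)   # True
```

The boundary condition matters: for \( B = \left\{ {k/m}\right\} \) a finite set of atoms of some \( {\nu }_{n} \), the limit can fail.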
## Notes to Chap. 4
(48) (Page 98) The fact that \( {\mathcal{M}}^{T}\left( X\right) \) is non-empty may also be seen as a result of various fixed-point theorems that generalize the Brouwer fixed point theorem to an infinite-dimensional setting; the argument used in Sect. 4.1 is attractive because it is elementary and is connected directly to the dynamics.
(49) (Page 103) A convenient source for the Choquet representation theorem is the updated lecture notes by Phelps [283]; the original papers are those of Choquet \( \left\lbrack {{55},{56}}\right\rbrack \) .
(50) (Page 103) Notice that the space of invariant measures for a given continuous map is a topological attribute rather than a measurable one: measurably isomorphic systems may have entirely unrelated spaces of invariant measures. In particular, the Jewett-Krieger theorem shows that any ergodic measure-preserving system \( \left( {X,\mathcal{B},\mu, T}\right) \) on a Lebesgue space is measurably isomorphic to a minimal, uniquely ergodic homeomorphism on a compact metric space (a continuous map on a compact metric space is called minimal if every point has a dense orbit; see Exercise 4.2.1). This deep result was found by Jewett [166] for weakly-mixing transformations, and was extended to ergodic systems by Krieger [213] using his proof of the existence of generators [212]. Thus having a model (up to measurable isomorphism) as a uniquely ergodic map on a compact metric space carries no information about a given measurable dynamical system. Among the many extensions and modifications of this important result, Bellow and Furstenberg [22], Hansel and Raoult [140] and Denker [69] gave different proofs; Jakobs [164] and Denker and Eberlein [70] extended the result to flows; Lind and Thouvenot [231] showed that any finite entropy ergodic transformation is isomorphic to a homeomorphism of the torus \( {\mathbb{T}}^{2} \) preserving Lebesgue measure; Lehrer [222] showed that the homeomorphism can always be chosen to be topologically mixing (a homeomorphism \( S : Y \rightarrow Y \) of a compact metric space is topologically mixing if for any open sets \( U, V \subseteq Y \), there is an \( N = N\left( {U, V}\right) \) with \( U \cap {S}^{n}V \neq \varnothing \) for \( n \geq N \) ); Weiss [379] extended to certain group actions and to diagrams of measure-preserving systems; Rosenthal [317] removed the assumption of invertibility. 
In a different direction, Downarowicz [74] has shown that every possible Choquet simplex arises as the space of invariant measures of a map even in a highly restricted class of continuous maps.
(51) (Page 104) Birkhoff's recurrence theorem may be thought of as a topological analog of Poincaré recurrence (Theorem 2.11), with the essential hypothesis of finite measure replaced by compactness. Furstenberg and Weiss [109] showed that there is also a topological analog of the ergodic multiple recurrence theorem (Theorem 7.4): if \( \left( {X, T}\right) \) is minimal and \( U \subseteq X \) is open and non-empty, then for any \( k > 1 \) there is some \( n \geq 1 \) with
\[
U \cap {T}^{n}U \cap \cdots \cap {T}^{\left( {k - 1}\right) n}U \neq \varnothing .
\]
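The conclusion can be watched numerically for the irrational rotation \( {Tx} = x + \alpha \left( {\;\operatorname{mod}\;1}\right) \), which is minimal. The choices \( \alpha = \sqrt{2} \), \( U = \left( {0,{0.1}}\right) \) and \( k = 3 \) below are purely illustrative; a brute-force search finds an \( n \geq 1 \) with \( U \cap {T}^{n}U \cap {T}^{2n}U \neq \varnothing \) .

```python
import math

ALPHA = math.sqrt(2)   # T x = x + α (mod 1) is a minimal homeomorphism of the circle
U = (0.0, 0.1)         # a non-empty open set

def frac(t):
    return t - math.floor(t)

def witness(k=3, limit=2000):
    """Search for n ≥ 1 and a point x with x, T^n x, ..., T^{(k-1)n} x all in U."""
    for n in range(1, limit):
        x = 0.01
        if all(U[0] < frac(x + j * n * ALPHA) < U[1] for j in range(k)):
            return n, x
    return None

n, x = witness()
print(n)   # some n realising the triple intersection
```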
(52) (Page 110) This characterization is due to Pjateckiĭ-Šapiro [285], who showed it as a property characterizing normality for orbits under the map \( x \mapsto {ax}\left( {\;\operatorname{mod}\;1}\right) \) .
(53) (Page 110) The theory of equidistribution from the viewpoint of number theory is a large and sophisticated one. Extensive overviews of this theory in three different decades may be found in the monographs of Kuipers and Niederreiter [215], Hlawka [154], and Drmota and Tichy [75].
(54) (Page 111) The formulation in (2) is the Weyl criterion for equidistribution; it appears in his paper [381]. Weyl really established the principle that equidistribution can be shown using a sufficiently rich set of test functions; in particular on a compact group it is sufficient to use an appropriate orthonormal basis of \( {L}^{2} \) . Thus a more general formulation of the Weyl criterion is as follows. Let \( G \) be a compact metrizable group and let \( {G}^{\sharp } \) denote the set of conjugacy classes in \( G \) . Then a sequence \( \left( {g}_{n}\right) \) of elements of \( {G}^{\sharp } \) is equidistributed with respect to Haar measure if and only if
\[
\mathop{\sum }\limits_{{j = 1}}^{n}\operatorname{tr}\left( {\pi \left( {g}_{j}\right) }\right) = \mathrm{o}\left( n\right)
\]
as \( n \rightarrow \infty \), for any non-trivial irreducible unitary representation \( \pi : G \rightarrow {\mathrm{{GL}}}_{k}\left( \mathbb{C}\right) \) . For more about equidistribution in the number-theoretic context, see the monograph of Iwaniec and Kowalski \( \left\lbrack {{162},\text{ Ch. }{21}}\right\rbrack \) .
(55) (Page 112) This equidistribution result was proved independently by several people, including Weyl [380], Bohl [39] and Sierpiński [344].
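For the circle the Weyl criterion specializes to the classical exponential sums: \( \left( {j\alpha }\right) \) is equidistributed modulo 1 if and only if \( \frac{1}{n}\mathop{\sum }\limits_{{j = 1}}^{n}{\mathrm{e}}^{{2\pi ikj\alpha }} \rightarrow 0 \) for every integer \( k \neq 0 \) . A quick numerical check, with the illustrative choice \( \alpha = \sqrt{2} \) :

```python
import cmath
import math

def weyl_sum(alpha, k, n):
    """|(1/n) Σ_{j=1}^{n} e^{2πi k j α}|: small for irrational α and k ≠ 0."""
    s = sum(cmath.exp(2j * cmath.pi * k * j * alpha) for j in range(1, n + 1))
    return abs(s) / n

alpha = math.sqrt(2)
print([weyl_sum(alpha, k, 20000) for k in (1, 2, 3)])  # all very small
```

For irrational \( \alpha \) the geometric-series bound \( \left| {\sum {\mathrm{e}}^{{2\pi ikj\alpha }}}\right| \leq 2/\left| {1 - {\mathrm{e}}^{{2\pi ik\alpha }}}\right| \) makes the decay explicit.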
## Chapter 5 Conditional Measures and Algebras
In this chapter we provide some more background in measure theory, which will be used frequently in the rest of the book. One of the most fundamental notions of averaging (in the sense of probability rather than ergodic theory) is afforded by the notion of conditional expectation. Recall that in probability the possible events are the measurable sets \( A, B, C,\ldots \) in a measure space \( \left( {X,\mathcal{B},\mu }\right) \) with \( \mu \left( X\right) = 1 \) . The probability of the event \( A \) is \( \mu \left( A\right) \), and the conditional probability of \( A \) given that an event \( B \) with \( \mu \left( B\right) > 0 \) has occurred, \( \mu \left( {A \mid B}\right) \), is given by \( \frac{\mu \left( {A \cap B}\right) }{\mu \left( B\right) } \) . It is useful to extend this notion to sub- \( \sigma \) -algebras of \( \mathcal{B} \) . This turns out to provide a flexible tool for dealing with probabilities (measures) conditioned on events (measurable sets) that are allowed to be very unlikely. In fact, with some care, we will allow conditioning on events corresponding to null sets.
## 5.1 Conditional Expectation
Theorem 5.1. Let \( \left( {X,\mathcal{B},\mu }\right) \) be a probability space, and let \( \mathcal{A} \subseteq \mathcal{B} \) be a sub-σ-algebra. Then there is a map
\[
E\left( {\cdot \mid \mathcal{A}}\right) : {L}^{1}\left( {X,\mathcal{B},\mu }\right) \rightarrow {L}^{1}\left( {X,\mathcal{A},\mu }\right)
\]
called the conditional expectation, that satisfies the following properties.
(1) For \( f \in {L}^{1}\left( {X,\mathcal{B},\mu }\right) \), the image function \( E\left( {f \mid \mathcal{A}}\right) \) is characterized almost everywhere by the two properties
- \( E\left( {f \mid \mathcal{A}}\right) \) is \( \mathcal{A} \) -measurable;
- for any \( A \in \mathcal{A},{\int }_{A}E\left( {f \mid \mathcal{A}}\right) \mathrm{d}\mu = {\int }_{A}f\mathrm{\;d}\mu \) .
(2) \( E\left( {\cdot \mid \mathcal{A}}\right) \) is a linear operator of norm 1. Moreover, \( E\left( {\cdot \mid \mathcal{A}}\right) \) is positive (that is, \( E\left( {f \mid \mathcal{A}}\right) \geq 0 \) almost everywhere whenever \( f \in {L}^{1}\left( {X,\mathcal{B},\mu }\right) \) has \( f \geq 0 \) ).
(3) For \( f \in {L}^{1}\left( {X,\mathcal{B},\mu }\right) \) and \( g \in {L}^{\infty }\left( {X,\mathcal{A},\mu }\right) \) ,
\[
E\left( {{gf} \mid \mathcal{A}}\right) = {gE}\left( {f \mid \mathcal{A}}\right)
\]
almost everywhere.
(4) If \( {\mathcal{A}}^{\prime } \subseteq \mathcal{A} \) is a sub- \( \sigma \) -algebra, then
\[
E\left( {E\left( {f \mid \mathcal{A}}\right) \mid {\mathcal{A}}^{\prime }}\right) = E\left( {f \mid {\mathcal{A}}^{\prime }}\right)
\]
almost everywhere.
(5) If \( f \in {L}^{1}\left( {X,\mathcal{A},\mu }\right) \) then \( E\left( {f \mid \mathcal{A}}\right) = f \) almost everywhere.
(6) For any \( f \in {L}^{1}\left( {X,\mathcal{B},\mu }\right) ,\left| {E\left( {f \mid \mathcal{A}}\right) }\right| \leq E\left( {\left| f\right| \mid \mathcal{A}}\right) \) almost everywhere.
For a collection of sets \( \left\{ {{A}_{\gamma } \mid \gamma \in \Gamma }\right\} \), denote by \( \sigma \left( \left\{ {{A}_{\gamma } \mid \gamma \in \Gamma }\right\} \right) \) the \( \sigma \) - algebra generated by the collection (that is, the smallest \( \sigma \) -algebra containing all the sets \( {A}_{\gamma } \) ). A partition \( \xi \) of a measure space \( X \) is a finite or countable set of disjoint measurable sets whose union is \( X \) .
Example 5.2. If \( \mathcal{A} = \sigma \left( \xi \right) \) is the finite \( \sigma \) -algebra generated by a finite partition \( \xi = \left\{ {{A}_{1},\ldots ,{A}_{n}}\right\} \) of \( X \), then
\[
E\left( {f \mid \mathcal{A}}\right) \left( x\right) = \frac{1}{\mu \left( {A}_{i}\right) }{\int }_{{A}_{i}}f\mathrm{\;d}\mu
\]
if \( x \in {A}_{i} \) . The \( \sigma \) -algebra being conditioned on is illustrated in Fig. 5.1 for a partition into \( n = 8 \) sets; \( E\left( {f \mid \mathcal{A}}\right) \) is then a function constant on each element of the partition \( \xi \) .

Fig. 5.1 A partition of \( X \) into 8 sets
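Example 5.2 can be checked by Monte Carlo. For \( \mu \) Lebesgue measure on \( \left\lbrack {0,1}\right\rbrack \), the partition into four intervals, and \( f\left( t\right) = {t}^{2} \) (all illustrative choices), the empirical cell averages agree with the exact values \( \frac{1}{\mu \left( {A}_{i}\right) }{\int }_{{A}_{i}}f\mathrm{\;d}\mu \) .

```python
import random

random.seed(0)
xs = [random.random() for _ in range(200000)]      # samples from μ = Lebesgue on [0,1]
cells = [(i / 4, (i + 1) / 4) for i in range(4)]   # the partition ξ = {A_1, ..., A_4}
f = lambda t: t * t

# E(f | σ(ξ)) is constant on each A_i, equal to the μ-average of f over A_i
avg = {}
for lo, hi in cells:
    vals = [f(t) for t in xs if lo <= t < hi]
    avg[(lo, hi)] = sum(vals) / len(vals)

cond_exp = lambda x: next(avg[(lo, hi)] for lo, hi in cells if lo <= x < hi)

for lo, hi in cells:
    exact = (hi**3 - lo**3) / (3 * (hi - lo))      # (1/μ(A_i)) ∫_{A_i} t² dt
    print(round(avg[(lo, hi)], 3), round(exact, 3))
```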
Example 5.3. Let \( X = {\left\lbrack 0,1\right\rbrack }^{2} \) with two-dimensional Lebesgue measure, and let \( \mathcal{A} = \mathcal{B} \times \{ \varnothing ,\left\lbrack {0,1}\right\rbrack \} \) be the \( \sigma \) -algebra comprising sets of the form \( B \times \left\lbrack {0,1}\right\rbrack \)
for \( B \) a measurable subset of \( \left\lbrack {0,1}\right\rbrack \) .
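For this \( \sigma \) -algebra a standard computation shows that conditioning averages out the second coordinate: \( E\left( {f \mid \mathcal{A}}\right) \left( {x, y}\right) = {\int }_{0}^{1}f\left( {x, t}\right) \mathrm{d}t \), independently of \( y \) . A numerical sketch, with the illustrative choice \( f\left( {x, y}\right) = {xy} \) :

```python
N = 400
f = lambda x, y: x * y

def cond_exp(x):
    # midpoint rule for ∫₀¹ f(x, t) dt — the value of E(f | 𝒜) on the fibre {x} × [0,1]
    return sum(f(x, (j + 0.5) / N) for j in range(N)) / N

print(cond_exp(0.3))  # ∫₀¹ 0.3·t dt = 0.15
```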
Theorem 14 The list-chromatic index of a bipartite graph equals its chromatic index.
Proof. Let \( G \) be a bipartite graph with bipartition \( \left( {{V}_{1},{V}_{2}}\right) \), and let \( \lambda : E\left( G\right) \rightarrow \) \( \left\lbrack k\right\rbrack \) be an edge-colouring of \( G \), where \( k \) is the chromatic index of \( G \) . Define preferences on \( G \) as follows: let \( a \in {V}_{1} \) prefer a neighbour \( A \) to a neighbour \( B \) iff \( \lambda \left( {aA}\right) > \lambda \left( {aB}\right) \), and let \( A \in {V}_{2} \) prefer a neighbour \( a \) to a neighbour \( b \) iff \( \lambda \left( {aA}\right) < \lambda \left( {bA}\right) \) . Note that the total function defined by this assignment of preferences is at most \( k - 1 \) on every edge, since if \( \lambda \left( {aA}\right) = j \) then \( a \) prefers at most \( k - j \) of its neighbours to \( A \), and \( A \) prefers at most \( j - 1 \) of its neighbours to \( a \) . Hence, by Theorem \( {11}, G \) is \( k \) -choosable.
As we noted in Section 2, the chromatic index of a bipartite graph equals its maximal degree, so Theorem 14 can be restated as
\[
{\chi }_{\ell }^{\prime }\left( G\right) = {\chi }^{\prime }\left( G\right) = \Delta \left( G\right)
\]
for every bipartite graph \( G \) .
It is easily seen that the result above holds for bipartite multigraphs as well (see Exercise 52); indeed, all one has to recall is that every bipartite multigraph contains a stable matching.
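For a graph as small as \( {K}_{2,2} \) (where \( \Delta = 2 \) ) the equality \( {\chi }_{\ell }^{\prime }\left( G\right) = \Delta \left( G\right) \) can be verified by exhaustive search. The sketch below checks every assignment of 2-element lists drawn from a 4-colour palette; lists of size 1, of course, do not suffice.

```python
from itertools import combinations, product

# Edges of K_{2,2}; two edges are adjacent iff they share an endpoint
edges = [(a, b) for a in ("a1", "a2") for b in ("b1", "b2")]
adjacent = lambda e, f: e != f and (e[0] == f[0] or e[1] == f[1])

def edge_choosable(list_size, palette):
    """Does every assignment of colour lists of this size admit a proper edge colouring?"""
    lists = list(combinations(palette, list_size))
    for assignment in product(lists, repeat=len(edges)):
        ok = any(
            all(not adjacent(e, f) or ce != cf
                for (e, ce), (f, cf) in combinations(list(zip(edges, colours)), 2))
            for colours in product(*assignment)
        )
        if not ok:
            return False
    return True

print(edge_choosable(2, range(4)), edge_choosable(1, range(4)))  # True False
```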
We know that, in general, \( {\chi }_{\ell }\left( G\right) \neq \chi \left( G\right) \) even for planar graphs, although we do have equality for the line graphs of bipartite graphs. Recall that the line graph of a graph \( G = \left( {V, E}\right) \) is \( L\left( G\right) = \left( {E, F}\right) \), where \( F = \{ {ef} : e, f \in E, e \) and \( f \) are adjacent \( \} \) . Indeed, it is conjectured that we have equality for all line graphs, in other words, \( {\chi }_{\ell }^{\prime }\left( G\right) = {\chi }^{\prime }\left( G\right) \) for all graphs. Trivially,
\[
{\chi }_{\ell }^{\prime }\left( G\right) = {\chi }_{\ell }\left( {L\left( G\right) }\right) \leq \Delta \left( {L\left( G\right) }\right) + 1 \leq {2\Delta }\left( G\right) - 1,
\]
but it is not even easy to see that
\[
{\chi }_{\ell }^{\prime }\left( G\right) \leq \left( {2 - {10}^{-{10}}}\right) \Delta \left( G\right)
\]
if \( \Delta \left( G\right) \) is large enough. In fact, in 1996 Kahn proved that if \( \varepsilon > 0 \) and \( \Delta \left( G\right) \) is large enough then
\[
{\chi }_{\ell }^{\prime }\left( G\right) \leq \left( {1 + \varepsilon }\right) \Delta \left( G\right)
\]
Even after these beautiful results of Galvin and Kahn, we seem to be far from a proof of the full conjecture that \( {\chi }_{\ell }^{\prime }\left( G\right) = {\chi }^{\prime }\left( G\right) \) for every graph.
## V. 5 Perfect Graphs
In the introduction to this chapter we remarked that perhaps the simplest reason why the chromatic number of a graph \( G \) is at least \( k \) is that \( G \) contains a \( k \) -clique, a complete graph of order \( k \) . The observation gave us the trivial inequality (1), namely that \( \chi \left( G\right) \) is at least as large as the clique number \( \omega \left( G\right) \), the maximal order of a complete subgraph of \( G \) .
The chromatic number \( \chi \left( G\right) \) can be considerably larger than \( \omega \left( G\right) \) ; in fact, we shall see in Chapter VII that, for all \( k \) and \( g \), there is a graph of chromatic number at least \( k \) and girth at least \( g \) . However, here we shall be concerned with graphs at the other end of the spectrum: with graphs all of whose induced subgraphs have their chromatic number equal to their clique number. These are the so-called perfect graphs. Thus a graph \( G \) is perfect if \( \chi \left( H\right) = \omega \left( H\right) \) for every induced subgraph \( H \) of \( G \), including \( G \) itself. Clearly, bipartite graphs are perfect, but a triangle-free graph containing an odd cycle is not perfect, since its clique number is 2 and its chromatic number is at least 3. It is less immediate that the complement of a bipartite graph is also perfect. This is perhaps the first result on perfect graphs, proved by Gallai and König in 1932, although the concept of a perfect graph was only explicitly defined by Berge in 1960. Recall that the complement of a graph \( G = \left( {V, E}\right) \) is \( \bar{G} = \left( {V,{V}^{\left( 2\right) } - E}\right) \) . Although \( \omega \left( \bar{G}\right) \) is \( \alpha \left( G\right) \), the independence number of \( G \), in order to have fewer functions we shall use \( \omega \left( \bar{G}\right) \) rather than \( \alpha \left( G\right) \) .
Theorem 15 The complement of a bipartite graph is perfect.
Proof. Since an induced subgraph of the complement of a bipartite graph is also the complement of a bipartite graph, all we have to prove is that if \( G = \left( {V, E}\right) \) is a bipartite graph then \( \chi \left( \bar{G}\right) = \omega \left( \bar{G}\right) \) .
Now, in a colouring of \( \bar{G} \), every colour class is either a vertex or a pair of vertices adjacent in \( G \) . Thus \( \chi \left( \bar{G}\right) \) is the minimal number of vertices and edges of \( G \), covering all vertices of \( G \) . By Corollary III.10, this is precisely the maximal number of independent vertices in \( G \), that is, the clique number \( \omega \left( \bar{G}\right) \) of \( \bar{G} \) .
For our next examples of perfect graphs, we shall take line graphs and their complements.
Theorem 16 Let \( G \) be a bipartite graph with line graph \( H = L\left( G\right) \) . Then \( H \) and \( \overline{H} \) are perfect.
Proof. Once again, all we have to prove is that \( \chi \left( H\right) = \omega \left( H\right) \) and \( \chi \left( \bar{H}\right) = \omega \left( \bar{H}\right) \) . Clearly, \( \omega \left( H\right) = \Delta \left( G\right) \) and \( \chi \left( H\right) = {\chi }^{\prime }\left( G\right) \) . But as \( G \) is bipartite, \( {\chi }^{\prime }\left( G\right) = \) \( \Delta \left( G\right) \) (see the beginning of Section 2), so \( \chi \left( H\right) = \Delta \left( G\right) = \omega \left( H\right) \) .
And what is \( \chi \left( \bar{H}\right) \) ? The minimal number of vertices of \( G \) covering all the edges. Finally, what is \( \omega \left( \bar{H}\right) \) ? The maximal number of independent edges of \( G \) . By Corollary III. 10, these two quantities are equal.
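The appeal to Corollary III.10 (König's theorem) can be illustrated by brute force: on any bipartite graph, the minimal number of vertices covering all edges equals the maximal number of independent edges. The small graph below is an arbitrary illustrative choice.

```python
from itertools import combinations

# A small bipartite graph (illustrative choice)
edges = [("u1", "v1"), ("u1", "v2"), ("u2", "v2"), ("u3", "v2"), ("u3", "v3")]
vertices = sorted({x for e in edges for x in e})

def min_vertex_cover():
    # smallest set of vertices meeting every edge
    for k in range(len(vertices) + 1):
        for S in combinations(vertices, k):
            if all(e[0] in S or e[1] in S for e in edges):
                return k

def max_matching():
    # largest set of pairwise disjoint edges
    for k in range(len(edges), 0, -1):
        for M in combinations(edges, k):
            ends = [x for e in M for x in e]
            if len(ends) == len(set(ends)):
                return k
    return 0

print(min_vertex_cover(), max_matching())  # 3 3
```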
Yet another class of perfect graphs can be obtained from partially ordered sets. Given a partially ordered set \( P = \left( {X, < }\right) \), its comparability graph is \( C\left( P\right) = \) \( \left( {X, E}\right) \), where \( E = \left\{ {{xy} \in {X}^{\left( 2\right) } : x < y\text{or}y < x}\right\} \) .
Theorem 17 Comparability graphs and their complements are perfect.
Proof. Once again, it suffices to show that if \( P \) is a partially ordered set then for \( H = C\left( P\right) \) we have \( \chi \left( H\right) = \omega \left( H\right) \) and \( \chi \left( \bar{H}\right) = \omega \left( \bar{H}\right) \) .
To see the first equality, for \( x \in P \) let \( r\left( x\right) \), the rank of \( x \), be the maximal integer \( r \) for which \( P \) contains a chain of \( r \) elements, with maximal element \( x \) . Then for \( k = \mathop{\max }\limits_{x}r\left( x\right) \) the map \( r : P \rightarrow \left\lbrack k\right\rbrack \) gives a \( k \) -colouring of \( H \), and a chain of size \( k \) gives a \( k \) -clique.
The second equality is deeper. Indeed, \( \chi \left( \bar{H}\right) \) is the minimal number of chains into which \( P \) can be partitioned, and \( \omega \left( \bar{H}\right) \) is precisely the maximal number of elements in an antichain. Therefore the equality \( \chi \left( \bar{H}\right) = \omega \left( \bar{H}\right) \) is none other than Dilworth's theorem, Theorem III.12.
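Dilworth's theorem, and hence the second equality, can be checked exhaustively on a small poset. Below we use the divisibility order on \( \{ 1,\ldots ,6\} \) (an illustrative choice): the minimal number of chains covering the poset equals the maximal size of an antichain.

```python
from itertools import combinations, product

P = [1, 2, 3, 4, 5, 6]
leq = lambda x, y: y % x == 0                     # divisibility partial order
comparable = lambda x, y: leq(x, y) or leq(y, x)

def max_antichain():
    # largest set of pairwise incomparable elements
    return max(k for k in range(1, len(P) + 1)
                 for S in combinations(P, k)
                 if all(not comparable(x, y) for x, y in combinations(S, 2)))

def min_chain_partition():
    # smallest k such that P splits into k chains (brute force over labellings)
    for k in range(1, len(P) + 1):
        for labels in product(range(k), repeat=len(P)):
            classes = [[x for x, lab in zip(P, labels) if lab == c] for c in range(k)]
            if all(all(comparable(x, y) for x, y in combinations(C, 2)) for C in classes):
                return k

print(max_antichain(), min_chain_partition())  # 3 3
```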
It does not take much to notice that, in all the examples above, the complement of a perfect graph is also perfect. In fact, the cornerstone of the theory of perfect graphs, the perfect graph theorem, claims that this holds without exception, not only for the examples above. This fundamental result was proved by Lovász and Fulkerson in the early 1970s; although the proof below is relatively simple, it needs a little preparation.
Lemma 18 A necessary and sufficient condition for a graph \( G \) to be perfect is that for every induced subgraph \( H \subset G \) there is an independent set of vertices, \( I \) , such that
\[
\omega \left( {H - I}\right) < \omega \left( H\right)
\]
That is, a graph is perfect iff every induced subgraph \( H \) has an independent set meeting every clique of \( H \) of maximal order \( \omega \left( H\right) \) .
Proof. The necessity holds with plenty to spare. Indeed, let \( H \) be a graph with \( k = \chi \left( H\right) = \omega \left( H\right) \), and let \( I \) be a colour class of a \( k \) -colouring of \( H \) . Then \( \omega \left( {H - I}\right) \leq \chi \left( {H - I}\right) = \chi \left( H\right) - 1 < \omega \left( H\right) . \)
The sufficiency of the condition will be proved by induction on \( \omega \left( G\right) \) . For \( \omega \left( G\right) = 1 \) there is nothing to prove, so suppose that \( \omega \left( G\right) > 1 \) and the assertion holds for smaller values of the clique number. Let \( H \) be an induced subgraph of \( G \) and \( I \) an independent set with \( \omega \left( {H - I}\right) < \omega \left( H\right) \) . By the induction hypothesis, we can colour \( H - I \) with \( \omega \left( {H - I}\right) \) colours; colouring the vertices of \( I \) with a new colour, we obtain a colouring of \( H \) with \( \omega \left( {H - I}\right) + 1 \leq \omega \left( H\right) \) colours. Thus \( \chi \left( H\right) \leq \omega \left( H\right) \), and we are done.
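The definition of perfection is finite to check on small graphs. The brute-force sketch below confirms that the even cycle \( {C}_{4} \) is perfect while \( {C}_{5} \) — an odd cycle, with clique number 2 and chromatic number 3 — is not.

```python
from itertools import combinations, product

def cycle(n):
    V = list(range(n))
    E = {frozenset((i, (i + 1) % n)) for i in range(n)}
    return V, E

def omega(V, E):
    # clique number: largest S ⊆ V with every pair joined
    return max(k for k in range(1, len(V) + 1)
                 for S in combinations(V, k)
                 if all(frozenset(p) in E for p in combinations(S, 2)))

def chi(V, E):
    # chromatic number by exhaustive search over colourings
    for k in range(1, len(V) + 1):
        for col in product(range(k), repeat=len(V)):
            c = dict(zip(V, col))
            if all(c[u] != c[v] for u, v in map(tuple, E)):
                return k

def perfect(V, E):
    # χ(H) = ω(H) for every non-empty induced subgraph H
    for k in range(1, len(V) + 1):
        for W in combinations(V, k):
            F = {e for e in E if e <= set(W)}
            if chi(list(W), F) != omega(list(W), F):
                return False
    return True

print(perfect(*cycle(4)), perfect(*cycle(5)))  # True False
```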
Theorem 11 (Dual to Theorem 8). In order that a G-module A be cohomologically trivial, it is necessary and sufficient that there be an exact sequence \( 0 \rightarrow \mathrm{A} \rightarrow {\mathrm{I}}_{0} \rightarrow {\mathrm{I}}_{1} \rightarrow 0 \), where the \( {\mathrm{I}}_{i} \) are injective \( \mathbf{Z}\left\lbrack \mathrm{G}\right\rbrack \) -modules.
As before, there is an exact sequence
\[
0 \rightarrow \mathrm{A} \rightarrow {\mathrm{I}}_{0} \rightarrow \mathrm{R} \rightarrow 0
\]
with \( {\mathrm{I}}_{0} \) \( \mathbf{Z}\left\lbrack \mathrm{G}\right\rbrack \) -injective. Since \( \mathrm{A} \) is cohomologically trivial, so is \( \mathrm{R} \) ; on the other hand, \( {\mathrm{I}}_{0} \) is \( \mathbf{Z} \) -injective (by lemma 7), hence \( \mathrm{R} \) is too. Theorem 10 then guarantees that \( \mathrm{R} \) is \( \mathbf{Z}\left\lbrack \mathrm{G}\right\rbrack \) -injective.
Note. The results of the three preceding sections are essentially due to Nakayama ([47], [48]). For the presentation, I have followed the paper [51] of Dock Sang Rim, who has greatly simplified the proofs of Nakayama and generalised some of his results. See also Lang [93] and Tate [118].
## §7. A Comparison Theorem
Theorem 12. Let \( \mathrm{G} \) be a finite group, \( \mathrm{A} \) and \( {\mathrm{A}}^{\prime }\mathrm{G} \) -modules, and \( f : {\mathrm{A}}^{\prime } \rightarrow \mathrm{A} \) a G-homomorphism. For each prime number \( p \), let \( {\mathrm{G}}_{p} \) be a Sylow p-subgroup of \( \mathrm{G} \), and suppose there is an integer \( {n}_{p} \) such that the homomorphism
\[
{f}_{i}^{ * } : {\widehat{\mathrm{H}}}^{i}\left( {{\mathrm{G}}_{p},{\mathrm{\;A}}^{\prime }}\right) \rightarrow {\widehat{\mathrm{H}}}^{i}\left( {{\mathrm{G}}_{p},\mathrm{\;A}}\right)
\]
is surjective for \( i = {n}_{p} \), bijective for \( i = {n}_{p} + 1 \), injective for \( i = {n}_{p} + 2 \) .
If \( \mathrm{B} \) is a \( \mathrm{G} \) -module such that \( \operatorname{Tor}\left( {\mathrm{A},\mathrm{B}}\right) = 0 = \operatorname{Tor}\left( {{\mathrm{A}}^{\prime },\mathrm{B}}\right) \), then the homomorphism
\[
{\widehat{\mathrm{H}}}^{i}\left( {g,{\mathrm{\;A}}^{\prime } \otimes \mathrm{B}}\right) \rightarrow {\widehat{\mathrm{H}}}^{i}\left( {g,\mathrm{\;A} \otimes \mathrm{B}}\right)
\]
is bijective for every subgroup \( g \) of \( \mathrm{G} \) and every integer \( i \) . In particular, \( {\widehat{\mathrm{H}}}^{i}\left( {g,{\mathrm{\;A}}^{\prime }}\right) \rightarrow {\widehat{\mathrm{H}}}^{i}\left( {g,\mathrm{\;A}}\right) \) is bijective for all \( i \) .
We will use a construction analogous to the "mapping-cylinder" in topology. Let \( {\overline{\mathrm{A}}}^{\prime } \) be the induced module canonically defined by \( {\mathrm{A}}^{\prime }, i : {\mathrm{A}}^{\prime } \rightarrow {\overline{\mathrm{A}}}^{\prime } \) the canonical injection (cf. Chap. VII,§6). Put \( {\mathrm{A}}^{ * } = \mathrm{A} \oplus {\overline{\mathrm{A}}}^{\prime } \) . The pair \( \left( {f, i}\right) \) defines an injection \( \theta : {\mathrm{A}}^{\prime } \rightarrow {\mathrm{A}}^{ * } \) ; if \( {\mathrm{A}}^{\prime \prime } \) denotes the cokernel of \( \theta \), we have the exact sequence
\[
0 \rightarrow {\mathrm{A}}^{\prime } \rightarrow {\mathrm{A}}^{ * } \rightarrow {\mathrm{A}}^{\prime \prime } \rightarrow 0.
\]
As \( {\overline{\mathrm{A}}}^{\prime } \) is cohomologically trivial, the cohomology of \( {\mathrm{A}}^{ * } \) can be identified with that of A. The hypothesis on the \( {f}_{i}^{ * } \), together with the exact cohomology sequence, gives
\[
{\widehat{\mathrm{H}}}^{q}\left( {{\mathrm{G}}_{p},{\mathrm{\;A}}^{\prime \prime }}\right) = 0\;\text{ for }q = {n}_{p},{n}_{p} + 1.
\]
By theorem 8, \( {\mathrm{A}}^{\prime \prime } \) is cohomologically trivial. On the other hand, \( {\mathrm{A}}^{\prime } \) is a direct factor in \( {\overline{\mathrm{A}}}^{\prime } \) (as \( \mathbf{Z} \) -module, of course), hence also in \( {\mathrm{A}}^{ * } \) ; as \( {\mathrm{A}}^{ * } \) is the direct sum
of \( A \) and a number of copies of \( {A}^{\prime } \), the hypothesis on \( B \) implies \( \operatorname{Tor}\left( {{A}^{ * }, B}\right) = 0 \) , whence \( \operatorname{Tor}\left( {{\mathrm{A}}^{\prime \prime },\mathrm{B}}\right) = 0 \), and theorem 9 tells us that \( {\mathrm{A}}^{\prime \prime } \otimes \mathrm{B} \) is cohomologically trivial. The exact sequence
\[
0 \rightarrow {\mathrm{A}}^{\prime } \otimes \mathrm{B} \rightarrow {\mathrm{A}}^{ * } \otimes \mathrm{B} \rightarrow {\mathrm{A}}^{\prime \prime } \otimes \mathrm{B} \rightarrow 0
\]
enables us to deduce the bijectivity of \( {\widehat{\mathrm{H}}}^{q}\left( {g,{\mathrm{\;A}}^{\prime } \otimes \mathrm{B}}\right) \rightarrow {\widehat{\mathrm{H}}}^{q}\left( {g,{\mathrm{\;A}}^{ * } \otimes \mathrm{B}}\right) \) . As the same holds for \( {\widehat{\mathrm{H}}}^{q}\left( {g,{\mathrm{\;A}}^{ * } \otimes \mathrm{B}}\right) \rightarrow {\widehat{\mathrm{H}}}^{q}\left( {g,\mathrm{\;A} \otimes \mathrm{B}}\right) \), the theorem is proved.
Remark. Suppose A and \( {\mathrm{A}}^{\prime } \) are \( \mathbf{Z} \) -free. The G-modules \( {\overline{\mathrm{A}}}^{\prime } \) and \( {\mathrm{A}}^{\prime \prime } \) are then projective (the first is even free). In other words, \( f \) factors into
\[
{\mathrm{A}}^{\prime }\overset{i}{ \rightarrow }{\mathrm{\;A}}^{\prime } \oplus {\mathrm{P}}^{\prime }\overset{\mathrm{F}}{ \rightarrow }\mathrm{A} \oplus \mathrm{P}\overset{\pi }{ \rightarrow }\mathrm{A}
\]
with \( \mathbf{P} \) and \( {\mathbf{P}}^{\prime } \) projective, \( \mathbf{F} \) an isomorphism, and \( i \) (resp. \( \pi \) ) denoting the obvious injection (resp. projection). When A and \( {\mathrm{A}}^{\prime } \) are finitely generated, \( \mathrm{P} \) and \( {\mathrm{P}}^{\prime } \) can be taken to be finitely generated; in the terminology of Eckmann-Hilton ([23], and see also [58]), \( f \) is a homotopy equivalence.
## EXERCISE
With the notation and hypotheses of the above remark, show that the element \( \left( f\right) = \) \( {\mathrm{P}}^{\prime } - \mathrm{P} \) of the group \( \mathrm{P}\left( \mathrm{G}\right) \) depends only on \( f \), not on the choice of \( \mathrm{P} \) and \( {\mathrm{P}}^{\prime } \) . Show that \( \left( {fg}\right) = \left( f\right) + \left( g\right) \), and that \( \left( f\right) = 0 \) if and only if \( \mathrm{P} \) and \( {\mathrm{P}}^{\prime } \) can be chosen free of finite rank over \( \mathbf{Z}\left\lbrack \mathrm{G}\right\rbrack \) .
## §8. The Theorem of Tate and Nakayama
Theorem 13. Let \( \mathrm{G} \) be a finite group, \( \mathrm{A},\mathrm{B},\mathrm{C} \) three \( \mathrm{G} \) -modules, and let \( \varphi : \mathrm{A} \times \mathrm{B} \rightarrow \mathrm{C} \) be a G-invariant bilinear map. Let \( q \in \mathbf{Z}, a \in {\widehat{H}}^{q}\left( {\mathrm{G},\mathrm{A}}\right) \) . Given any subgroup \( g \) of \( \mathrm{G} \) and any \( \mathrm{G} \) -module \( \mathrm{D} \), denote by
\[
f\left( {n, g,\mathrm{D}}\right) : {\widehat{\mathrm{H}}}^{n}\left( {g,\mathrm{\;B} \otimes \mathrm{D}}\right) \rightarrow {\widehat{\mathrm{H}}}^{n + q}\left( {g,\mathrm{C} \otimes \mathrm{D}}\right)
\]
the homomorphism defined by cup product with the class \( {a}_{g} = {\operatorname{Res}}_{G/g}\left( a\right) \) (relative to the obvious bilinear map of \( \mathrm{A} \times \left( {\mathrm{B} \otimes \mathrm{D}}\right) \) into \( \mathrm{C} \otimes \mathrm{D} \) ).
Suppose that for every prime \( p \) and Sylow p-subgroup \( {\mathrm{G}}_{p} \) of \( \mathrm{G} \), there is an integer \( {n}_{p} \) for which \( f\left( {n,{\mathbf{G}}_{p},\mathbf{Z}}\right) \) is surjective for \( n = {n}_{p} \), bijective for \( n = {n}_{p} + 1 \) , and injective for \( n = {n}_{p} + 2 \) .
Then \( f\left( {n, g,\mathbf{D}}\right) \) is bijective for all \( n \), all \( g \), and every G-module \( \mathbf{D} \) such that
\[
\operatorname{Tor}\left( {\mathrm{B},\mathrm{D}}\right) = \operatorname{Tor}\left( {\mathrm{C},\mathrm{D}}\right) = 0.
\]
We first treat the case \( q = 0 \) . The class \( a \in {\widehat{\mathrm{H}}}^{0}\left( {\mathrm{G},\mathrm{A}}\right) \) can be represented by an element \( a \in {\mathrm{A}}^{G} \) . Putting \( f\left( b\right) = \varphi \left( {a, b}\right) \), we obtain a homomorphism of G-modules
\[
f : \mathbf{B} \rightarrow \mathbf{C}\text{.}
\]
It is easy to check that the homomorphism
\[
f\left( {n, g,\mathrm{D}}\right) : {\widehat{\mathrm{H}}}^{n}\left( {g,\mathrm{\;B} \otimes \mathrm{D}}\right) \rightarrow {\widehat{\mathrm{H}}}^{n}\left( {g,\mathrm{C} \otimes \mathrm{D}}\right)
\]
is merely the homomorphism induced by \( f \otimes 1 : \mathrm{B} \otimes \mathrm{D} \rightarrow \mathrm{C} \otimes \mathrm{D} \), and we are reduced to theorem 12.
The general case is handled by dimension-shifting. Let us show how to pass from \( q - 1 \) to \( q \) : embed \( \mathrm{A} \) in its canonical induced module \( \overline{\mathrm{A}} \), and set \( {\mathrm{A}}_{1} = \overline{\mathrm{A}}/\mathrm{A} \) . Define similarly \( {\mathrm{C}}_{1} = \overline{\mathrm{C}}/\mathrm{C} \) and \( {\varphi }_{1} : {\mathrm{A}}_{1} \times \mathrm{B} \rightarrow {\mathrm{C}}_{1} \) . The class \( a \in {\widehat{\mathrm{H}}}^{q}\left( {\mathrm{G},\mathrm{A}}\right) \) can be written \( a = \delta \left( {a}_{1}\right) ,{a}_{1} \in {\widehat{\mathrm{H}}}^{q - 1}\left( {\mathrm{G},{\mathrm{A}}_{1}}\right) \) . This class \( {a}_{1} \) defines, by cup product, homomorphisms
\[
{f}_{1}\left( {n, g,\mathrm{D}}\right) : {\widehat{\mathrm{H}}}^{n}\left( {g,\mathrm{\;B} \otimes \mathrm{D}}\right) \rightarrow {\widehat{\mathrm{H}}}^{n + q - 1}\left( {g,{\mathrm{C}}_{1} \otimes \mathrm{D}}\right) .
\]
Combining \( {f}_{1} \) with the isomorphism
\[
\delta : {\widehat{\mathrm{H}}}^{n + q - 1}\left( {g,{\mathrm{C}}_{1} \otimes \mathrm{D}}\right) \rightarrow {\widehat{\mathrm{H}}}^{n + q}\left( {g,\mathrm{C} \otimes \mathrm{D}}\right) ,
\]
we recover \( f\left( {n, g,\mathrm{D}}\right) \), and the case \( q \) follows from the case \( q - 1 \) .
Corollary 12.3.17. Every non-orientable path connected CW n-manifold has an orientable path connected double cover.
## Exercises
1. Give an example of a 2-pseudomanifold which is not a manifold, and of a 2-pseudomanifold whose boundary is not a pseudomanifold.
2. Prove that an orientable pseudomanifold is \( R \) -orientable for any commutative ring \( R \) .
3. Prove that the cone on a path connected finite pseudomanifold is a pseudomanifold.
4. Give a counterexample to the converse of 12.3.11.
5. Give an example of a non-pseudomanifold covering space of a pseudomanifold.
6. Show that if \( K \) is a combinatorial manifold triangulating the surface \( {T}_{g, d} \) then \( K \) is orientable, and if \( K \) triangulates the surface \( {U}_{h, d} \) then \( K \) is non-orientable.
7. Give a counterexample to 12.3.7 when 2 is a zero-divisor.
8. In a pseudomanifold \( X \), let \( c = \mathop{\sum }\limits_{\alpha }{u}_{\alpha }{e}_{\alpha }^{n} \) where each \( {u}_{\alpha } = \pm 1 \) . Then \( \partial c = {2d} + e \) where \( d \) is supported outside \( \partial X \) and \( e \) is supported in \( \partial X \) . Show that \( d \in \) \( {Z}_{n - 1}^{\infty }\left( {X,\partial X;\mathbb{Z}}\right) \) and that \( \{ d\} = 0 \) or has order 2 in \( {H}_{n - 1}\left( {X,\partial X;\mathbb{Z}}\right) \) depending on whether or not \( X \) is orientable. Show that if \( X \) is non-orientable and path connected then the torsion subgroup of \( {H}_{n - 1}^{\infty }\left( {X,\partial X;\mathbb{Z}}\right) \) has order 2 . Describe explicitly a cycle whose homology class is non-zero.
## 12.4 Review of more homological algebra
For details of the algebra reviewed here see, for example, [83].
Let \( R \) be a (not necessarily commutative) \( {\mathrm{{ring}}}^{12} \) with \( 1 \neq 0 \) . The tensor product \( B{ \otimes }_{R}A \) of a right \( R \) -module \( B \) and a left \( R \) -module \( A \) has the structure of an abelian group; it is generated by elements of the form \( b \otimes a \), where \( b \in B \) and \( a \in A \), subject to bilinearity and relations of the form \( {br} \otimes a = b \otimes {ra} \) where \( r \in R \) . If \( R \) is commutative, the left action of \( R \) on \( B{ \otimes }_{R}A \) defined by \( r\left( {b \otimes a}\right) = {br} \otimes a \) makes \( B{ \otimes }_{R}A \) into an \( R \) -module, and \( B{ \otimes }_{R}A \) is understood to carry this left \( R \) -module structure. If \( R \) is not commutative, \( B{ \otimes }_{R}A \) is understood to be an abelian group only, unless an \( R \) -action is specified. \( {}^{13} \)
If \( \left( {\left\{ {C}_{n}\right\} ,\partial }\right) \) is an \( R \) -chain complex and \( B \) is a right \( R \) -module, then \( \left( {\left\{ {B{ \otimes }_{R}{C}_{n}}\right\} ,\mathrm{{id}} \otimes \partial }\right) \) is a \( \mathbb{Z} \) -chain complex whose homology groups are denoted by \( {H}_{ * }\left( {C;B}\right) \) and are called the homology groups of \( C \) with coefficients in \( B \) . Of course, if \( R \) is commutative we get an \( R \) -chain complex and homology \( R \) -modules.
Dually, if \( C \) and \( A \) are left \( R \) -modules, then \( {\operatorname{Hom}}_{R}\left( {C, A}\right) \) has the structure of an abelian group. If \( R \) is commutative, the left action of \( R \) on \( {\operatorname{Hom}}_{R}\left( {C, A}\right) \) defined by \( \left( {r.f}\right) \left( c\right) = r.f\left( c\right) \) makes \( {\operatorname{Hom}}_{R}\left( {C, A}\right) \) into an \( R \) -module (since \( r.f \in \) \( \left. {{\operatorname{Hom}}_{R}\left( {C, A}\right) }\right) \) . If \( R \) is not commutative, \( {\operatorname{Hom}}_{R}\left( {C, A}\right) \) is understood to be an abelian group only, unless an \( R \) -action is specified.
If \( \left( {\left\{ {C}_{n}\right\} ,\partial }\right) \) is an \( R \) -chain complex and \( A \) is a left \( R \) -module, then \( \left( {\left\{ {{\operatorname{Hom}}_{R}\left( {{C}_{n}, A}\right) }\right\} ,{\partial }^{ * }}\right) \) is a \( \mathbb{Z} \) -cochain complex whose cohomology groups are denoted by \( {H}^{ * }\left( {C;A}\right) \) ; they are the cohomology groups of \( C \) with coefficients in \( A \) . Again, if \( R \) is commutative we get an \( R \) -cochain complex and cohomology \( R \) -modules.
---
\( {}^{12} \) For this section only we suspend our standing convention (Sect. 2.1) that \( R \) denotes a commutative ring. In this section the status of \( R \) will change several times.
\( {}^{13} \) For group rings \( {RG} \) we elaborate on this convention in Sect. 8.1.
---
FROM HERE UNTIL AFTER REMARK 12.4.6, \( R \) IS UNDERSTOOD TO BE A PID. In particular, \( R \) is a commutative ring without zero divisors, and every submodule of a free \( R \) -module is free. When \( R \) is commutative, the distinction between left and right \( R \) -modules is not worth making, since a left module \( M \) becomes a right module under the \( R \) -action \( m.r = {rm} \), and vice versa.
For any \( R \) -module \( A \), there exist short exact sequences \( 0 \rightarrow {F}_{1}\overset{i}{ \rightarrow }{F}_{0}\overset{j}{ \rightarrow }A \rightarrow 0 \) in which \( {F}_{0} \) and \( {F}_{1} \) are free \( R \) -modules. For any \( R \) -module \( B \), let \( {\operatorname{Tor}}_{R}\left( {B, A}\right) \) be defined to make an exact sequence
\[
0 \rightarrow {\operatorname{Tor}}_{R}\left( {B, A}\right) \rightarrow B{ \otimes }_{R}{F}_{1}\xrightarrow[]{\text{ id } \otimes i}B{ \otimes }_{R}{F}_{0}\xrightarrow[]{\text{ id } \otimes j}B{ \otimes }_{R}A \rightarrow 0.
\]
Up to canonical isomorphism, \( {\operatorname{Tor}}_{R}\left( {B, A}\right) \) is independent of the choice of \( {F}_{0} \) and of the epimorphism \( {F}_{0} \rightarrow A \) . Both \( \cdot { \otimes }_{R} \cdot \) and \( {\operatorname{Tor}}_{R}\left( {\cdot , \cdot }\right) \) are covariant functors of two variables, which commute with direct sums and direct limits. A useful fact is:
Proposition 12.4.1. If \( A \) or \( B \) is torsion free, then \( {\operatorname{Tor}}_{R}\left( {B, A}\right) = 0 \) .
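As a concrete illustration over \( R = \mathbb{Z} \) (an example of ours, not from the text): tensoring the free resolution \( 0 \rightarrow \mathbb{Z}\overset{n}{ \rightarrow }\mathbb{Z} \rightarrow \mathbb{Z}/n \rightarrow 0 \) with \( B = \mathbb{Z}/m \) identifies \( {\operatorname{Tor}}_{\mathbb{Z}}\left( {\mathbb{Z}/m,\mathbb{Z}/n}\right) \) with the kernel of multiplication by \( n \) on \( \mathbb{Z}/m \), a cyclic group of order \( \gcd \left( {m, n}\right) \). A short brute-force check:

```python
from math import gcd

# Illustrative only: over R = Z, tensoring the resolution
# 0 -> Z --(mult by n)--> Z -> Z/n -> 0 with B = Z/m gives
# Tor_Z(Z/m, Z/n) = ker(mult by n : Z/m -> Z/m).
def tor_Z(m, n):
    """Elements of the kernel of multiplication by n on Z/m."""
    return [x for x in range(m) if (n * x) % m == 0]

m, n = 12, 18
kernel = tor_Z(m, n)
print(len(kernel))  # order of Tor_Z(Z/12, Z/18)
assert len(kernel) == gcd(m, n)  # cyclic of order gcd(m, n) = 6
```

In particular, when one factor is torsion free (say \( n \) a unit, or \( B \) free), the kernel is trivial, consistent with 12.4.1.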
Theorem 12.4.2. (Universal Coefficient Theorem in homology) With \( \left( {\left\{ {C}_{n}\right\} ,\partial }\right) \) and \( B \) as above, and each \( {C}_{n} \) free, there is a natural short exact sequence of \( R \) -modules
\[
0 \rightarrow B{ \otimes }_{R}{H}_{n}\left( C\right) \overset{\beta }{ \rightarrow }{H}_{n}\left( {C;B}\right) \rightarrow {\operatorname{Tor}}_{R}\left( {B,{H}_{n - 1}\left( C\right) }\right) \rightarrow 0
\]
This sequence splits, naturally in \( B \) but unnaturally in \( C \) .
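As a standard illustration (not from the text), let \( R = \mathbb{Z} \), let \( C \) be the cellular chain complex of the projective plane, so that \( {H}_{0}\left( C\right) \cong \mathbb{Z} \), \( {H}_{1}\left( C\right) \cong \mathbb{Z}/2 \), \( {H}_{2}\left( C\right) = 0 \), and let \( B = \mathbb{Z}/2 \) . For \( n = 2 \) the split sequence gives
\[
{H}_{2}\left( {C;\mathbb{Z}/2}\right) \cong \left( {\mathbb{Z}/2{ \otimes }_{\mathbb{Z}}{H}_{2}\left( C\right) }\right) \oplus {\operatorname{Tor}}_{\mathbb{Z}}\left( {\mathbb{Z}/2,{H}_{1}\left( C\right) }\right) \cong {\operatorname{Tor}}_{\mathbb{Z}}\left( {\mathbb{Z}/2,\mathbb{Z}/2}\right) \cong \mathbb{Z}/2,
\]
so the Tor term alone produces a top-dimensional class with \( \mathbb{Z}/2 \) coefficients that integral homology does not see.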
Dually, if \( B \) and \( A \) are \( R \) -modules, let \( {\operatorname{Ext}}_{R}\left( {A, B}\right) \) be defined to make an exact sequence
\[
0 \rightarrow {\operatorname{Hom}}_{R}\left( {A, B}\right) \overset{{j}^{ * }}{ \rightarrow }{\operatorname{Hom}}_{R}\left( {{F}_{0}, B}\right) \overset{{i}^{ * }}{ \rightarrow }{\operatorname{Hom}}_{R}\left( {{F}_{1}, B}\right) \rightarrow {\operatorname{Ext}}_{R}\left( {A, B}\right) \rightarrow 0.
\]
As with \( {\operatorname{Tor}}_{R},{\operatorname{Ext}}_{R}\left( {A, B}\right) \) is independent of the choice of \( {F}_{0} \) and of the epimorphism \( {F}_{0} \rightarrow A \) . Both \( {\operatorname{Hom}}_{R}\left( {\cdot , \cdot }\right) \) and \( {\operatorname{Ext}}_{R}\left( {\cdot , \cdot }\right) \) are functors of two variables, contravariant in the first and covariant in the second. They convert direct sums into direct products, so they commute with finite direct sums. Since \( R \) is a domain, \( {\operatorname{Hom}}_{R}\left( {A, R}\right) \) is torsion free for all \( A \) .
A useful fact is:
Proposition 12.4.3. If \( A \) is free then \( {\operatorname{Ext}}_{R}\left( {A, B}\right) = 0 \) .
The next statement is proved in [146, Sect. 5.5.2]:
Proposition 12.4.4. \( {\operatorname{Ext}}_{R}\left( {R/\left( r\right), R}\right) \cong R/\left( r\right) ;{\operatorname{Ext}}_{R}\left( {R/\left( r\right), R/\left( q\right) }\right) \cong R/\left( s\right) \) where \( s = \left( {r, q}\right) \), the greatest common divisor of \( r \) and \( q \) .
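Over \( R = \mathbb{Z} \) this can be checked directly (a sketch of ours): dualizing \( 0 \rightarrow \mathbb{Z}\overset{r}{ \rightarrow }\mathbb{Z} \rightarrow \mathbb{Z}/r \rightarrow 0 \) into \( B = \mathbb{Z}/q \) exhibits \( {\operatorname{Ext}}_{\mathbb{Z}}\left( {\mathbb{Z}/r,\mathbb{Z}/q}\right) \) as the cokernel of multiplication by \( r \) on \( \mathbb{Z}/q \), which has order \( \gcd \left( {r, q}\right) \) :

```python
from math import gcd

# Illustrative only: over R = Z, applying Hom(-, Z/q) to
# 0 -> Z --(mult by r)--> Z -> Z/r -> 0 identifies Ext_Z(Z/r, Z/q)
# with the cokernel of multiplication by r on Z/q.
def ext_Z_order(r, q):
    image = {(r * x) % q for x in range(q)}
    return q // len(image)  # order of (Z/q) / r(Z/q)

assert ext_Z_order(4, 6) == gcd(4, 6)   # Z/2
assert ext_Z_order(9, 6) == gcd(9, 6)   # Z/3
print(ext_Z_order(12, 18))  # gcd(12, 18) = 6
```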
Theorem 12.4.5. (Universal Coefficient Theorem in cohomology) With \( \left( {\left\{ {C}_{n}\right\} ,\partial }\right) \) and \( B \) as above, and each \( {C}_{n} \) free, there is a natural short exact sequence of \( R \) -modules
\[
0 \rightarrow {\operatorname{Ext}}_{R}\left( {{H}_{n - 1}\left( C\right), B}\right) \rightarrow {H}^{n}\left( {C;B}\right) \overset{\alpha }{ \rightarrow }{\operatorname{Hom}}_{R}\left( {{H}_{n}\left( C\right), B}\right) \rightarrow 0.
\]
This sequence splits, naturally in \( B \) but unnaturally in \( C \) .
Remark 12.4.6. By examining the proofs of 12.4.2 and 12.4.5, one sees that the monomorphism \( \beta \) in 12.4.2 is given by \( \beta \left( {b \otimes \{ z\} }\right) = \{ z \otimes b\} \), where \( \{ \cdot \} \) denotes homology class; and the epimorphism \( \alpha \) in 12.4.5 is given by \( \alpha \left( {\{ f\} }\right) \left( {\{ z\} }\right) = f\left( z\right) \), where \( f \in {\operatorname{Hom}}_{R}\left( {{C}_{n}, B}\right), z \in {Z}_{n}\left( C\right) \), and \( \{ \cdot \} \) denotes the cohomology or homology class as appropriate.
Let \( X \) be a CW complex. Let \( R \) be a commutative ring and let \( M \) be an \( R \) -module. We define the homology and cohomology modules of \( X \) with coefficients in \( M \) .
(i) \( {H}_{n}\left( {X;M}\right) \) is the homology of the chain complex
\[
\left( {\left\{ {M{ \otimes }_{R}{C}_{n}\left( {X;R}\right) }\right\} ,\mathrm{{id}} \otimes \partial }\right) ;
\]
(ii) \( {H}^{n}\left( {X;M}\right) \) is the cohomology of the cochain complex
\[
\left( {\left\{ {{\operatorname{Hom}}_{R}\left( {{C}_{n}\left( {X;R}\right), M}\right) }\right\} ,{\partial }^{ * }}\right) .
\]
When \( X \) has locally fin
Proposition 8.4.2. Ordinary reduction, supersingular reduction, and multiplicative reduction are well defined on equivalence classes of \( \mathfrak{p} \) -minimal Weierstrass equations. If \( E \) and \( {E}^{\prime } \) are equivalent \( \mathfrak{p} \) -minimal Weierstrass equations with good reduction at \( \mathfrak{p} \) then their reductions define isomorphic elliptic curves over \( {\overline{\mathbb{F}}}_{p} \) .
Proof. If \( E \) and \( {E}^{\prime } \) are equivalent then by Exercise 8.1.1(b)
\[
{u}^{12}{\Delta }^{\prime } = \Delta ,\;{u}^{4}{c}_{4}^{\prime } = {c}_{4},
\]
where \( u \) comes from the admissible change of variable taking \( E \) to \( {E}^{\prime } \) . Recall the disjoint union mentioned early in this section,
\[
{\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } = {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) }^{ * } \cup \mathfrak{p}{\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) }
\]
Assume \( {\Delta }^{\prime } \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) }^{ * } \) . If also \( \Delta \in \mathfrak{p}{\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) then \( {u}^{12} \in \mathfrak{p}{\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) and thus \( {u}^{4} \in \mathfrak{p}{\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) so that \( {c}_{4} \in \mathfrak{p}{\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) . This means that \( E \) has additive reduction, impossible since \( E \) is \( \mathfrak{p} \) -minimal. Thus there is no equivalence between equations of good and multiplicative reduction.
Given a change of variable between two equations of good reduction, we may further assume by Lemma 8.4.1 that \( u \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \), and so the relation \( {u}^{12}{\Delta }^{\prime } = \) \( \Delta \) shows that \( u \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) }^{ * } \) . Therefore \( \widetilde{u} \neq 0 \) in \( {\overline{\mathbb{F}}}_{p} \) . Also the other coefficients \( r, s, t \) from the admissible change of variable taking \( E \) to \( {E}^{\prime } \) lie in \( {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \), similarly to Exercise 8.3.3(b, c) (Exercise 8.4.3), so they reduce to \( {\overline{\mathbb{F}}}_{p} \) under (8.24). The two reduced Weierstrass equations differ by the reduced change of variable, giving the last statement of the proposition. Since isomorphic elliptic curves have the same \( p \) -torsion structure, this shows that ordinary and supersingular reduction are preserved under equivalence.
The proposition shows that if \( E \) is an elliptic curve over \( \overline{\mathbb{Q}} \) then its reduction type at \( \mathfrak{p} \) is well defined as the ordinary, supersingular, or multiplicative reduction type of any \( \mathfrak{p} \) -minimal Weierstrass equation for \( E \) . Furthermore, the proposition shows that if the reduction is good then it gives a well defined elliptic curve \( \widetilde{E} \) over \( {\overline{\mathbb{F}}}_{p} \) up to isomorphism over \( {\overline{\mathbb{F}}}_{p} \) .
The results of this section explain some earlier terminology. Any elliptic curve \( E \) over \( \mathbb{Q} \) can be viewed instead as a curve \( {E}_{\overline{\mathbb{Q}}} \) over \( \overline{\mathbb{Q}} \) . Let \( p \in \mathbb{Z} \) be prime and let \( \mathfrak{p} \subset \overline{\mathbb{Z}} \) be a maximal ideal lying over \( p \) . We have shown that ordinary, supersingular, and multiplicative reduction of \( E \) at \( p \) do not change upon reducing \( {E}_{\overline{\mathbb{Q}}} \) at \( \mathfrak{p} \) instead, but additive reduction of \( E \) at \( p \) improves to good or multiplicative reduction of \( {E}_{\overline{\mathbb{Q}}} \) at \( \mathfrak{p} \) . This motivates the words "semistable" for good or multiplicative reduction and "unstable" for additive reduction, as given in the previous section working over \( \mathbb{Q} \) . Exercise 8.4.4 gives an example of an elliptic curve over \( \mathbb{Q} \) whose additive reduction at a rational prime \( p \) improves in \( \overline{\mathbb{Q}} \) .
Good reduction is characterized by the \( j \) -invariant of an elliptic curve, something we can check without needing to put the Weierstrass equation into \( \mathfrak{p} \) -minimal form or even into \( \mathfrak{p} \) -integral form.
Proposition 8.4.3. Let \( E \) be an elliptic curve over \( \overline{\mathbb{Q}} \) and let \( \mathfrak{p} \) be a maximal ideal of \( \overline{\mathbb{Z}} \) . Then \( E \) has good reduction at \( \mathfrak{p} \) if and only if \( j\left( E\right) \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) .
Proof. Suppose \( E \) has good reduction at \( \mathfrak{p} \) . We may assume the Weierstrass equation for \( E \) is \( \mathfrak{p} \) -minimal. Thus \( \Delta \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) }^{ * } \) and \( {c}_{4} \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \), so that \( j\left( E\right) = \) \( {c}_{4}^{3}/\Delta \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) as desired.
For the other direction, assume the Weierstrass equation of \( E \) is in \( \mathfrak{p} \) - integral Legendre form, and again assume that \( \mathfrak{p} \) does not lie over 2 . Since \( \Delta = {16}{\lambda }^{2}{\left( 1 - \lambda \right) }^{2} \), showing that \( E \) has good reduction at \( \mathfrak{p} \) reduces to showing that \( \lambda \left( {1 - \lambda }\right) \notin \mathfrak{p}{\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) . But since also \( {c}_{4} = {16}\left( {1 - \lambda \left( {1 - \lambda }\right) }\right) \), the relation \( j = {c}_{4}^{3}/\Delta \) is
\[
j{\lambda }^{2}{\left( 1 - \lambda \right) }^{2} = {16}^{2}{\left( 1 - \lambda \left( 1 - \lambda \right) \right) }^{3}.
\]
The assumptions are that \( j \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) and \( \mathfrak{p} \) does not lie over 2 . Thus if \( \lambda \left( {1 - \lambda }\right) \) lies in \( \mathfrak{p}{\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) then so does the left side, while the right side does not, contradiction. For the Deuring form proof when \( \mathfrak{p} \) does lie over 2, again see Appendix A of [Sil86].
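The rearranged relation can be sanity-checked numerically over \( \mathbb{Q} \) (a sketch of ours; exact rational arithmetic, with sample values of \( \lambda \) chosen arbitrarily):

```python
from fractions import Fraction

# Sanity check (ours) of the identity in the proof: with
# c4 = 16(1 - lam(1 - lam)) and Delta = 16 lam^2 (1 - lam)^2,
# j = c4^3 / Delta rearranges to j lam^2 (1 - lam)^2 = 16^2 (1 - lam(1-lam))^3.
for lam in (Fraction(2, 7), Fraction(-3, 5), Fraction(9, 4)):
    c4 = 16 * (1 - lam * (1 - lam))
    delta = 16 * lam ** 2 * (1 - lam) ** 2
    j = c4 ** 3 / delta
    assert j * lam ** 2 * (1 - lam) ** 2 == 256 * (1 - lam * (1 - lam)) ** 3
print("identity holds for the sample values")
```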
So far we have reduced a Weierstrass equation, but not the points themselves of the elliptic curve. To reduce the points, we first show more generally
that for any positive integer \( n \) the maximal ideal \( \mathfrak{p} \) determines a reduction map
\[
{}^{ \sim } : {\mathbb{P}}^{n}\left( \overline{\mathbb{Q}}\right) \rightarrow {\mathbb{P}}^{n}\left( {\overline{\mathbb{F}}}_{p}\right)
\]
(8.25)
To see this, take any point
\[
P = \left\lbrack {{x}_{0},\ldots ,{x}_{n}}\right\rbrack \in {\mathbb{P}}^{n}\left( \overline{\mathbb{Q}}\right)
\]
and consider the values \( m \in \mathbb{N} \) such that \( P \) has a representative with all of \( {x}_{0},\ldots ,{x}_{m} \) in \( {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) and at least one of them equal to 1 . Such values of \( m \) exist, e.g., the smallest \( m \) such that \( {x}_{m} \neq 0 \) . And given such an \( m \) that is less than \( n \), if \( {x}_{m + 1} \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) then \( m + 1 \) also works, but otherwise \( 1/{x}_{m + 1} \in {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) by Lemma 8.4.1 and so multiplying \( P \) through by \( 1/{x}_{m + 1} \) shows that again \( m + 1 \) works. Thus \( P \) has a representation such that all coordinates lie in \( {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) and some \( {x}_{i} \) is 1 . This representation reduces to
\[
\widetilde{P} = \left\lbrack {{\widetilde{x}}_{0},\ldots ,{\widetilde{x}}_{n}}\right\rbrack \in {\mathbb{P}}^{n}\left( {\overline{\mathbb{F}}}_{p}\right) .
\]
The scalar quotient of two such representations of \( P \) must belong to \( {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) }^{ * } \) and thus reduce to \( {\overline{\mathbb{F}}}_{p}^{ * } \), so the two representations reduce to the same element of \( {\mathbb{P}}^{n}\left( {\overline{\mathbb{F}}}_{p}\right) \) and the reduction map is well defined. The description of the map shows that an affine point \( P = \left( {{x}_{1},\ldots ,{x}_{n}}\right) = \left\lbrack {1,{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) of \( {\mathbb{P}}^{n}\left( \overline{\mathbb{Q}}\right) \) reduces to an affine point of \( {\mathbb{P}}^{n}\left( {\overline{\mathbb{F}}}_{p}\right) \) if and only if all of its coordinates lie in \( {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) } \) .
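The normalize-then-reduce procedure can be sketched for points of \( {\mathbb{P}}^{n}\left( \mathbb{Q}\right) \) reduced at a rational prime \( p \) (a simplified model of (8.25) over \( \mathbb{Q} \) rather than \( \overline{\mathbb{Q}} \); the function names are ours):

```python
from fractions import Fraction

# A simplified model (ours) of the reduction map (8.25): rescale a
# representative by a coordinate of minimal p-adic valuation, so every
# coordinate becomes p-integral and that coordinate becomes a unit,
# then reduce mod p.
def val_p(x, p):
    """p-adic valuation of a nonzero rational."""
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def reduce_point(coords, p):
    coords = [Fraction(c) for c in coords]
    pivot = min((c for c in coords if c != 0), key=lambda c: val_p(c, p))
    scaled = [c / pivot for c in coords]  # now p-integral, pivot becomes 1
    # each denominator is now prime to p, so it has an inverse mod p
    return [(c.numerator * pow(c.denominator, -1, p)) % p for c in scaled]

# [1/5 : 3 : 10] in P^2(Q): scaling by 5 gives [1 : 15 : 50],
# which reduces at p = 5 to [1 : 0 : 0].
print(reduce_point([Fraction(1, 5), 3, 10], 5))
```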
In particular, if \( E \) is an elliptic curve over \( \mathbb{Q} \) or over \( \overline{\mathbb{Q}} \) and \( \widetilde{E} \) is its reduction at \( p \) or at \( \mathfrak{p} \) then the points of \( E \), a subset of \( {\mathbb{P}}^{2}\left( \overline{\mathbb{Q}}\right) \), reduce to points of \( \widetilde{E} \)
since
\[
\widetilde{E}\left( {\widetilde{x},\widetilde{y}}\right) = \widetilde{E\left( {x, y}\right) } = \widetilde{{0}_{\overline{\mathbb{Q}}}} = {0}_{{\overline{\mathbb{F}}}_{p}}.
\]
Since the only nonaffine point of any elliptic curve is its zero, the end of the previous paragraph shows that reduction from \( E \) to \( \widetilde{E} \) has for its kernel zero and the affine points whose coordinates are not both \( \mathfrak{p} \) -integral,
\[
\ker \left( \widetilde{}\right) = E - {\overline{\mathbb{Z}}}_{\left( \mathfrak{p}\right) }^{2}
\]
(8.26)
Let \( E \) be an elliptic curve over \( \overline{\mathbb{Q}} \) and let \( N \) be a positive integer. To study the reduction of \( N \) -torsion at a maximal ideal \( \mathfrak{p} \), recall the \( N \) th division polynomial \( {\psi }_{N} \) from Section 7.1, satisfying
\[
\left\lbrack N\right\rbrack \left( P\right) = {0}_{E} \Leftrightarrow {\psi }_{N}\left( P\right) = 0,\;{\psi }_{N} \in \mathbb{Z}\left\lbrack {{g}_{2},{g}_{3}, x, y}\right\rbrack .
\]
Theorem 6.3.1 Let \( X, Y \), and \( Z \) be graphs. If \( f : Z \rightarrow X \) and \( g : Z \rightarrow \) \( Y \), then there is a unique homomorphism \( \phi \) from \( Z \) to \( X \times Y \) such that \( f = {p}_{X} \circ \phi \) and \( g = {p}_{Y} \circ \phi \) .
Proof. Assume that we are given homomorphisms \( f : Z \rightarrow X \) and \( g \) : \( Z \rightarrow Y \) . The map
\[
\phi : z \mapsto \left( {f\left( z\right), g\left( z\right) }\right)
\]
is readily seen to be a homomorphism from \( Z \) to \( X \times Y \) . Clearly, \( {p}_{X} \circ \phi = f \) and \( {p}_{Y} \circ \phi = g \), and furthermore, \( \phi \) is uniquely determined by \( f \) and \( g \) . \( ▱ \)
If \( X \) and \( Y \) are graphs, we use \( \operatorname{Hom}\left( {X, Y}\right) \) to denote the set of all homomorphisms from \( X \) to \( Y \) .
Corollary 6.3.2 For any graphs \( X, Y \), and \( Z \) ,
\[
\left| {\operatorname{Hom}\left( {Z, X \times Y}\right) }\right| = \left| {\operatorname{Hom}\left( {Z, X}\right) }\right| \left| {\operatorname{Hom}\left( {Z, Y}\right) }\right| .
\]
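Corollary 6.3.2 can be verified by brute force on small graphs; the encoding below (vertex lists with edges as two-element frozensets) is our own illustrative convention:

```python
from itertools import product

# Brute-force check (ours) of Corollary 6.3.2 on small graphs.
# A graph is (vertex list, set of 2-element frozenset edges).
def hom_count(Z, W):
    """Number of maps V(Z) -> V(W) carrying edges to edges."""
    (VZ, EZ), (VW, EW) = Z, W
    total = 0
    for values in product(VW, repeat=len(VZ)):
        f = dict(zip(VZ, values))
        if all(frozenset((f[u], f[v])) in EW for u, v in map(tuple, EZ)):
            total += 1
    return total

def cat_product(X, Y):
    """The product graph X x Y: (x1,y1) ~ (x2,y2) iff x1 ~ x2 and y1 ~ y2."""
    (VX, EX), (VY, EY) = X, Y
    V = [(x, y) for x in VX for y in VY]
    E = {frozenset((a, b)) for a in V for b in V
         if frozenset((a[0], b[0])) in EX and frozenset((a[1], b[1])) in EY}
    return V, E

K2 = ([0, 1], {frozenset((0, 1))})
K3 = ([0, 1, 2], {frozenset(e) for e in [(0, 1), (1, 2), (0, 2)]})
P3 = ([0, 1, 2], {frozenset((0, 1)), frozenset((1, 2))})  # path on 3 vertices

lhs = hom_count(P3, cat_product(K2, K3))
rhs = hom_count(P3, K2) * hom_count(P3, K3)
print(lhs, rhs)  # 24 24
assert lhs == rhs
```

Here \( {K}_{2} \times {K}_{3} \cong {C}_{6} \), and homomorphisms from the path \( {P}_{3} \) are just walks of length 2, so both sides equal 24.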
Our last theorem allows us to derive another property of the set of isomorphism classes of cores. Recall that a partially ordered set is a lattice if each pair of elements has a least upper bound and a greatest lower bound.
Lemma 6.3.3 The set of isomorphism classes of cores, partially ordered by " \( \rightarrow \) ", is a lattice.
Proof. We start with the least upper bound. Let \( X \) and \( Y \) be cores. For any core \( Z \), if \( X \rightarrow Z \) and \( Y \rightarrow Z \), then \( X \cup Y \rightarrow Z \) . Hence \( {\left( X \cup Y\right) }^{ \bullet } \) is the least upper bound of \( X \) and \( Y \) .
For the greatest lower bound we note that by the previous theorem, if \( Z \rightarrow X \) and \( Z \rightarrow Y \), then \( Z \rightarrow X \times Y \) . Hence \( {\left( X \times Y\right) }^{ \bullet } \) is the greatest lower bound of \( X \) and \( Y \) .
It is probably a surprise that the greatest lower bound \( {\left( X \times Y\right) }^{ \bullet } \) normally has more vertices than the least upper bound. Life can be surprising.
If \( X \) is a graph, then the vertices \( \left( {x, x}\right) \), where \( x \in V\left( X\right) \), induce a subgraph of \( X \times X \) isomorphic to \( X \) . We call it the diagonal of the product. In general, \( X \times Y \) need not contain a copy of \( X \) ; consider the product \( {K}_{2} \times {K}_{3} \), which is isomorphic to \( {C}_{6} \) and thus contains no copy of \( {K}_{3} \) .
To conclude this section we describe another construction closely related to the product. Suppose that \( X \) and \( Y \) are graphs with homomorphisms \( f \) and \( g \), respectively, to a graph \( F \) . The subdirect product of \( \left( {X, f}\right) \) and \( \left( {Y, g}\right) \) is the subgraph of \( X \times Y \) induced by the set of vertices
\[
\{ \left( {x, y}\right) \in V\left( X\right) \times V\left( Y\right) : f\left( x\right) = g\left( y\right) \} .
\]
(The proof is left as an exercise.) If \( X \) is a connected bipartite graph, then it has exactly two homomorphisms \( {f}_{1} \) and \( {f}_{2} \) to \( {K}_{2} \) . Suppose \( Y \) is connected and \( g \) is a homomorphism from \( Y \) to \( {K}_{2} \) . Then the two subdirect products of \( \left( {X,{f}_{i}}\right) \) with \( \left( {Y, g}\right) \) form the components of \( X \times Y \) . (Yet another exercise.)
## 6.4 The Map Graph
Let \( F \) and \( X \) be graphs. The map graph \( {F}^{X} \) has the set of functions from \( V\left( X\right) \) to \( V\left( F\right) \) as its vertices; two such functions \( f \) and \( g \) are adjacent in \( {F}^{X} \) if and only if whenever \( u \) and \( v \) are adjacent in \( X \), the vertices \( f\left( u\right) \) and \( g\left( v\right) \) are adjacent in \( F \) . A vertex in \( {F}^{X} \) has a loop on it if and only if the corresponding function is a homomorphism. Even if there are no homomorphisms from \( X \) to \( F \), the map graph \( {F}^{X} \) can still be very useful, as we will see.
Now, suppose that \( \psi \) is a homomorphism from \( X \) to \( Y \) . If \( f \) is a function from \( V\left( Y\right) \) to \( V\left( F\right) \), then the composition \( f \circ \psi \) is a function from \( V\left( X\right) \) to \( V\left( F\right) \) . Hence \( \psi \) determines a map from the vertices of \( {F}^{Y} \) to \( {F}^{X} \), which we call the adjoint map to \( \psi \) .
Theorem 6.4.1 If \( F \) is a graph and \( \psi \) is a homomorphism from \( X \) to \( Y \) , then the adjoint of \( \psi \) is a homomorphism from \( {F}^{Y} \) to \( {F}^{X} \) .
Proof. Suppose that \( f \) and \( g \) are adjacent vertices of \( {F}^{Y} \) and that \( {x}_{1} \) and \( {x}_{2} \) are adjacent vertices in \( X \) . Then \( \psi \left( {x}_{1}\right) \sim \psi \left( {x}_{2}\right) \), and therefore \( f\left( {\psi \left( {x}_{1}\right) }\right) \sim g\left( {\psi \left( {x}_{2}\right) }\right) \) . Hence \( f \circ \psi \) and \( g \circ \psi \) are adjacent in \( {F}^{X} \) .
Theorem 6.4.2 For any graphs \( F, X \), and \( Y \), we have \( {F}^{X \times Y} \cong {\left( {F}^{X}\right) }^{Y} \) .
Proof. It is immediate that \( {F}^{X \times Y} \) and \( {\left( {F}^{X}\right) }^{Y} \) have the same number of vertices. We start by defining the natural bijection between these sets, and then we will show that it is an isomorphism.
Suppose that \( g \) is a map from \( V\left( {X \times Y}\right) \) to \( F \) . For any fixed \( y \in V\left( Y\right) \) the map
\[
{g}_{y} : x \mapsto g\left( {x, y}\right)
\]
is an element of \( {F}^{X} \) . Therefore, the map
\[
{\Phi }_{g} : y \mapsto {g}_{y}
\]
is an element of \( {\left( {F}^{X}\right) }^{Y} \) . The mapping \( g \mapsto {\Phi }_{g} \) is the bijection that we need.
Now, we must show that this bijection is in fact an isomorphism. So let \( f \) and \( g \) be adjacent vertices of \( {F}^{X \times Y} \) . We must show that \( {\Phi }_{f} \) and \( {\Phi }_{g} \) are adjacent vertices of \( {\left( {F}^{X}\right) }^{Y} \) . Let \( {y}_{1} \) and \( {y}_{2} \) be adjacent vertices in \( Y \) . For any two vertices \( {x}_{1} \sim {x}_{2} \) in \( X \) we have
\[
\left( {{x}_{1},{y}_{1}}\right) \sim \left( {{x}_{2},{y}_{2}}\right)
\]
and since \( f \sim g \) ,
\[
f\left( {{x}_{1},{y}_{1}}\right) \sim g\left( {{x}_{2},{y}_{2}}\right)
\]
and so
\[
{\Phi }_{f}\left( {y}_{1}\right) \sim {\Phi }_{g}\left( {y}_{2}\right)
\]
A similar argument shows that if \( f \nsim g \), then \( {\Phi }_{f} \nsim {\Phi }_{g} \), and hence the result follows.
Corollary 6.4.3 For any graphs \( F, X \), and \( Y \), we have
\[
\left| {\operatorname{Hom}\left( {X \times Y, F}\right) }\right| = \left| {\operatorname{Hom}\left( {Y,{F}^{X}}\right) }\right| .
\]
Proof. We have just seen that \( {F}^{X \times Y} \cong {\left( {F}^{X}\right) }^{Y} \), and so they have the same number of loops, which are precisely the homomorphisms.
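The adjunction of Corollary 6.4.3 can likewise be checked by brute force on small graphs (the helper names and graph encoding are our own):

```python
from itertools import product

# Brute-force check (ours) of Corollary 6.4.3: |Hom(X x Y, F)| = |Hom(Y, F^X)|.
# Graphs are (vertex list, set of frozenset edges); loops in F^X appear as
# singleton frozensets, which the homomorphism test below handles uniformly.
def hom_count(Z, W):
    (VZ, EZ), (VW, EW) = Z, W
    total = 0
    for values in product(VW, repeat=len(VZ)):
        f = dict(zip(VZ, values))
        if all(frozenset((f[u], f[v])) in EW for u, v in map(tuple, EZ)):
            total += 1
    return total

def cat_product(X, Y):
    (VX, EX), (VY, EY) = X, Y
    V = [(x, y) for x in VX for y in VY]
    E = {frozenset((a, b)) for a in V for b in V
         if frozenset((a[0], b[0])) in EX and frozenset((a[1], b[1])) in EY}
    return V, E

def map_graph(F, X):
    """F^X: functions V(X) -> V(F) as tuples; f ~ g iff every edge uv of X
    has f(u) ~ g(v) and f(v) ~ g(u) in F."""
    (VF, EF), (VX, EX) = F, X
    V = list(product(VF, repeat=len(VX)))
    pos = {v: i for i, v in enumerate(VX)}
    def adj(f, g):
        return all(frozenset((f[pos[u]], g[pos[v]])) in EF
                   and frozenset((f[pos[v]], g[pos[u]])) in EF
                   for u, v in map(tuple, EX))
    E = {frozenset((f, g)) for f in V for g in V if adj(f, g)}
    return V, E

K2 = ([0, 1], {frozenset((0, 1))})
P3 = ([0, 1, 2], {frozenset((0, 1)), frozenset((1, 2))})
K3 = (["a", "b", "c"], {frozenset(e) for e in [("a", "b"), ("b", "c"), ("a", "c")]})

lhs = hom_count(cat_product(K2, P3), K3)
rhs = hom_count(P3, map_graph(K3, K2))
print(lhs, rhs)
assert lhs == rhs
```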
Since there is a homomorphism from \( X \times F \) to \( F \), the last result implies that there is a homomorphism from \( F \) into \( {F}^{X} \) . We can be more precise, although we leave the proof as an exercise.
Lemma 6.4.4 If \( X \) has at least one edge, the constant functions from \( V\left( X\right) \) to \( V\left( F\right) \) induce a subgraph of \( {F}^{X} \) isomorphic to \( F \) .
## 6.5 Counting Homomorphisms
By counting homomorphisms we will derive another interesting property of the map graph.
Lemma 6.5.1 Let \( X \) and \( Y \) be fixed graphs. Suppose that for all graphs \( Z \) we have
\[
\left| {\operatorname{Hom}\left( {Z, X}\right) }\right| = \left| {\operatorname{Hom}\left( {Z, Y}\right) }\right|
\]
Then \( X \) and \( Y \) are isomorphic.
Proof. Let \( \operatorname{Inj}\left( {A, B}\right) \) denote the set of injective homomorphisms from a graph \( A \) to a graph \( B \) . We aim to show that for all \( Z \) we have \( \left| {\operatorname{Inj}\left( {Z, X}\right) }\right| = \) \( \left| {\operatorname{Inj}\left( {Z, Y}\right) }\right| \) . By taking \( Z \) equal to \( X \) and then \( Y \), we see that there are injective homomorphisms from \( X \) to \( Y \) and \( Y \) to \( X \) . Since \( X \) and \( Y \) must have the same number of vertices, an injective homomorphism is surjective, and thus \( X \) is isomorphic to \( Y \) .
We prove that \( \left| {\operatorname{Inj}\left( {Z, X}\right) }\right| = \left| {\operatorname{Inj}\left( {Z, Y}\right) }\right| \) by induction on the number of vertices in \( Z \) . It is clearly true if \( Z \) has one vertex, because any homomorphism from a single vertex is injective.
We can partition the homomorphisms from \( Z \) into any graph \( W \) according to the kernel, so we get
\[
\left| {\operatorname{Hom}\left( {Z, W}\right) }\right| = \mathop{\sum }\limits_{\pi }\left| {\operatorname{Inj}\left( {Z/\pi, W}\right) }\right|
\]
where \( \pi \) ranges over all partitions. A homomorphism is an injection if and only if its kernel is the discrete partition, which we shall denote by \( \delta \) . Therefore,
\[
\left| {\operatorname{Inj}\left( {Z, W}\right) }\right| = \left| {\operatorname{Hom}\left( {Z, W}\right) }\right| - \mathop{\sum }\limits_{{\pi \neq \delta }}\left| {\operatorname{Inj}\left( {Z/\pi, W}\right) }\right|
\]
Now, by the induction hypothesis, all the terms on the right hand side of this sum are the same for \( W = X \) and \( W = Y \) . Therefore, we conclude that
\[
\left| {\operatorname{Inj}\left( {Z, X}\right) }\right| = \left| {\operatorname{Inj}\left( {Z, Y}\right) }\right|
\]
and the result follows.
Lemma 6.5.2 For any graphs \( F, X \), and \( Y \) we have
\[
{F}^{X \cup Y} \cong {F}^{X} \times {F}^{Y}
\]
Proof. For any graph \( Z \), we have
\[
\left| {\operatorname{Hom}\left( {Z,{F}^{X \cup Y}}\right) }\right| = \left| {\operatorname{Hom}\left( {\left( {X \cup Y}\right) \times Z, F}\right) }\right|
\]
Theorem 20.20 (Roth). If \( A \subseteq \mathbb{N} \) has positive upper density, then \( A \) contains infinitely many arithmetic progressions of length 3.
## Sketch of the Proof of Furstenberg’s Theorem for \( k \geq 4 \)
The above proof of convergence and recurrence for \( k = 3 \) is due to Furstenberg (1977). The case \( k \geq 4 \) is substantially more complicated. There are several ergodic theoretic proofs of Szemerédi's theorem: the original one from Furstenberg (1977) using diagonal measures, one using characteristic factors due to Furstenberg et al. (1982), and one using the construction from the proof of Host and Kra, see Bergelson et al. (2008). There are also several further proofs using tools from other areas, such as "higher-order" Fourier analysis, see Gowers (2001), model theory, see Towsner (2010), hypergraphs, see Gowers (2007), Nagle, Rödl, Schacht (2006), Rödl, Skokan (2004), and more, see, e.g., Green, Tao (2010b). Thus the fascinating story of finding alternative proofs of Szemerédi's theorem seems not to be finished yet.
We now sketch very briefly the proof due to Furstenberg, Katznelson, Ornstein (1982), and for details we refer to Furstenberg (1981, Ch. 7), Petersen (1989, Sec. 4.3), Tao (2009, Ch. 2), or Einsiedler and Ward (2011, Ch. 7).
If \( \left( {\mathrm{Y};\psi }\right) \) is the trivial factor, i.e., \( {\sum }_{\mathrm{Y}} = \{ \varnothing, Y\} \), then the corresponding projection is \( {Qf} = \left( {{\int }_{\mathrm{Y}}f}\right) \mathbf{1} \), since \( {\mathrm{L}}^{1}\left( \mathrm{Y}\right) = \mathbb{C}\mathbf{1} \) and \( {\int }_{\mathrm{Y}}c\mathbf{1} = c \) . By Theorem 17.19 a measure-preserving system \( \left( {\mathrm{X};\varphi }\right) \) is weakly mixing if and only if its Kronecker factor is trivial, and if and only if it has no nontrivial compact factors (i.e., factors that are isomorphic to a compact Abelian group rotation).
We say that a measure-preserving system \( \left( {\mathrm{X};\varphi }\right) \) has the SZ-property (SZ for Szemerédi) if for every \( k \geq 2 \) and every \( f \in {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) with \( f > 0 \), (20.11) holds. Theorem 20.19 thus expresses that each ergodic measure-preserving system has the SZ-property. By Proposition 20.15 we know that weakly mixing systems do have the SZ-property, and so do systems with discrete spectrum by Exercise 3.
To prove Furstenberg's theorem, one needs relativized versions of the notions "weak mixing" and "discrete spectrum." Let \( \left( {\mathrm{X};\varphi }\right) ,\left( {\mathrm{Y};\psi }\right) \) be measure-preserving systems with Koopman operators \( T \) and \( S \), respectively, and suppose \( \left( {\mathrm{Y};\psi }\right) \) is a factor of \( \left( {\mathrm{X};\varphi }\right) \) with the associated Markov projection \( Q \in \mathrm{M}\left( {\mathrm{X};\mathrm{Y}}\right) \) . We call \( \left( {\mathrm{X};\varphi }\right) \) weakly mixing relative to \( \left( {\mathrm{Y};\psi }\right) \), or a relatively weakly mixing extension of \( \left( {\mathrm{Y};\psi }\right) \) if
\[
\mathop{\lim }\limits_{{N \rightarrow \infty }}\frac{1}{N}\mathop{\sum }\limits_{{n = 1}}^{N}{\int }_{\mathrm{Y}}{\left| Q\left( {T}^{n}f \cdot g\right) - {S}^{n}Qf \cdot Qg\right| }^{2} = 0
\]
holds for every \( f, g \in {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) . For the trivial factor \( \left( {\mathrm{Y};\psi }\right) \) this means
\[
\mathop{\lim }\limits_{{N \rightarrow \infty }}\frac{1}{N}\mathop{\sum }\limits_{{n = 1}}^{N}{\left| \left\langle {T}^{n}f, g\right\rangle -\langle f,\mathbf{1}\rangle \cdot \langle \mathbf{1}, g\rangle \right| }^{2} = 0,
\]
i.e., weak mixing by Theorem 9.19 (and Remark 9.20). One can prove the following result:
1) If \( \left( {\mathrm{Y};\psi }\right) \) has the SZ-property and \( \left( {\mathrm{X};\varphi }\right) \) is a relatively weakly mixing extension of \( \left( {\mathrm{Y};\psi }\right) \), then \( \left( {\mathrm{X};\varphi }\right) \) also has the SZ-property.
The proof uses the van der Corput lemma and is almost literally the same as for Proposition 20.15.
Another notion needed is that of compact extensions which can be defined in purely measure theoretic terms. We shall not give the definition here but note that for the proof of Furstenberg's theorem only the following two properties of compact extensions are needed:
2) If \( \left( {\mathrm{X};\varphi }\right) \) is not a relatively weakly mixing extension of \( \left( {\mathrm{Y};\psi }\right) \), then there is an intermediate factor \( \left( {\mathrm{Z};\theta }\right) \) of \( \left( {\mathrm{X};\varphi }\right) \) which is a compact extension of \( \left( {\mathrm{Y};\psi }\right) \) .
3) If \( \left( {\mathrm{Z};\theta }\right) \) is a compact extension of \( \left( {\mathrm{Y};\psi }\right) \) and \( \left( {\mathrm{Y};\psi }\right) \) has the SZ-property, then so does \( \left( {\mathrm{Z};\theta }\right) \) .
Consider now the set \( \mathcal{F} \) of all factors with the SZ-property; it is nonempty since it contains the trivial factor. The set \( \mathcal{F} \) can be partially ordered by the relation of "being a factor." By an argument using Zorn's lemma, after checking the chain condition, one finds a maximal element \( \left( {\mathrm{Y};\psi }\right) \) in \( \mathcal{F} \) . If \( \left( {\mathrm{X};\varphi }\right) \) is a relatively weakly mixing extension of \( \left( {\mathrm{Y};\psi }\right) \), then by 1) above \( \left( {\mathrm{X};\varphi }\right) \) also has the SZ-property. Otherwise, by 2), there is a compact extension \( \left( {\mathrm{Z};\theta }\right) \) of \( \left( {\mathrm{Y};\psi }\right) \), which by 3) has the SZ-property, contradicting maximality.
## 20.5 The Furstenberg-Sárközy Theorem
We continue in the same spirit, but instead of arithmetic progressions we now look for pairs of the form \( \left\{ {a, a + {n}^{2}}\right\} \) for \( a, n \in \mathbb{N} \), see Furstenberg (1977) and Sárközy (1978a).
Theorem 20.21 (Furstenberg-Sárközy). If \( A \subseteq \mathbb{N} \) has positive upper density, then there exist \( a, n \in \mathbb{N} \) such that \( a, a + {n}^{2} \in A \) .
In order to prove the above theorem we first need an appropriate correspondence principle.
Theorem 20.22 (Furstenberg Correspondence Principle for Squares). If for every ergodic measure-preserving system \( \left( {\mathrm{X};\varphi }\right) \), its Koopman operator \( T \mathrel{\text{:=}} {T}_{\varphi } \) on \( {\mathrm{L}}^{2}\left( \mathrm{X}\right) \) and every \( 0 < f \in {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) there exists \( n \in \mathbb{N} \) such that the condition
\[
{\int }_{\mathrm{X}}f \cdot \left( {{T}^{{n}^{2}}f}\right) > 0
\]
(20.18)
is satisfied, then Theorem 20.21 holds.
The proof of this correspondence principle is analogous to that of Theorem 20.4. Then, to prove Theorem 20.21 one first shows that for a measure-preserving system \( \left( {\mathrm{X};\varphi }\right) \) and its Koopman operator \( T \mathrel{\text{:=}} {T}_{\varphi } \) the limit
\[
\mathop{\lim }\limits_{{N \rightarrow \infty }}\frac{1}{N}\mathop{\sum }\limits_{{n = 1}}^{N}f \cdot {T}^{{n}^{2}}f
\]
(20.19)
exists in \( {\mathrm{L}}^{2}\left( \mathrm{X}\right) \) for every \( f \in {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \), see Theorem 21.17 below for a more general case. To complete the proof it remains to establish the next result.
Proposition 20.23. Let \( \left( {\mathrm{X};\varphi }\right) \) be an ergodic measure-preserving system. Then the limit (20.19) is strictly positive for every \( 0 < f \in {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) .
The proof is analogous to that of Theorem 20.19 for \( k = 3 \) and again uses the decomposition \( {\mathrm{L}}^{2}\left( \mathrm{X}\right) = {E}_{\text{aws }} \oplus {E}_{\text{rev }} \), the vanishing of the above limit on \( {E}_{\text{aws }} \) and a relative denseness argument on \( {E}_{\mathrm{{rev}}} \), see Exercise 4.
A sequence \( {\left( {n}_{k}\right) }_{k \in \mathbb{N}} \) is called a Poincaré sequence (see, e.g., Def. 3.6 in Furstenberg (1981)) if for every measure-preserving system \( \left( {\mathrm{X};\varphi }\right) ,\mathrm{X} = \left( {X,\sum ,\mu }\right) \) , and \( A \in \sum \) with \( \mu \left( A\right) > 0 \) one has
\[
\mu \left( {A \cap {\varphi }^{-{n}_{k}}\left( A\right) }\right) > 0\;\text{ for some }k \in \mathbb{N}.
\]
In this case the set \( \left\{ {{n}_{k} : k \in \mathbb{N}}\right\} \) is called a set of recurrence. Poincaré’s Theorem 6.13 tells us that \( {\left( n\right) }_{n \in \mathbb{N}} \) is a Poincaré sequence, explaining the terminology. One can prove that Proposition 20.23 remains valid even for not necessarily ergodic measure-preserving systems, so we obtain that \( {\left( {n}^{2}\right) }_{n \in \mathbb{N}} \) is a Poincaré sequence. More generally, we will see in the next chapter that \( {\left( p\left( n\right) \right) }_{n \in \mathbb{N}} \) is a Poincaré sequence for every integer polynomial \( p \) with \( p\left( 0\right) = 0 \), see also Exercise 17.
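The combinatorial content of Theorem 20.21 can be sampled in a finite range. The sketch below (our own illustration; a finite check proves nothing about the density threshold in the theorem) computes the largest subset of \( \{ 0,\ldots, N - 1\} \) containing no two elements that differ by a positive square:

```python
# Largest square-difference-free subset of {0, ..., N-1}, by bitmask
# enumeration (fine for small N).  Such sets must be sparse, in the
# spirit of the Furstenberg-Sarkozy theorem.
def max_square_free(N):
    squares = {k * k for k in range(1, N) if k * k < N}
    best = []
    for mask in range(1 << N):
        elems = [i for i in range(N) if mask >> i & 1]
        if len(elems) <= len(best):
            continue
        if all(b - a not in squares for i, a in enumerate(elems)
               for b in elems[i + 1:]):
            best = elems
    return best

assert len(max_square_free(7)) == 3    # e.g. {0, 2, 5}: differences 2, 3, 5
assert len(max_square_free(10)) == 4   # e.g. {0, 2, 5, 7}
```

Already at \( N = 10 \) fewer than half the elements can be kept; Sárközy's theorem says the maximal density tends to zero.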
Furthermore, Sárközy (1978b) showed that Theorem 20.21 also remains valid if one replaces the set of differences \( \left\{ {{n}^{2} : n \in \mathbb{N}}\right\} \) by the shifted set of primes \( \mathbb{P} - 1 \) , i.e., if \( A \) has positive upper density then there are \( a \in \mathbb{N}, p \in \mathbb{P} \) with \( a, a + p - 1 \in A \) . As in the case of arithmetic progressions handled earlier in this chapter, one can establish a Furstenberg correspondence principle in both directions and show that Sárközy’s result is equivalent to the statement that \( \mathbb{P} - 1 \) is a set of recurrence. For more examples of sets of recurrence such as \( \mathbb{P} + 1 \) and sets coming from generalized polynomials as well as properties and related notions see, e.g., Bourgain (1987), Bergelson and Håland (1996), and Bergelson et al. (2014).
Host and Kra (2005a) proved convergence of
Corollary 1.141. A multimap \( F : X \rightrightarrows Y \) between two metric spaces is c-subregular at \( \left( {\bar{x},\bar{y}}\right) \in F \) if and only if \( {F}^{-1} \) is \( c \) -calm at \( \left( {\bar{y},\bar{x}}\right) \) .
## 1.6.7 Characterizations of the Pseudo-Lipschitz Property
One can give a characterization of pseudo-Lipschitz behavior of a multimap \( F \) in terms of the function \( \left( {x, y}\right) \mapsto d\left( {y, F\left( x\right) }\right) \) or in terms of the distance function \( {d}_{F} \) to the graph of the multimap \( F \) given by
\[
{d}_{F}\left( {u, v}\right) \mathrel{\text{:=}} \inf \{ d\left( {\left( {u, v}\right) ,\left( {x, y}\right) }\right) : \left( {x, y}\right) \in F\} .
\]
In the sequel, we change the metric on \( X \times Y \) in order to reduce to the simpler case of rate one: given \( c > 0 \), we set
\[
{d}_{c}\left( {\left( {x, y}\right) ,\left( {{x}^{\prime },{y}^{\prime }}\right) }\right) = \max \left( {{cd}\left( {x,{x}^{\prime }}\right), d\left( {y,{y}^{\prime }}\right) }\right) .
\]
(1.47)
Theorem 1.142. Given \( c > 0 \) and two metric spaces \( X, Y \) whose product is endowed with the metric \( {d}_{c} \), the following assertions about a multimap \( F : X \rightrightarrows Y \) and \( \left( {\bar{x},\bar{y}}\right) \in \) \( F \) are equivalent:
(a) For some \( N \in \mathcal{N}\left( {\bar{x},\bar{y}}\right) \) and all \( \left( {u, v}\right) \in N \) one has \( d\left( {v, F\left( u\right) }\right) \leq {d}_{c}\left( {\left( {u, v}\right), F}\right) \) ;
(b) For some \( N \in \mathcal{N}\left( {\bar{x},\bar{y}}\right) \) the function \( \left( {x, y}\right) \mapsto d\left( {y, F\left( x\right) }\right) \) is 1-Lipschitzian on \( N \) ;
(c) \( F \) is pseudo-Lipschitzian with rate \( c \) around \( \left( {\bar{x},\bar{y}}\right) \) .
Proof. We use the fact that for a subset \( F \) of a metric space \( Z \) and for a given neighborhood \( N \) of a point \( \bar{z} \in F \) there exists a neighborhood \( P \) of \( \bar{z} \) such that for all \( w \in P \) one has \( d\left( {w, F}\right) = d\left( {w, F \cap N}\right) \) . In fact, taking \( P \mathrel{\text{:=}} B\left( {\bar{z}, r}\right) \), where \( r > 0 \) is such that \( B\left( {\bar{z},{2r}}\right) \subset N \), for all \( w \in P, z \in F \smallsetminus N \) one has \( d\left( {w, z}\right) \geq d\left( {z,\bar{z}}\right) - d\left( {w,\bar{z}}\right) \geq {2r} - r \) , while \( d\left( {w, F}\right) \leq d\left( {w,\bar{z}}\right) < r \) .
(a) \( \Rightarrow \) (b) When (a) holds, for all \( \left( {x, y}\right) ,\left( {{x}^{\prime },{y}^{\prime }}\right) \in N \) one has \( d\left( {y, F\left( x\right) }\right) \leq {d}_{c}\left( {\left( {x, y}\right), F}\right) \) ,
\[
d\left( {y, F\left( x\right) }\right) - d\left( {{y}^{\prime }, F\left( {x}^{\prime }\right) }\right) \leq \inf \left\{ {\max \left( {{cd}\left( {x,{x}^{\prime }}\right), d\left( {y, z}\right) }\right) : z \in F\left( {x}^{\prime }\right) }\right\} - d\left( {{y}^{\prime }, F\left( {x}^{\prime }\right) }\right)
\]
\[
\leq \max \left( {{cd}\left( {x,{x}^{\prime }}\right), d\left( {y, F\left( {x}^{\prime }\right) }\right) }\right) - d\left( {{y}^{\prime }, F\left( {x}^{\prime }\right) }\right)
\]
\[
\leq \max \left( {{cd}\left( {x,{x}^{\prime }}\right), d\left( {y,{y}^{\prime }}\right) }\right) .
\]
(b) \( \Rightarrow \) (c) Assume (b) holds and take a neighborhood \( P \mathrel{\text{:=}} U \times V \) of \( \bar{z} \mathrel{\text{:=}} \left( {\bar{x},\bar{y}}\right) \) associated to \( N \) as in the preliminary part of the proof, so that for every \( w \mathrel{\text{:=}} \left( {u, v}\right) \in \) \( P \) one has \( d\left( {w, F}\right) = d\left( {w, F \cap N}\right) \) . Then for all \( u, x \in U, v \in F\left( x\right) \cap V \) one has
\[
d\left( {v, F\left( u\right) }\right) \leq d\left( {v, F\left( x\right) }\right) + {d}_{c}\left( {\left( {u, v}\right) ,\left( {x, v}\right) }\right) = {cd}\left( {u, x}\right) .
\]
Thus, by (1.46), \( F \) is pseudo-Lipschitzian with rate \( c \) around \( \left( {\bar{x},\bar{y}}\right) \) .
(c) \( \Rightarrow \) (a) Let \( U \in \mathcal{N}\left( \bar{x}\right), V \in \mathcal{N}\left( \bar{y}\right) \) be such that for every \( u, x \in U, v \in F\left( x\right) \cap V \) , one has \( d\left( {v, F\left( u\right) }\right) \leq {cd}\left( {u, x}\right) \) . Let \( {U}^{\prime } \in \mathcal{N}\left( \bar{x}\right) ,{V}^{\prime } \in \mathcal{N}\left( \bar{y}\right) \) be such that for all \( \left( {u, v}\right) \in \) \( {U}^{\prime } \times {V}^{\prime } \) one has \( {d}_{c}\left( {\left( {u, v}\right), F}\right) = {d}_{c}\left( {\left( {u, v}\right), F \cap \left( {U \times V}\right) }\right) \) . Since for all \( y \in F\left( x\right) \) we have \( {cd}\left( {x, u}\right) \leq {d}_{c}\left( {\left( {u, v}\right) ,\left( {x, y}\right) }\right) \), taking the infimum over \( \left( {x, y}\right) \in F \cap \left( {U \times V}\right) \), we get \( d\left( {v, F\left( u\right) }\right) \leq {d}_{c}\left( {\left( {u, v}\right), F}\right) \) .
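A concrete instance may help fix ideas (the example is ours, not from the text): take \( X = Y = \mathbb{R} \) and \( F\left( x\right) \mathrel{\text{:=}} \left\lbrack {\left| x\right| , + \infty }\right) \) . For \( v \in F\left( x\right) \), i.e., \( v \geq \left| x\right| \), one has

\[
d\left( {v, F\left( u\right) }\right) = \max \left( {\left| u\right| - v,0}\right) \leq \max \left( {\left| u\right| - \left| x\right| ,0}\right) \leq \left| {u - x}\right| = d\left( {u, x}\right) ,
\]

so the defining inequality (1.46) holds with rate \( c = 1 \) at every point of the graph: \( F \) is pseudo-Lipschitzian with rate 1 around each \( \left( {\bar{x},\bar{y}}\right) \in F \) .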
## 1.6.8 Supplement: Convex-Valued Pseudo-Lipschitzian Multimaps
Pseudo-Lipschitzian multimaps with convex values in a normed space can be characterized in a simple way (the statement below is a characterization because any Lipschitzian multimap is obviously pseudo-Lipschitzian).
Proposition 1.143. Let \( X \) be a metric space, let \( Y \) be a normed space, let \( F : X \rightrightarrows Y \) be a multimap with convex values, and let \( \bar{x} \in X,\bar{y} \in F\left( \bar{x}\right) \) . If \( F \) is pseudo-Lipschitzian around \( \left( {\bar{x},\bar{y}}\right) \), then for some ball \( B \) with center \( \bar{y} \) the multimap \( G \) given by \( G\left( x\right) \mathrel{\text{:=}} F\left( x\right) \cap B \) is Lipschitzian. More precisely, if for some \( q, r,\ell \in \mathbb{P} \mathrel{\text{:=}} \left( {0, + \infty }\right) \) , one has
\[
{e}_{H}\left( {F\left( x\right) \cap B\left\lbrack {\bar{y}, r}\right\rbrack, F\left( {x}^{\prime }\right) }\right) \leq \ell d\left( {x,{x}^{\prime }}\right) \;\forall x,{x}^{\prime } \in B\left\lbrack {\bar{x}, q}\right\rbrack ,
\]
then for \( B \mathrel{\text{:=}} B\left\lbrack {\bar{y}, r}\right\rbrack, G\left( \cdot \right) \mathrel{\text{:=}} F\left( \cdot \right) \cap B \) and \( p < \min \left( {{\ell }^{-1}r, q}\right) \), one has
\[
{d}_{H}\left( {G\left( x\right), G\left( {x}^{\prime }\right) }\right) \leq {2r}{\left( r - \ell p\right) }^{-1}\ell d\left( {x,{x}^{\prime }}\right) \;\forall x,{x}^{\prime } \in B\left\lbrack {\bar{x}, p}\right\rbrack .
\]
Proof. Without loss of generality we may assume that \( \bar{y} = 0 \) . Taking \( x \mathrel{\text{:=}} \bar{x} \), and observing that \( \bar{y} \in F\left( \bar{x}\right) \cap B\left\lbrack {\bar{y}, r}\right\rbrack \), we get that for all \( {x}^{\prime } \in B\left\lbrack {\bar{x}, q}\right\rbrack, F\left( {x}^{\prime }\right) \) is nonempty.
Let \( k \in \left( {\ell ,{p}^{-1}r}\right) \) . Given \( x,{x}^{\prime } \in B\left\lbrack {\bar{x}, p}\right\rbrack \), let us prove that for all \( {y}^{\prime } \in G\left( {x}^{\prime }\right) \) we have \( d\left( {{y}^{\prime }, G\left( x\right) }\right) \leq {2r}{\left( r - kp\right) }^{-1}{kd}\left( {x,{x}^{\prime }}\right) \), which will ensure that
\[
{e}_{H}\left( {G\left( {x}^{\prime }\right), G\left( x\right) }\right) \leq {2r}{\left( r - kp\right) }^{-1}{kd}\left( {x,{x}^{\prime }}\right) .
\]
(1.48)
The result will follow from the symmetry of the roles of \( x \) and \( {x}^{\prime } \) and by taking the infimum over \( k \in \left( {\ell ,{p}^{-1}r}\right) \) . Given \( {y}^{\prime } \in G\left( {x}^{\prime }\right) \), we can pick \( w, z \in F\left( x\right) \) such that \( \parallel w\parallel \leq {kd}\left( {x,\bar{x}}\right) \leq {kp} \) and \( \begin{Vmatrix}{z - {y}^{\prime }}\end{Vmatrix} \leq {k\delta } \) for \( \delta \mathrel{\text{:=}} d\left( {x,{x}^{\prime }}\right) \) . If \( z \in B\left\lbrack {0, r}\right\rbrack \), the expected inequality is satisfied, since \( d\left( {{y}^{\prime }, G\left( x\right) }\right) \leq \begin{Vmatrix}{z - {y}^{\prime }}\end{Vmatrix} \leq {kd}\left( {x,{x}^{\prime }}\right) \) and \( {2r}{\left( r - kp\right) }^{-1} \geq 1 \) . Suppose \( s \mathrel{\text{:=}} \parallel z\parallel > r \) . Let \( y \mathrel{\text{:=}} {tw} + \left( {1 - t}\right) z \), with \( t \mathrel{\text{:=}} \left( {s - r}\right) {\left( s - kp\right) }^{-1} \) . Then \( t \in \left\lbrack {0,1}\right\rbrack, y \in F\left( x\right) \), and since \( s - r = \parallel z\parallel - r \leq \begin{Vmatrix}{z - {y}^{\prime }}\end{Vmatrix} + \begin{Vmatrix}{y}^{\prime }\end{Vmatrix} - r \leq {k\delta },\parallel w\parallel \leq {kp} \) , \( \parallel z\parallel = s \), we have
\[
\parallel y\parallel \leq \left( {s - r}\right) {\left( s - kp\right) }^{-1}{kp} + \left( {r - {kp}}\right) {\left( s - kp\right) }^{-1}s \leq r
\]
and
\[
\parallel z - y\parallel = t\parallel z - w\parallel \leq \left( {s - r}\right) {\left( s - kp\right) }^{-1}\left( {s + {kp}}\right) \leq {\left( s - kp\right) }^{-1}\left( {s + {kp}}\right) {k\delta };
\]
hence since \( \begin{Vmatrix}{z - {y}^{\prime }}\end{Vmatrix} \leq {k\delta },{\left( s - kp\right) }^{-1}\left( {s + {kp}}\right) {k\delta } + {k\delta } = {2s}{\left( s - kp\right) }^{-1}{k\delta } \) and since \( u \mapsto u{\left( u - kp\right) }^{-1} \) is nonincreasing,
\[
\begin{Vmatrix}{y - {y}^{\prime }}\end{Vmatrix} \leq \parallel y - z\parallel + \begin{Vmatrix}{z - {y}^{\prime }}\end{Vmatrix} \leq {2s}{\left( s - kp\right) }^{-1}{k\delta } \leq {2r}{\left( r - kp\right) }^{-1}{k\delta }.
\]
## 1.6.9 Calmness and Metric Regularity Criteria
Now we devise criteria for calmness and metric regularity. Since these concepts do not require any linear structure, it is appropriate first to present criteria in terms of metric structures. Later on, we
Proposition 7.22. We have
\[
{B}_{n} = \mathop{\sum }\limits_{{\pi \in \bar{S}\left( n\right) }}W\left( \pi \right)
\]
(13)
The proof of recurrence (6) for \( \mathop{\sum }\limits_{{\pi \in \bar{S}\left( n\right) }}W\left( \pi \right) \) is analogous, by considering separately the cases \( {\pi }_{n + 1} = n + 1 \) and \( {\pi }_{n + 1} \neq n + 1 \), and is left to the exercises.
Specialization leads again to interesting coefficients appearing as Catalan numbers.
Examples.
1. \( a = s = 0, b = u = 1 \) . Looking at the definition (12) of \( W\left( \pi \right) \), this gives all permutations in \( \bar{S}\left( n\right) \) without double rises, that is, all alternating permutations, starting with a descent (because of \( {\pi }_{0} = 0 \) ) and ending with a descent \( \left( {{\pi }_{n + 1} = n + 1}\right) \) ;
hence \( {B}_{{2n} + 1} = 0 \) . The associated sequences are \( \sigma \equiv 0 \), \( \tau = \left( {{t}_{k} = {k}^{2}}\right) \), and the Catalan numbers \( {B}_{n} = \left( {1,0,1,0,5,0,{61},0,\ldots }\right) \) are called the secant numbers \( {B}_{n} = {\sec }_{n} \) for the following reason. Look at the differential equation in (13) of the last section:
\[
{F}^{\prime } = 1 + {F}^{2},\;{B}^{\prime } = {BF}.
\]
One easily computes \( F\left( z\right) = \tan z \), \( \frac{{B}^{\prime }\left( z\right) }{B\left( z\right) } = \tan z \), and hence \( B\left( z\right) = \sec z = \frac{1}{\cos z} \) . We thus get the famous result that in the expansion
\[
\frac{1}{\cos z} = \mathop{\sum }\limits_{{n \geq 0}}{\sec }_{n}\frac{{z}^{n}}{n!}
\]
the coefficients \( {\sec }_{n} \) count precisely the alternating permutations of even length starting with a descent. For \( n = 4 \) we have \( {\sec }_{4} = 5 \) with the permutations
2143, 3142, 3241, 4132, 4231.
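These counts are easy to confirm by machine. The sketch below (our own check, not part of the text) enumerates "down-up" alternating permutations, which must have even length to end with a descent, and compares with the secant numbers \( 1, 0, 5, 0, 61 \) above:

```python
# Count down-up alternating permutations p1 > p2 < p3 > p4 < ... of
# length n; ending with a descent as well forces n to be even, matching
# the secant numbers sec_n = (1, 0, 1, 0, 5, 0, 61, ...).
from itertools import permutations

def down_up(n):
    return [''.join(map(str, p)) for p in permutations(range(1, n + 1))
            if all((p[i] > p[i + 1]) == (i % 2 == 0) for i in range(n - 1))]

assert [len(down_up(n)) for n in (2, 4, 6)] == [1, 5, 61]
assert down_up(4) == ['2143', '3142', '3241', '4132', '4231']
```

The five permutations produced for \( n = 4 \) are exactly those listed above.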
2. For \( s = 2, u = 1 \), the differential equation for \( F\left( z\right) \) is \( {F}^{\prime } = \) \( 1 + {2F} + {F}^{2} \) . We have solved this equation already in the last section, \( F\left( z\right) = \frac{z}{1 - z} = \mathop{\sum }\limits_{{n \geq 0}}{z}^{n + 1} = \mathop{\sum }\limits_{{n \geq 1}}n!\frac{{z}^{n}}{n!} \) ; hence \( {F}_{n} = n \) ! for \( n \geq 1 \) . Recurrence (6) therefore reads
\[
{B}_{n + 1} = a{B}_{n} + b\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}\left( \begin{array}{l} n \\ k \end{array}\right) {B}_{k}\left( {n - k}\right) !.
\]
Now define the new weight \( \bar{W}\left( \pi \right) \) for \( \pi \in S\left( n\right) \) by
\[
\bar{W}\left( \pi \right) = {a}^{\ell }{b}^{m}
\]
where \( \ell = \# \) fixed points, \( m = \# \) cycles of length greater than or equal to 2. Then it is an easy matter to show that (see Exercise 7.58)
\[
{B}_{n} = \mathop{\sum }\limits_{{\pi \in S\left( n\right) }}\bar{W}\left( \pi \right)
\]
(14)
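Identity (14) (Exercise 7.58) is easy to sanity-check by machine. The sketch below (our own, with arbitrary test values \( a = 2, b = 3 \) ) compares the recurrence above with the cycle-type weight \( \bar{W} \), and also checks the derangement specialization \( a = 0, b = 1 \) :

```python
# Check (14): B_n from B_{n+1} = a*B_n + b*sum_k C(n,k)*B_k*(n-k)!
# equals sum over permutations of a^(#fixed points) * b^(#cycles >= 2).
from itertools import permutations
from math import comb, factorial

def cycle_weight_sum(n, a, b):
    total = 0
    for p in permutations(range(n)):
        seen, fixed, big = set(), 0, 0
        for s in range(n):
            if s in seen:
                continue
            length, x = 0, s
            while x not in seen:          # trace the cycle through s
                seen.add(x)
                x = p[x]
                length += 1
            if length == 1:
                fixed += 1
            else:
                big += 1
        total += a ** fixed * b ** big
    return total

a, b = 2, 3                               # arbitrary test weights
B = [1]
for n in range(6):
    B.append(a * B[n] + b * sum(comb(n, k) * B[k] * factorial(n - k)
                                for k in range(n)))
for n in range(7):
    assert B[n] == cycle_weight_sum(n, a, b)

B0 = [1]                                  # a = 0, b = 1: derangements
for n in range(6):
    B0.append(sum(comb(n, k) * B0[k] * factorial(n - k) for k in range(n)))
assert B0 == [1, 0, 1, 2, 9, 44, 265]
```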
From this we deduce the following examples for \( s = 2, u = 1 \) :
\[
\begin{array}{llll} a = b = 1 & \sigma = \left( {{2k} + 1}\right) ,\;\tau = \left( {k}^{2}\right) & {B}_{n} = n! & \text{all permutations,} \\ a = 0, b = 1 & \sigma = \left( {2k}\right) ,\;\tau = \left( {k}^{2}\right) & {B}_{n} = {D}_{n} & \text{derangements,} \\ a = b & \sigma = \left( {a + {2k}}\right) ,\;\tau = \left( {k\left( {a + k - 1}\right) }\right) & {B}_{n} = \sum {s}_{n, k}{a}^{k} & \text{Stirling polynomial.} \end{array}
\]
3. Suppose \( a = s, b = {2u} \) . The differential equations are
\[
{F}^{\prime } = 1 + {sF} + u{F}^{2},
\]
\[
{B}^{\prime } = {sB} + {2uBF}\text{.}
\]
Differentiating the first equation, we get \( {F}^{\prime \prime } = s{F}^{\prime } + {2uF}{F}^{\prime } \), from
which \( B = {F}^{\prime } \) results because of \( {F}_{1} = {B}_{0} = 1 \), and this means that \( {B}_{n} = {F}_{n + 1} \) . In particular, for \( a = s = 0, u = 1 \), we get \( {F}^{\prime } = 1 + {F}^{2} \) , \( F\left( z\right) = \tan z \), and thus \( B\left( z\right) = {\left( \tan z\right) }^{\prime } = \frac{1}{{\cos }^{2}z} \) . Looking at (12), we find that \( {B}_{n} = {F}_{n + 1} \) counts all alternating permutations in \( S\left( {n + 1}\right) \) starting with a rise \( \left( {{\pi }_{0} = n + 1}\right) \) and ending with a fall \( \left( {{\pi }_{n + 2} = n + 2}\right) \) :

The numbers \( {F}_{n} \) appearing in the expansion of \( \tan z \) are called the tangent numbers \( {\tan }_{n} \), with small values \( \left( {0,1,0,2,0,{16},0,{272},0,\ldots }\right) \), and we have
\[
B\left( z\right) = \frac{1}{{\cos }^{2}z} = \mathop{\sum }\limits_{{n \geq 0}}{\tan }_{n + 1}\frac{{z}^{n}}{n!}.
\]
For \( n = 2 \) we obtain the two permutations 132,231, and for \( n = 4 \) the 16 alternating permutations
\[
\begin{array}{llll} {13254} & {14253} & {14352} & {15243} \\ {15342} & {23154} & {24153} & {24351} \\ {25143} & {25341} & {34152} & {34251} \\ {35142} & {35241} & {45132} & {45231.} \end{array}
\]
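Both the recurrence and the count can be replayed by machine (our own check). Reading off exponential-generating-function coefficients from \( {F}^{\prime } = 1 + {F}^{2} \) gives \( {F}_{n + 1} = \mathop{\sum }\limits_{k}\left( \begin{array}{l} n \\ k \end{array}\right) {F}_{k}{F}_{n - k} \) for \( n \geq 1 \) (our derivation), which we compare against a brute-force enumeration of up-down permutations:

```python
# Tangent numbers from F' = 1 + F^2 (EGF coefficient recurrence), versus
# a brute-force count of up-down permutations p1 < p2 > p3 < p4 > ...
from itertools import permutations
from math import comb

F = [0, 1]                        # F_0 = 0, F_1 = 1 since F(z) = tan z
for n in range(1, 7):
    F.append(sum(comb(n, k) * F[k] * F[n - k] for k in range(n + 1)))

def up_down(n):
    return sum(1 for p in permutations(range(1, n + 1))
               if all((p[i] < p[i + 1]) == (i % 2 == 0) for i in range(n - 1)))

assert F[:8] == [0, 1, 0, 2, 0, 16, 0, 272]
assert [up_down(n) for n in (3, 5)] == [2, 16]
```

The counts 2 and 16 match the permutations 132, 231 and the sixteen listed above.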
## Exercises
7.52 Verify formula (3) for \( B\left( z\right) \) .
7.53 Compute the ordinary generating function \( B\left( z\right) \) corresponding to \( a = s - 1, b = u \) . Specialize to the Schröder and Riordan numbers.
\( \vartriangleright \) 7.54 Prove the recurrence for the Schröder numbers in the text, and the identity \( {\operatorname{Sch}}_{n} = \mathop{\sum }\limits_{{k \geq 0}}\left( \begin{matrix} {2n} - k \\ k \end{matrix}\right) {C}_{n - k},{C}_{k} = \) Catalan. Show \( {\operatorname{Sch}}_{n} \equiv 2\left( {\;\operatorname{mod}\;4}\right) \) for \( n \geq 1 \) .
7.55 Prove Lemma 7.19.
7.56 Establish for the central trinomial numbers the formula \( \mathrm{{Tr}}_{n} = \mathop{\sum }\limits_{{k = 0}}^{n}\left( \begin{matrix} {2k} \\ k \end{matrix}\right) \left( \begin{matrix} n \\ {2k} \end{matrix}\right) \) .
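The formula of Exercise 7.56 admits a quick machine check (our sketch): the central trinomial number is the middle coefficient of \( {\left( 1 + x + {x}^{2}\right) }^{n} \), which we compare with the binomial sum:

```python
# Central trinomial number Tr_n = [x^n] (1 + x + x^2)^n, compared with
# the claimed identity Tr_n = sum_k C(2k, k) * C(n, 2k).
from math import comb

def trinomial_middle(n):
    poly = [1]
    for _ in range(n):                      # multiply by 1 + x + x^2
        poly = [sum(poly[i - d] for d in range(3) if 0 <= i - d < len(poly))
                for i in range(len(poly) + 2)]
    return poly[n]

for n in range(10):
    assert trinomial_middle(n) == sum(comb(2 * k, k) * comb(n, 2 * k)
                                      for k in range(n // 2 + 1))
```

The first values are \( 1, 1, 3, 7, 19, 51, \ldots \)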
7.57 Show the equality \( {B}_{n} = \mathop{\sum }\limits_{{\pi \in \bar{S}\left( n\right) }}W\left( \pi \right) \) in (13).
\( \vartriangleright \) 7.58 Check the equality \( {B}_{n} = \mathop{\sum }\limits_{{\pi \in S\left( n\right) }}\bar{W}\left( \pi \right) \) in (14).
7.59 Suppose \( B\left( z\right) = \mathop{\sum }\limits_{{n \geq 0}}{B}_{n}{z}^{n} \) for the sequences \( \sigma = \left( {0, s}\right) ,\tau \equiv 1 \), and \( \widetilde{B}\left( z\right) = \mathop{\sum }\limits_{{n \geq 0}}{\widetilde{B}}_{n}{z}^{n} \) for \( \sigma = \left( {s, s}\right) ,\tau \equiv 1 \) . Prove that \( {\widetilde{B}}_{n} = {B}_{n} + s{B}_{n + 1} \), and deduce \( {M}_{n} = {R}_{n} + {R}_{n + 1},{C}_{n} = F{i}_{n - 1} + {2F}{i}_{n}\left( {n \geq 1}\right) \) .
\( \vartriangleright {7.60} \) Consider the ballot numbers \( {b}_{n, k} \) of Exercises 7.14 ff. We have proved \( {B}_{k}\left( z\right) = {z}^{k}C{\left( z\right) }^{k + 1} \) . Use this and the previous exercise to show that \( F{i}_{n + 1} = \mathop{\sum }\limits_{{k\text{ odd }}}{b}_{n, k} \) .
7.61 Let \( \tau \equiv 1 \), and consider the sequences \( \sigma = \left( {s, s}\right) ,\sigma = \left( {s + 1, s}\right) \) , \( \sigma = \left( {s - 1, s}\right) \) . Prove the recurrences
\[
{B}_{n + 1}^{\left( s, s\right) } = s{B}_{n}^{\left( s, s\right) } + \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{B}_{k}^{\left( s, s\right) }{B}_{n - 1 - k}^{\left( s, s\right) },
\]
\[
{B}_{n + 1}^{\left( s + 1, s\right) } = {\left( s + 2\right) }^{n + 1} - \mathop{\sum }\limits_{{k = 0}}^{n}{B}_{k}^{\left( s + 1, s\right) }{B}_{n - k}^{\left( s + 1, s\right) },
\]
\[
{B}_{n + 1}^{\left( s - 1, s\right) } = {\left( s - 2\right) }^{n + 1} + \mathop{\sum }\limits_{{k = 0}}^{n}{B}_{k}^{\left( s - 1, s\right) }{B}_{n - k}^{\left( s - 1, s\right) }.
\]
Apply these recurrences to known sequences.
\( \vartriangleright \) 7.62 Let \( {B}^{\left( s, s\right) }\left( z\right) ,{B}^{\left( s + 1, s\right) }\left( z\right) \), and \( {B}^{\left( s - 1, s\right) }\left( z\right) \) be the series as in the previous exercise, and show that
\[
{B}^{\left( s + 1, s\right) }\left( z\right) = \frac{1}{1 - \left( {s + 2}\right) z}\left( {1 - z{B}^{\left( s, s\right) }\left( z\right) }\right) ,
\]
\[
{B}^{\left( s - 1, s\right) }\left( z\right) = \frac{1}{1 - \left( {s - 2}\right) z}\left( {1 + z{B}^{\left( s, s\right) }\left( z\right) }\right) .
\]
Verify the examples: \( \left( \begin{matrix} n \\ \lfloor n/2\rfloor \end{matrix}\right) = {2}^{n} - \mathop{\sum }\limits_{{k = 0}}^{{\lfloor \left( {n - 1}\right) /2\rfloor }}{2}^{n - 1 - {2k}}{C}_{k},{R}_{n} = {M}_{n - 1} - \) \( {M}_{n - 2} \pm \cdots + {\left( -1\right) }^{n}{M}_{1},{R}_{n} = \) Riordan, \( {M}_{n} = \) Motzkin.
7.63 Use the previous exercise to deduce an identity relating the Catalan and Motzkin numbers: \( \mathop{\sum }\limits_{{k = 1}}^{n}\left( {{\left( -1\right) }^{k}\left( \begin{array}{l} n \\ k \end{array}\right) {C}_{k} + {M}_{k - 1}}\right) {3}^{n - k} = 0\left( {n \geq 1}\right) \) .
7.64 Consider the case \( \sigma = \left( {a, s}\right) ,\tau = \left( {b, u}\right) \) with \( a = s, b = {2u} \) , treated in the text, with the trinomials, \( \left( \begin{matrix} {2n} \\ n \end{matrix}\right) \), and the Delannoy numbers as special instances. Prove the general recurrence \( \left( {n + 1}\right) {B}_{n + 1} = \) \( s\left( {{2n} + 1}\right) {B}_{n} + n\left( {{2b} - {s}^{2}}\right) {B}_{n - 1} \) for the corresponding Catalan numbers, and deduce the formula \( {B}_{n} = \frac{1}{{2}^{n}}\mathop{\sum }\limits_{{k \geq 0}}\left( \begin{matrix} {2k} \\ k \end{matrix}\right) \left( \begin{matrix} k \\ n - k \end{matrix}\right) {s}^{{2k} - n}{\left( 2b - {s}^{2}\right) }^{n - k} \) .
Hint: Proceed as for the Delannoy numbers in Section 3.2.
\( \vartriangleright \) 7.65 Consider \( \sigma \equiv s,\tau \equiv u \), and prove the following recurrence for the Catalan numbers: \( \left( {n + 2}\right) {B}_{n} = s\left( {{2n} + 1}\right) {B}_{n - 1} + \left( {{4u} - {s}^{2}}\right) \left( {n - 1}\right) {B}_{n - 2}\left( {n \geq 1}\right) \) . Deduce that the sequence
Theorem 4.8.4 ([Lit 50]) We have
\[
\mathop{\sum }\limits_{\lambda }{s}_{\lambda }\left( \mathbf{x}\right) {s}_{\lambda }\left( \mathbf{y}\right) = \mathop{\prod }\limits_{{i, j \geq 1}}\frac{1}{1 - {x}_{i}{y}_{j}}.
\]
Note that just as the Robinson-Schensted correspondence is gotten by restricting Knuth's generalization to the case where all entries are distinct, we can obtain
\[
n! = \mathop{\sum }\limits_{{\lambda \vdash n}}{\left( {f}^{\lambda }\right) }^{2}
\]
by taking the coefficient of \( {x}_{1}\cdots {x}_{n}{y}_{1}\cdots {y}_{n} \) on both sides of Theorem 4.8.4.
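The identity \( n! = \mathop{\sum }\limits_{{\lambda \vdash n}}{\left( {f}^{\lambda }\right) }^{2} \) invites a quick check (our own sketch, computing \( {f}^{\lambda } \) by the Frame–Robinson–Thrall hook length formula rather than by the correspondence itself):

```python
# Verify n! = sum over partitions lambda of n of (f^lambda)^2, with
# f^lambda computed by the hook length formula.
from math import factorial

def partitions(n, largest=None):
    if n == 0:
        yield ()
        return
    if largest is None:
        largest = n
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def f_lambda(shape):
    n = sum(shape)
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1   # arm + leg + 1
    return factorial(n) // hooks

assert f_lambda((2, 1)) == 2
for n in range(1, 8):
    assert sum(f_lambda(lam) ** 2 for lam in partitions(n)) == factorial(n)
```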
Because the semistandard condition does not treat rows and columns uniformly, there is a second algorithm related to the one just given. It is called the dual map. (This is a different notion of "dual" from the one introduced in Chapter 3, e.g., Definition 3.6.8.) Let \( {\mathrm{{GP}}}^{\prime } \) denote all those permutations in GP where no column is repeated. These correspond to the \( 0 - 1 \) matrices in Mat.
Theorem 4.8.5 ([Knu 70]) There is a bijection between \( \pi \in {\mathrm{{GP}}}^{\prime } \) and pairs \( \left( {T, U}\right) \) of tableaux of the same shape with \( T,{U}^{t} \) semistandard,
\[
\pi \overset{R - S - {K}^{\prime }}{ \leftrightarrow }\left( {T, U}\right)
\]
such that \( \operatorname{cont}\check{\pi } = \operatorname{cont}T \) and \( \operatorname{cont}\widehat{\pi } = \operatorname{cont}U \) .
Proof. " \( \pi \overset{\mathrm{R} - \mathrm{S} - {\mathrm{K}}^{\prime }}{ \rightarrow }\left( {T, U}\right) \) " We merely replace row insertion in the R-S-K correspondence with a modification of column insertion. This is done by insisting that at each stage the element entering a column displaces the smallest entry greater than or equal to it. For example,
\[
{c}_{2}\left( \begin{array}{lll} 1 & 1 & 3 \\ 2 & 3 & \\ 3 & & \end{array}\right) = \begin{array}{llll} 1 & 1 & 3 & 3 \\ 2 & 2 & & \\ 3 & & & \end{array}.
\]
Note that this is exactly what is needed to ensure that \( T \) will be column-strict. The fact that \( U \) will be row-strict follows because a subsequence of \( \check{\pi } \) corresponding to equal elements in \( \widehat{\pi } \) must be strictly increasing (since \( \pi \in {\mathrm{{GP}}}^{\prime } \) ).
" \( \left( {T, U}\right) \overset{\mathrm{K} - \mathrm{S} - {\mathrm{R}}^{\prime }}{ \rightarrow }\pi \) " The details of the step-by-step reversal and verification that \( \pi \in {\mathrm{{GP}}}^{\prime } \) are routine. ∎
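The modified column insertion used in this proof can be sketched directly. The following Python code (helper name is illustrative) has the entering element displace the smallest entry greater than or equal to it, and reproduces the displayed example:

```python
def modified_column_insert(rows, x):
    # Insert x by modified column insertion: in each column, x displaces
    # the smallest entry greater than or equal to it.
    ncols = max(len(r) for r in rows)
    cols = [[r[j] for r in rows if len(r) > j] for j in range(ncols)]
    for col in cols:
        i = next((k for k, v in enumerate(col) if v >= x), None)
        if i is None:
            col.append(x)      # x comes to rest at the bottom of this column
            x = None
            break
        col[i], x = x, col[i]  # displace, and carry the bumped entry rightward
    if x is not None:
        cols.append([x])       # start a new column
    nrows = max(len(c) for c in cols)
    return [[c[i] for c in cols if len(c) > i] for i in range(nrows)]

# the example in the text: inserting 2 into the tableau with rows 113 / 23 / 3
assert modified_column_insert([[1, 1, 3], [2, 3], [3]], 2) == [[1, 1, 3, 3], [2, 2], [3]]
```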
Taking generating functions with the same weights as before yields the dual Cauchy identity.
Theorem 4.8.6 ([Lit 50]) We have
\[
\mathop{\sum }\limits_{\lambda }{s}_{\lambda }\left( \mathbf{x}\right) {s}_{{\lambda }^{\prime }}\left( \mathbf{y}\right) = \mathop{\prod }\limits_{{i, j \geq 1}}\left( {1 + {x}_{i}{y}_{j}}\right)
\]
where \( {\lambda }^{\prime } \) is the conjugate of \( \lambda \) . ∎
Most of the results of Chapter 3 about Robinson-Schensted have generalizations for the Knuth map. We survey a few of them next.
Taking the inverse of a permutation corresponds to transposing the associated permutation matrix. So the following strengthening of Schützenberger's Theorem 3.6.6 should come as no surprise.
Theorem 4.8.7 If \( M \in \operatorname{Mat} \) and \( M\overset{\mathrm{R} - \mathrm{S} - \mathrm{K}}{ \leftrightarrow }\left( {T, U}\right) \), then
\[
{M}^{t}\overset{\mathrm{R} - \mathrm{S} - \mathrm{K}}{ \leftrightarrow }\left( {U, T}\right) \text{. ∎}
\]
We can also deal with the reversal of a generalized permutation. Row and modified column insertion commute as in Proposition 3.2.2. So we obtain the following analogue of Theorem 3.2.3.
Theorem 4.8.8 If \( \pi \in \mathrm{{GP}} \), then \( T\left( {\check{\pi }}^{r}\right) = {T}^{\prime }\left( \check{\pi }\right) \), where \( {T}^{\prime } \) denotes modified column insertion. ∎
The Knuth relations become
\[
\text{replace}{xzy}\text{by}{zxy}\text{if}x \leq y < z
\]
and
\[
\text{replace}{yxz}\text{by}{yzx}\text{if}x < y \leq z\text{.}
\]
Theorem 3.4.3 remains true.
Theorem 4.8.9 ([Knu 70]) A pair of generalized permutations are Knuth equivalent if and only if they have the same \( T \) -tableau. ∎
Putting together the last two results, we can prove a stronger version of Greene's theorem.
Theorem 4.8.10 ([Gre 74]) Given \( \pi \in \mathrm{{GP}} \), let \( \operatorname{sh}T\left( \pi \right) = \left( {{\lambda }_{1},{\lambda }_{2},\ldots ,{\lambda }_{l}}\right) \) with conjugate \( \left( {{\lambda }_{1}^{\prime },{\lambda }_{2}^{\prime },\ldots ,{\lambda }_{m}^{\prime }}\right) \) . Then for any \( k \), \( {\lambda }_{1} + {\lambda }_{2} + \cdots + {\lambda }_{k} \) and \( {\lambda }_{1}^{\prime } + {\lambda }_{2}^{\prime } + \cdots + {\lambda }_{k}^{\prime } \) give the lengths of the longest weakly \( k \) -increasing and strictly \( k \) -decreasing subsequences of \( \pi \), respectively. ∎
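For \( k = 1 \) this says that the first row length of \( T(\pi) \) equals the length of the longest weakly increasing subsequence. The sketch below (Python; row insertion bumps the smallest entry strictly greater than the inserted letter, and the subsequence length is found by a naive dynamic program) checks this on a sample word; the function names are illustrative:

```python
from bisect import bisect_right

def rsk_shape(word):
    # Row-insert each letter and return the shape of the resulting tableau.
    P = []
    for x in word:
        r = 0
        while x is not None:
            if r == len(P):
                P.append([x])
                x = None
            else:
                row = P[r]
                i = bisect_right(row, x)   # first entry strictly greater than x
                if i == len(row):
                    row.append(x)
                    x = None
                else:
                    row[i], x = x, row[i]  # bump and continue in the next row
                    r += 1
    return [len(row) for row in P]

def longest_weakly_increasing(word):
    # O(n^2) dynamic program for the longest weakly increasing subsequence
    dp = []
    for i, w in enumerate(word):
        dp.append(1 + max((dp[j] for j in range(i) if word[j] <= w), default=0))
    return max(dp, default=0)

word = [1, 2, 1, 3, 2, 2, 1]
assert rsk_shape(word)[0] == longest_weakly_increasing(word) == 4
```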
For the jeu de taquin, we need to break ties when the two elements of \( T \) adjacent to the cell to be filled are equal. The correct choice is forced on us by semistandardness. In this case, both the forward and backward slides always move the element that changes rows rather than the one that would change columns. The fundamental results of Schützenberger continue to hold (see Theorems 3.7.7 and 3.7.8).
Theorem 4.8.11 ([Scü 76]) Let \( T \) and \( U \) be skew semistandard tableaux. Then \( T \) and \( U \) have Knuth equivalent row words if and only if they are connected by a sequence of slides. Furthermore, any such sequence bringing them to normal shape results in the first output tableau of the Robinson-Schensted-Knuth correspondence. ∎
Finally, we can define dual equivalence, \( \cong \), in exactly the same way as before (Definition 3.8.2). The result concerning this relation, analogous to Proposition 3.8.1 and Theorem 3.8.8, needed for the Littlewood-Richardson rule is the following.
Theorem 4.8.12 If \( T \) and \( U \) are semistandard of the same normal shape, then \( T \cong U \) . ∎
## 4.9 The Littlewood-Richardson Rule
The Littlewood-Richardson rule gives a combinatorial interpretation to the coefficients of the product \( {s}_{\mu }{s}_{\nu } \) when expanded in terms of the Schur basis. This can be viewed as a generalization of Young's rule, as follows.
We know (Theorem 2.11.2) that
\[
{M}^{\mu } \cong {\bigoplus }_{\lambda }{K}_{\lambda \mu }{S}^{\lambda }
\]
(4.25)
where \( {K}_{\lambda \mu } \) is the number of semistandard tableaux of shape \( \lambda \) and content \( \mu \) . We can look at this formula from two other perspectives: in terms of characters or symmetric functions.
If \( \mu \vdash n \), then \( {M}^{\mu } \) is a module for the induced character \( {1}_{{\mathcal{S}}_{\mu }}{ \uparrow }^{{\mathcal{S}}_{n}} \) . But from the definitions of the trivial character and the tensor product, we have
\[
{1}_{{\mathcal{S}}_{\mu }} = {1}_{{\mathcal{S}}_{{\mu }_{1}}} \otimes {1}_{{\mathcal{S}}_{{\mu }_{2}}} \otimes \cdots \otimes {1}_{{\mathcal{S}}_{{\mu }_{m}}}
\]
where \( \mu = \left( {{\mu }_{1},{\mu }_{2},\ldots ,{\mu }_{m}}\right) \) . Using the product in the class function algebra \( R \) (and the transitivity of induction, Exercise 18 of Chapter 1), we can rewrite (4.25) as
\[
{1}_{{\mathcal{S}}_{{\mu }_{1}}} \cdot {1}_{{\mathcal{S}}_{{\mu }_{2}}}\cdots {1}_{{\mathcal{S}}_{{\mu }_{m}}} = \mathop{\sum }\limits_{\lambda }{K}_{\lambda \mu }{\chi }^{\lambda }
\]
To bring in symmetric functions, apply the characteristic map to the previous equation (remember that the trivial representation corresponds to an irreducible whose diagram has only one row):
\[
{s}_{\left( {\mu }_{1}\right) }{s}_{\left( {\mu }_{2}\right) }\cdots {s}_{\left( {\mu }_{m}\right) } = \mathop{\sum }\limits_{\lambda }{K}_{\lambda \mu }{s}_{\lambda }
\]
For example,
\[
{M}^{\left( 3,2\right) } = {S}^{\left( 3,2\right) } + {S}^{\left( 4,1\right) } + {S}^{\left( 5\right) }
\]
with the relevant tableaux being
\[
T : \begin{array}{lll} 1 & 1 & 1 \\ 2 & 2 & \end{array},\;\begin{array}{llll} 1 & 1 & 1 & 2 \\ 2 & & & \end{array},\;\begin{array}{lllll} 1 & 1 & 1 & 2 & 2 \end{array}.
\]
This can be rewritten as
\[
{1}_{{\mathcal{S}}_{3}} \cdot {1}_{{\mathcal{S}}_{2}} = {\chi }^{\left( 3,2\right) } + {\chi }^{\left( 4,1\right) } + {\chi }^{\left( 5\right) }
\]
or
\[
{s}_{\left( 3\right) }{s}_{\left( 2\right) } = {s}_{\left( 3,2\right) } + {s}_{\left( 4,1\right) } + {s}_{\left( 5\right) }.
\]
What happens if we try to compute the expansion
\[
{s}_{\mu }{s}_{\nu } = \mathop{\sum }\limits_{\lambda }{c}_{\mu \nu }^{\lambda }{s}_{\lambda }
\]
(4.26)
where \( \mu \) and \( \nu \) are arbitrary partitions? Equivalently, we are asking for the multiplicities of the irreducibles in
\[
{\chi }^{\mu } \cdot {\chi }^{\nu } = \mathop{\sum }\limits_{\lambda }{c}_{\mu \nu }^{\lambda }{\chi }^{\lambda }
\]
or
\[
\left( {{S}^{\mu } \otimes {S}^{\nu }}\right) { \uparrow }^{{\mathcal{S}}_{n}} = {\bigoplus }_{\lambda }{c}_{\mu \nu }^{\lambda }{S}^{\lambda }
\]
where \( \left| \mu \right| + \left| \nu \right| = n \) . The \( {c}_{\mu \nu }^{\lambda } \) are called the Littlewood-Richardson coefficients. The importance of the Littlewood-Richardson rule that follows is that it gives a way to interpret these coefficients combinatorially, just as Young's rule does for one-rowed partitions.
We need to explore one other place where these coefficients arise: in the expansion of skew Schur functions. Obviously, the definition of \( {s}_{\lambda }\left( \mathbf{x}\right) \) given in Section 4.4 makes sense if \( \lambda \) is replaced by a skew diagram. Furthermore, the resulting function \( {s}_{\lambda /\mu }\left( \mathbf{x}\right) \) is still symmetric by the same reasoning as in Proposition 4.4.2. We can derive an implicit formula for these new Schur functions in terms of the old ones by introducing another set of indeterminates \( \mathbf{y} = \left( {{y}_{1},{y}_{2},\ldots }\right) \) .
Proposition
Proposition 3.4. Let \( B \) be a \( \Lambda \) -module and \( \left\{ {A}_{j}\right\}, j \in J \) be a family of \( \Lambda \) - modules. Then there is an isomorphism
\[
\eta : {\operatorname{Hom}}_{\Lambda }\left( {{\bigoplus }_{j \in J}{A}_{j}, B}\right) \rightarrow \mathop{\prod }\limits_{{j \in J}}{\operatorname{Hom}}_{\Lambda }\left( {{A}_{j}, B}\right) .
\]
Proof. The proof reveals that this theorem is merely a restatement of the universal property of the direct sum. For \( \psi : {\bigoplus }_{j \in J}{A}_{j} \rightarrow B \), define \( \eta \left( \psi \right) = {\left( \psi {\iota }_{j} : {A}_{j} \rightarrow B\right) }_{j \in J} \) . Conversely, a family \( \left\{ {{\psi }_{j} : {A}_{j} \rightarrow B}\right\}, j \in J \), gives rise to a unique map \( \psi : {\bigoplus }_{j \in J}{A}_{j} \rightarrow B \) . The projections \( {\pi }_{j} : \mathop{\prod }\limits_{{j \in J}}{\operatorname{Hom}}_{\Lambda }\left( {{A}_{j}, B}\right) \rightarrow {\operatorname{Hom}}_{\Lambda }\left( {{A}_{j}, B}\right) \) are given by \( {\pi }_{j}\eta = {\operatorname{Hom}}_{\Lambda }\left( {{\iota }_{j}, B}\right) \) . Analogously one proves:
Proposition 3.5. Let \( A \) be a \( \Lambda \) -module and \( \left\{ {B}_{j}\right\}, j \in J \) be a family of \( \Lambda \) -modules. Then there is an isomorphism
\[
\zeta : {\operatorname{Hom}}_{\Lambda }\left( {A,\mathop{\prod }\limits_{{j \in J}}{B}_{j}}\right) \overset{ \sim }{ \rightarrow }\mathop{\prod }\limits_{{j \in J}}{\operatorname{Hom}}_{\Lambda }\left( {A,{B}_{j}}\right) .
\]
The proof is left to the reader. \( ▱ \)
Exercises:
3.1. Show that there is a canonical map \( \sigma : {\bigoplus }_{j}{A}_{j} \rightarrow \mathop{\prod }\limits_{j}{A}_{j} \) .
3.2. Show how a map from \( {\bigoplus }_{i = 1}^{m}{A}_{i} \) to \( {\bigoplus }_{j = 1}^{n}{B}_{j} \) may be represented by a matrix
\[
\Phi = \left( {\varphi }_{ij}\right)
\]
where \( {\varphi }_{ij} : {A}_{i} \rightarrow {B}_{j} \) . Show that, if we write the composite of \( \varphi : A \rightarrow B \) and \( \psi : B \rightarrow C \) as \( {\varphi \psi } \) (not \( {\psi \varphi } \) ), then the composite of
\[
\Phi = \left( {\varphi }_{ij}\right) : {\bigoplus }_{i = 1}^{m}{A}_{i} \rightarrow {\bigoplus }_{j = 1}^{n}{B}_{j}
\]
and
\[
\Psi = \left( {\psi }_{jk}\right) : {\bigoplus }_{j = 1}^{n}{B}_{j} \rightarrow {\bigoplus }_{k = 1}^{q}{C}_{k}
\]
is the matrix product \( {\Phi \Psi } \) .
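The convention of Exercise 3.2 can be illustrated over \( \Lambda = \mathbb{Z} \), where a homomorphism \( \mathbb{Z} \rightarrow \mathbb{Z} \) is multiplication by an integer. The Python sketch below (hypothetical helper names) checks that left-to-right composition of maps corresponds to the matrix product \( \Phi \Psi \):

```python
def apply_map(Phi, a):
    # Phi[i][j] is an integer representing phi_ij : A_i -> B_j (a multiplication
    # map Z -> Z); the image of a = (a_1, ..., a_m) has j-th coordinate
    # sum_i phi_ij(a_i).
    m, n = len(Phi), len(Phi[0])
    return tuple(sum(Phi[i][j] * a[i] for i in range(m)) for j in range(n))

def mat_mul(Phi, Psi):
    m, n, q = len(Phi), len(Psi), len(Psi[0])
    return [[sum(Phi[i][j] * Psi[j][k] for j in range(n)) for k in range(q)]
            for i in range(m)]

Phi = [[1, 2], [3, 4]]   # a map Z^2 -> Z^2
Psi = [[0, 1], [1, 1]]   # a map Z^2 -> Z^2
a = (1, 1)
# with composites written left to right (first Phi, then Psi), the composite
# is represented by the matrix product Phi * Psi
assert apply_map(Psi, apply_map(Phi, a)) == apply_map(mat_mul(Phi, Psi), a) == (6, 10)
```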
3.3. Show that if, in (1.2), \( {\alpha }^{\prime } \) is an isomorphism, then the sequence
\[
0 \rightarrow A\xrightarrow[]{\{ \varepsilon ,\alpha \} }{A}^{\prime \prime } \oplus B\xrightarrow[]{\left\langle {\alpha }^{\prime \prime }, - {\varepsilon }^{\prime }\right\rangle }{B}^{\prime \prime } \rightarrow 0
\]
is exact. State and prove the converse.
3.4. Carry out a similar exercise to the one above, assuming \( {\alpha }^{\prime \prime } \) is an isomorphism.
3.5. Use the universal property of the direct sum to show that
\[
\left( {{A}_{1} \oplus {A}_{2}}\right) \oplus {A}_{3} \cong {A}_{1} \oplus \left( {{A}_{2} \oplus {A}_{3}}\right) .
\]
3.6. Show that \( {\mathbb{Z}}_{m} \oplus {\mathbb{Z}}_{n} \cong {\mathbb{Z}}_{mn} \) if and only if \( m \) and \( n \) are mutually prime.
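For Exercise 3.6, one direction is transparent computationally: \( {\mathbb{Z}}_{m} \oplus {\mathbb{Z}}_{n} \) is cyclic exactly when some element has order \( mn \), and the maximal order of an element is \( \operatorname{lcm}\left( {m, n}\right) \). A quick Python check (illustrative helper name; requires Python 3.9+ for `math.lcm`):

```python
from math import gcd, lcm

def is_cyclic_sum(m, n):
    # Z_m + Z_n is cyclic iff some element has order mn; the maximal
    # element order is lcm(m, n), so cyclic iff lcm(m, n) == mn.
    return lcm(m, n) == m * n

assert all(is_cyclic_sum(m, n) == (gcd(m, n) == 1)
           for m in range(1, 12) for n in range(1, 12))
```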
3.7. Show that the following statements about the exact sequence
\[
0 \rightarrow {A}^{\prime }\overset{{\alpha }^{\prime }}{ \rightarrow }A\overset{{\alpha }^{\prime \prime }}{ \rightarrow }{A}^{\prime \prime } \rightarrow 0
\]
of \( \Lambda \) -modules are equivalent:
(i) there exists \( \mu : {A}^{\prime \prime } \rightarrow A \) with \( {\alpha }^{\prime \prime }\mu = 1 \) on \( {A}^{\prime \prime } \) ;
(ii) there exists \( \varepsilon : A \rightarrow {A}^{\prime } \) with \( \varepsilon {\alpha }^{\prime } = 1 \) on \( {A}^{\prime } \) ;
(iii) \( 0 \rightarrow {\operatorname{Hom}}_{\Lambda }\left( {B,{A}^{\prime }}\right) \overset{{\alpha }_{ * }^{\prime }}{ \rightarrow }{\operatorname{Hom}}_{\Lambda }\left( {B, A}\right) \overset{{\alpha }_{ * }^{\prime \prime }}{ \rightarrow }{\operatorname{Hom}}_{\Lambda }\left( {B,{A}^{\prime \prime }}\right) \rightarrow 0 \) is exact for all \( B \) ;
(iv) \( 0 \rightarrow {\operatorname{Hom}}_{\Lambda }\left( {{A}^{\prime \prime }, C}\right) \overset{{\alpha }^{\prime \prime * }}{ \rightarrow }{\operatorname{Hom}}_{\Lambda }\left( {A, C}\right) \overset{{\alpha }^{\prime * }}{ \rightarrow }{\operatorname{Hom}}_{\Lambda }\left( {{A}^{\prime }, C}\right) \rightarrow 0 \) is exact for all \( C \) ;
(v) there exists \( \mu : {A}^{\prime \prime } \rightarrow A \) such that \( \left\langle {{\alpha }^{\prime },\mu }\right\rangle : {A}^{\prime } \oplus {A}^{\prime \prime } \rightarrow A \) is an isomorphism.
3.8. Show that if \( 0 \rightarrow {A}^{\prime }\overset{{\alpha }^{\prime }}{ \rightarrow }A\overset{{\alpha }^{\prime \prime }}{ \rightarrow }{A}^{\prime \prime } \rightarrow 0 \) is pure and if \( {A}^{\prime \prime } \) is a direct sum of cyclic groups then statement (i) above holds (see Exercise 2.7).
## 4. Free and Projective Modules
Let \( A \) be a \( \Lambda \) -module and let \( S \) be a subset of \( A \) . We consider the set \( {A}_{0} \) of all elements \( a \in A \) of the form \( a = \mathop{\sum }\limits_{{s \in S}}{\lambda }_{s}s \) where \( {\lambda }_{s} \in \Lambda \) and \( {\lambda }_{s} \neq 0 \) for only a finite number of elements \( s \in S \) . It is trivially seen that \( {A}_{0} \) is a submodule of \( A \) ; hence it is the smallest submodule of \( A \) containing \( S \) .
If for the set \( S \) the submodule \( {A}_{0} \) is the whole of \( A \), we shall say that \( S \) is a set of generators of \( A \) . If \( A \) admits a finite set of generators it is said to be finitely generated. A set \( S \) of generators of \( A \) is called a basis of \( A \) if every element \( a \in A \) may be expressed uniquely in the form \( a = \mathop{\sum }\limits_{{s \in S}}{\lambda }_{s}s \)
with \( {\lambda }_{s} \in \Lambda \) and \( {\lambda }_{s} \neq 0 \) for only a finite number of elements \( s \in S \) . It is readily seen that a set \( S \) of generators is a basis if and only if it is linearly independent, that is, if \( \mathop{\sum }\limits_{{s \in S}}{\lambda }_{s}s = 0 \) implies \( {\lambda }_{s} = 0 \) for all \( s \in S \) . The reader
should note that not every module possesses a basis.
Definition. If \( S \) is a basis of the \( \Lambda \) -module \( P \), then \( P \) is called free on the set \( S \) . We shall call \( P \) free if it is free on some subset.
Proposition 4.1. Suppose the \( \Lambda \) -module \( P \) is free on the set \( S \) . Then \( P \cong {\bigoplus }_{s \in S}{\Lambda }_{s} \) where \( {\Lambda }_{s} = \Lambda \) as a left module for \( s \in S \) . Conversely, \( {\bigoplus }_{s \in S}{\Lambda }_{s} \) is free on the set \( \left\{ {{1}_{{\Lambda }_{s}}, s \in S}\right\} \) .
Proof. We define \( \varphi : P \rightarrow {\bigoplus }_{s \in S}{\Lambda }_{s} \) as follows: Every element \( a \in P \) is expressed uniquely in the form \( a = \mathop{\sum }\limits_{{s \in S}}{\lambda }_{s}s \) ; set \( \varphi \left( a\right) = {\left( {\lambda }_{s}\right) }_{s \in S} \) . Conversely, for \( s \in S \) define \( {\psi }_{s} : {\Lambda }_{s} \rightarrow P \) by \( {\psi }_{s}\left( {\lambda }_{s}\right) = {\lambda }_{s}s \) . By the universal property of the direct sum the family \( \left\{ {\psi }_{s}\right\}, s \in S \), gives rise to a map \( \psi = \left\langle {\psi }_{s}\right\rangle : {\bigoplus }_{s \in S}{\Lambda }_{s} \rightarrow P \) . It is readily seen that \( \varphi \) and \( \psi \) are inverse to each other. The remaining assertion immediately follows from the construction of the direct sum.
The next proposition yields a universal characterization of the free module on the set \( S \) .
Proposition 4.2. Let \( P \) be free on the set \( S \) . To every \( \Lambda \) -module \( M \) and to every function \( f \) from \( S \) into the set underlying \( M \), there is a unique \( \Lambda \) -module homomorphism \( \varphi : P \rightarrow M \) extending \( f \) .
Proof. Let \( f\left( s\right) = {m}_{s} \) . Set \( \varphi \left( a\right) = \varphi \left( {\mathop{\sum }\limits_{{s \in S}}{\lambda }_{s}s}\right) = \mathop{\sum }\limits_{{s \in S}}{\lambda }_{s}{m}_{s} \) . This obviously is the only homomorphism having the required property.
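The extension in Proposition 4.2 is easy to make concrete for \( \Lambda = \mathbb{Z} \) and \( M = {\mathbb{Z}}^{2} \), representing an element \( \mathop{\sum }_{s}{\lambda }_{s}s \) of the free module as a finite coefficient dictionary. A Python sketch (names are illustrative):

```python
def extend(f, coeffs):
    # The unique homomorphism extending f : S -> M (here M = Z^2) sends
    # sum_s lambda_s * s  to  sum_s lambda_s * f(s).
    result = (0, 0)
    for s, lam in coeffs.items():
        result = tuple(r + lam * c for r, c in zip(result, f[s]))
    return result

f = {'x': (1, 0), 'y': (2, 3)}   # an arbitrary function S -> Z^2
a = {'x': 2, 'y': -1}            # the element 2x - y of the free module on S
assert extend(f, a) == (0, -3)
```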
Proposition 4.3. Every \( \Lambda \) -module \( A \) is a quotient of a free module \( P \) .
Proof. Let \( S \) be a set of generators of \( A \) . Let \( P = {\bigoplus }_{s \in S}{\Lambda }_{s} \) with \( {\Lambda }_{s} = \Lambda \) and define \( \varphi : P \rightarrow A \) to be the extension of the function \( f \) given by \( f\left( {1}_{{\Lambda }_{s}}\right) = s \) . Trivially \( \varphi \) is surjective.
Proposition 4.4. Let \( P \) be a free \( \Lambda \) -module. To every surjective homomorphism \( \varepsilon : B \rightarrow C \) of \( \Lambda \) -modules and to every homomorphism \( \gamma : P \rightarrow C \) there exists a homomorphism \( \beta : P \rightarrow B \) such that \( {\varepsilon \beta } = \gamma \) .
Proof. Let \( P \) be free on \( S \) . Since \( \varepsilon \) is surjective we can find elements \( {b}_{s} \in B, s \in S \) with \( \varepsilon \left( {b}_{s}\right) = \gamma \left( s\right), s \in S \) . Define \( \beta \) as the extension of the function \( f : S \rightarrow B \) given by \( f\left( s\right) = {b}_{s}, s \in S \) . By the uniqueness part of Proposition 4.2 we conclude that \( {\varepsilon \beta } = \gamma \) .
To emphasize the importance of the property proved in Proposition 4.4 we make the following remark : Let \( A\overset{\mu }{ \rightarrowtail }B\overset{\varepsilon }{ \twoheadrightarrow }C \) be a short exact sequence of \( \Lambda \) -modules. If \( P \) is a free \( \Lambda \) -module Proposition 4.4 asserts that every homomorphism \( \gamma : P \rightarrow C \) is induced by a homomorphism \( \beta : P \rightarrow B \) . Hence using Theorem 2.1 we can conclude that the induced sequence
\[
0 \rightarrow {\operatorname{Hom}}_{A}\left( {P,
Proposition 15.36. Let the notation be as in Proposition 13.32. Then \( {\varepsilon }_{j}{\mathcal{X}}_{\infty } \) has no nonzero finite submodules.
Finally, we arrive at the primary goal of this section.
Proposition 15.37. Suppose \( {\varepsilon }_{i}X \) has characteristic polynomial \( f\left( T\right) \) . Then
\[
{\operatorname{Hom}}_{{\mathbb{Z}}_{p}}\left( {\mathop{\lim }\limits_{ \rightarrow }{\varepsilon }_{i}{A}_{n},{\mathbb{Q}}_{p}/{\mathbb{Z}}_{p}}\right)
\]
has characteristic polynomial \( f\left( {{\left( 1 + T\right) }^{-1} - 1}\right) \) and \( {\varepsilon }_{1 - i}\mathcal{X} \) has characteristic polynomial \( f\left( {\kappa {\left( 1 + T\right) }^{-1} - 1}\right) \) (where \( \kappa \in 1 + p{\mathbb{Z}}_{p} \) is defined by \( {\gamma }_{0}{\zeta }_{{p}^{n}} = {\zeta }_{{p}^{n}}^{\kappa } \) for all \( n \) ).
Proof. The first statement follows from Proposition 15.35 and the definition of the action of \( \Lambda \) on \( \widetilde{{\varepsilon }_{i}X} \) . For the second, note that if \( {\gamma }_{0} \) acts on \( {\varepsilon }_{i}X \) as \( \left( {1 + T}\right) \), then it acts on \( \widetilde{\left( {\varepsilon }_{i}X\right) }\left( 1\right) \simeq {\varepsilon }_{j}{\mathcal{X}}_{\infty } \) by \( \kappa {\left( 1 + T\right) }^{-1} \), from which the result follows.
## §15.6. Technical Results from Iwasawa Theory
In this section we prove some technical results from Iwasawa theory, following the treatment given in Rubin [7]. Proposition 15.38 will show that the cyclotomic units, the local units, and the class group are well behaved with respect to their \( \Lambda \) -structure. As usual, the global units and the global units modulo cyclotomic units are more troublesome; they will be treated in Propositions 15.40 and 15.42.
First, we review some notation:
\( p = \) an odd prime;
\( {\gamma }_{0} = \) a generator of \( \Gamma = \operatorname{Gal}\left( {\mathbb{Q}\left( {\zeta }_{{p}^{\infty }}\right) /\mathbb{Q}\left( {\zeta }_{p}\right) }\right) \) ;
\( {P}_{n} = {\left( 1 + T\right) }^{{p}^{n}} - 1 = {\gamma }_{0}^{{p}^{n}} - 1 \) (under the identification \( {\gamma }_{0} = 1 + T \) );
\( {\Gamma }_{n} = \) the subgroup of \( \Gamma \) of index \( {p}^{n} \) ;
\( {M}^{{\Gamma }_{n}} = \left\{ {m \in M \mid {\gamma }_{0}^{{p}^{n}}m = m}\right\} = \operatorname{Ker}\left( {M\overset{{P}_{n}}{ \rightarrow }M}\right) \), where \( M \) is a \( \Lambda \) -module;
\( M/{P}_{n} = M/{P}_{n}M = \operatorname{Coker}\left( {M\overset{{P}_{n}}{ \rightarrow }M}\right) \)
\( \chi = {\omega }^{j} = \) a nontrivial even character of \( \operatorname{Gal}\left( {\mathbb{Q}\left( {\zeta }_{p}\right) /\mathbb{Q}}\right) \) .
In the literature, \( M/{P}_{n} \) is often denoted \( {M}_{{\Gamma }_{n}} \) . It is the maximal quotient of \( M \) on which \( {\Gamma }_{n} \) acts trivially.
Proposition 15.38. Let \( {\bar{C}}_{1}^{n},{X}_{n},{U}_{1}^{n} \), and \( {\mathcal{X}}_{n} \) be as in Section 15.4. Then
\[
{\varepsilon }_{\chi }{\bar{C}}_{1}^{\infty }/{P}_{n} \simeq {\varepsilon }_{\chi }{\bar{C}}_{1}^{n}
\]
\[
{\varepsilon }_{\chi }X/{P}_{n} \simeq {\varepsilon }_{\chi }{X}_{n}
\]
\[
{\varepsilon }_{\chi }{U}_{1}^{\infty }/{P}_{n} \simeq {\varepsilon }_{\chi }{U}_{1}^{n}
\]
\[
{\varepsilon }_{\chi }{\mathcal{X}}_{\infty }/{P}_{n} \simeq {\varepsilon }_{\chi }{\mathcal{X}}_{n}
\]
Proof. The result for \( {A}_{n} \simeq {X}_{n} \) follows from Proposition 13.22 and that for \( {\bar{C}}_{1}^{\infty } \) from Proposition 8.11, as in Section 13.8. Section 13.5 treats \( {\mathcal{X}}_{\infty } \) . The result for \( {U}_{1}^{\infty } \) is Proposition 13.54.
Lemma 15.39. Let \( 0 \rightarrow {M}_{1} \rightarrow {M}_{2} \rightarrow {M}_{3} \rightarrow 0 \) be an exact sequence of \( \Lambda \) - modules.
(a) \( \operatorname{Ker}\left( {{M}_{1}/{P}_{n} \rightarrow {M}_{2}/{P}_{n}}\right) \simeq {M}_{3}^{{\Gamma }_{n}}/\operatorname{Im}{M}_{2}^{{\Gamma }_{n}} \) .
(b) If \( {M}_{3} \) is a finitely generated \( \Lambda \) -module and \( {M}_{3}/{P}_{n} \) is finite, then \( {M}_{3}^{{\Gamma }_{n}} \) is finite.
Proof. Consider the diagram

where the vertical maps are multiplication by \( {P}_{n} = {\gamma }_{0}^{{p}^{n}} - 1 \) . Note that \( {M}_{i}^{{\Gamma }_{n}} = \) \( \operatorname{Ker}\left( {{M}_{i}\overset{{P}_{n}}{ \rightarrow }{M}_{i}}\right) \) . The Snake Lemma yields an exact sequence
\[
{M}_{2}^{{\Gamma }_{n}} \rightarrow {M}_{3}^{{\Gamma }_{n}} \rightarrow {M}_{1}/{P}_{n} \rightarrow {M}_{2}/{P}_{n}
\]
This proves (a). Now assume that \( {M}_{3}/{P}_{n} \) is finite. The exact sequence
\[
0 \rightarrow {M}_{3}^{{\Gamma }_{n}} \rightarrow {M}_{3}\overset{{P}_{n}}{ \rightarrow }{M}_{3} \rightarrow {M}_{3}/{P}_{n} \rightarrow 0
\]
implies that \( \operatorname{char}\left( {M}_{3}^{{\Gamma }_{n}}\right) = \operatorname{char}\left( {{M}_{3}/{P}_{n}}\right) = 1 \) (use Proposition 15.22; note that \( {M}_{3} \) is \( \Lambda \) -torsion since \( {M}_{3}/{P}_{n} \) is finite). Therefore \( {M}_{3}^{{\Gamma }_{n}} \) is finite, by Lemma 15.17.
Proposition 15.40. There is an ideal \( \mathfrak{A} \subseteq \Lambda \) of finite index such that, for all \( n \) , \( \mathfrak{A} \) annihilates the kernel and cokernel of the natural map \( {\varepsilon }_{\chi }{\bar{E}}_{1}/{P}_{n} \rightarrow {\varepsilon }_{\chi }{\bar{E}}_{1}^{n} \) . The orders of these kernels and cokernels are bounded independently of \( n \) .
Proof. From Corollary 13.6, Lemmas 15.16 and 15.39, and Proposition 15.38, we have a commutative diagram

The second and third vertical maps are isomorphisms by Proposition 15.38. An easy diagram chase shows that
\[
\operatorname{Ker}{\phi }_{1} = \operatorname{Ker}{\pi }_{1}
\]
Since \( {\varepsilon }_{\chi }X/{P}_{n} \simeq {\varepsilon }_{\chi }{X}_{n} \) is finite, Lemma 15.39 implies that \( {\varepsilon }_{\chi }{X}^{{\Gamma }_{n}} \) is finite. Let \( {\varepsilon }_{\chi }{X}_{\text{finite }} \) be the maximum finite \( \Lambda \) -submodule of \( {\varepsilon }_{\chi }X \) . Then \( {\varepsilon }_{\chi }{X}^{{\Gamma }_{n}} \subseteq {\varepsilon }_{\chi }{X}_{\text{finite }} \) . By Lemma 15.39, Ker \( {\phi }_{1} \) is a subquotient of \( {\varepsilon }_{\chi }{X}_{\text{finite }} \) and hence is of finite order bounded independently of \( n \) .
Now consider the commutative diagram

We have
\[
\operatorname{Ker}{\pi }_{2} \simeq \operatorname{Ker}{\phi }_{2}
\]
We claim that \( {\varepsilon }_{\chi }\left( {{U}_{1}^{\infty }/{\bar{E}}_{1}^{\infty }}\right) /{P}_{n} \) is finite. Assuming this, we find that \( \operatorname{Ker}{\phi }_{2} \) is a subquotient of \( {\left( {\varepsilon }_{\chi }{U}_{1}^{\infty }/{\bar{E}}_{1}^{\infty }\right) }_{\text{finite }} \) . The Snake Lemma (to apply it we should replace \( {\varepsilon }_{\chi }{\bar{E}}_{1}^{\infty } \) by its quotient by \( \operatorname{Ker}{\phi }_{2} \) ) implies that \( \operatorname{Ker}{\pi }_{1} \simeq \operatorname{Coker}{\pi }_{2} \) , hence Coker \( {\pi }_{2} \simeq \operatorname{Ker}{\phi }_{1} \), which is a subquotient of \( {\varepsilon }_{\chi }{X}_{\text{finite }} \) . Lemma 15.17 implies that there is an ideal \( \mathfrak{A} \subseteq \Lambda \) of finite index that annihilates
\[
{\varepsilon }_{\chi }{X}_{\text{finite }} \oplus {\left( {\varepsilon }_{\chi }{U}_{1}^{\infty }/{\bar{E}}_{1}^{\infty }\right) }_{\text{finite }}.
\]
Putting all the above together, we find that \( \mathfrak{A} \) annihilates \( \operatorname{Ker}{\pi }_{2} \oplus \operatorname{Coker}{\pi }_{2} \) , as desired.
To prove the claim, note that we have a surjection \( {\varepsilon }_{\chi }\left( {{U}_{1}^{\infty }/{\bar{C}}_{1}^{\infty }}\right) /{P}_{n} \rightarrow \) \( {\varepsilon }_{\chi }\left( {{U}_{1}^{\infty }/{\bar{E}}_{1}^{\infty }}\right) /{P}_{n} \) . By Theorem 13.56, \( {\varepsilon }_{\chi }{U}_{1}^{\infty }/{\bar{C}}_{1}^{\infty } \simeq \Lambda /\left( {f}_{\chi }\right) \), where \( {f}_{\chi } = f\left( {\left( {1 + p}\right) {\left( 1 + T\right) }^{-1} - 1,\chi }\right) \) and \( f\left( {T,\chi }\right) \) gives the \( p \) -adic \( L \) -function. Therefore
\[
{\varepsilon }_{\chi }\left( {{U}_{1}^{\infty }/{\bar{C}}_{1}^{\infty }}\right) /{P}_{n} \simeq \Lambda /\left( {{f}_{\chi },{P}_{n}}\right)
\]
The roots of \( {P}_{n} \) are \( {\zeta }_{{p}^{n}}^{j} - 1 \) with \( 0 \leq j < {p}^{n} \) . Theorem 7.10 says that
\[
f\left( {{\zeta }_{{p}^{n}}^{j}{\left( 1 + p\right) }^{s} - 1,\chi }\right) = {L}_{p}\left( {s,\chi {\psi }_{n}^{j}}\right)
\]
where \( {\zeta }_{{p}^{n}} = {\psi }_{n}\left( {1 + p}\right) \) is a primitive \( {p}^{n} \) th root of unity. Therefore
\[
{f}_{\chi }\left( {{\zeta }_{{p}^{n}}^{j} - 1}\right) = f\left( {{\zeta }_{{p}^{n}}^{-j}\left( {1 + p}\right) - 1,\chi }\right) = {L}_{p}\left( {1,\chi {\psi }_{n}^{-j}}\right) \neq 0
\]
by Corollary 5.30. Therefore \( {f}_{\chi } \) and \( {P}_{n} \) have no common roots. By Lemma 13.7, \( \Lambda /\left( {{f}_{\chi },{P}_{n}}\right) \) is finite, which yields the claim. This completes the proof of Proposition 15.40.
Lemma 15.41. There is an exact sequence
\[
0 \rightarrow {\varepsilon }_{\chi }{\bar{E}}_{1}^{\infty }\overset{\theta }{ \rightarrow }\Lambda \rightarrow \text{ finite } \rightarrow 0.
\]
Proof. We have \( {\varepsilon }_{\chi }{\bar{E}}_{1}^{\infty } \subseteq {\varepsilon }_{\chi }{U}_{1}^{\infty } \simeq \Lambda \) by Theorem 13.54. Since \( \Lambda \) is Noetherian, \( {\varepsilon }_{\chi }{\bar{E}}_{1}^{\infty } \) is finitely generated and torsion-free. By Theorem 13.12, there is a pseudo-isomorphism \( {\varepsilon }_{\chi }
Theorem 10.5. Let \( D \) be a domain in \( \mathbb{C} \) and suppose that \( \left\{ {f}_{n}\right\} \) is a sequence in \( \mathbf{H}\left( D\right) \) with \( {f}_{n} \) not identically 0 for all \( n \) .
(a) If \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {1 - {f}_{n}}\right| \) converges uniformly on compact subsets of \( D \), then \( \mathop{\prod }\limits_{{n = 1}}^{\infty }{f}_{n} \)
converges uniformly on compact subsets of \( D \) to a function \( f \) in \( \mathbf{H}\left( D\right) \) and
\[
{v}_{z}\left( f\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{v}_{z}\left( {f}_{n}\right) \text{ for all }z \in D.
\]
(b) If \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {1 - {f}_{n}}\right| \) diverges pointwise on \( D \), then \( \mathop{\prod }\limits_{{n = 1}}^{\infty }{f}_{n} \) converges to 0 on \( D \) .
Proof. For part (a), we only have to verify the formula for \( {v}_{z}\left( f\right) \) . We note that the sum in that formula is finite (i.e., all but finitely many summands are zero). Let \( {z}_{0} \in D \) and let \( K \subset D \) be a compact set containing a neighborhood of \( {z}_{0} \) . There is an \( N \) in \( {\mathbb{Z}}_{ > 0} \) such that \( \left| {1 - {f}_{n}\left( z\right) }\right| < \frac{1}{2} \) for all \( z \in K \) and all \( n \geq N \) . Therefore, \( {f}_{n}\left( z\right) \neq 0 \) for all \( z \in K \) and for all \( n \geq N \) . Thus
\[
{v}_{{z}_{0}}\left( f\right) = {v}_{{z}_{0}}\left( {\mathop{\prod }\limits_{{n = 1}}^{{N - 1}}{f}_{n}}\right) + {v}_{{z}_{0}}\left( {\mathop{\prod }\limits_{{n = N}}^{\infty }{f}_{n}}\right) = \mathop{\sum }\limits_{{n = 1}}^{{N - 1}}{v}_{{z}_{0}}\left( {f}_{n}\right) + 0.
\]
Part (b) is easily verified.
## 10.2 Holomorphic Functions with Prescribed Zeros
Our goal is to construct a holomorphic function with arbitrarily prescribed zeros (at a discrete set of points in any given domain). To this end we begin by defining the elementary functions, first introduced by Weierstrass. We investigate some of their properties and then use them along with Theorem 10.5 to construct the required holomorphic functions.
Definition 10.6. The Weierstrass elementary functions are the entire functions \( {E}_{p} \) , for \( p \in {\mathbb{Z}}_{ \geq 0} \), defined as follows. Let \( z \in \mathbb{C} \) and set
\[
{E}_{0}\left( z\right) = 1 - z
\]
and, for \( p \in {\mathbb{Z}}_{ > 0} \) ,
\[
{E}_{p}\left( z\right) = \left( {1 - z}\right) \exp \left( {z + \frac{{z}^{2}}{2} + \cdots + \frac{{z}^{p}}{p}}\right) .
\]
Note that, for all nonnegative integers \( p \), \( {E}_{p}\left( 0\right) = 1 \) and \( {E}_{p}\left( z\right) = 0 \) if and only if \( z = 1 \) . Furthermore, the unique zero of \( {E}_{p} \) is simple.
Lemma 10.7. If \( \left| z\right| \leq 1 \), then \( \left| {1 - {E}_{p}\left( z\right) }\right| \leq {\left| z\right| }^{p + 1} \) for all nonnegative integers \( p \) .
Proof. The statement is clearly true if \( p = 0 \) .
If \( p \geq 1 \), we have
\[
{E}_{p}^{\prime }\left( z\right) = \left( {1 - z}\right) {\mathrm{e}}^{z + \frac{{z}^{2}}{2} + \cdots + \frac{{z}^{p}}{p}}\left\lbrack {1 + z + \cdots + {z}^{p - 1}}\right\rbrack - {\mathrm{e}}^{z + \frac{{z}^{2}}{2} + \cdots + \frac{{z}^{p}}{p}}
\]
\[
= - {z}^{p}{\mathrm{e}}^{z + \frac{{z}^{2}}{2} + \cdots + \frac{{z}^{p}}{p}}.
\]
We therefore conclude that \( {v}_{0}\left( {-{E}_{p}^{\prime }}\right) = p \) . Further,
\[
- {E}_{p}^{\prime }\left( z\right) = {z}^{p}{\mathrm{e}}^{z + \frac{{z}^{2}}{2} + \cdots + \frac{{z}^{p}}{p}} = {z}^{p}\mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{1}{n!}{\left( z + \frac{{z}^{2}}{2} + \cdots + \frac{{z}^{p}}{p}\right) }^{n} = \mathop{\sum }\limits_{{n \geq p}}{b}_{n}{z}^{n},
\]
with \( {b}_{p} = 1 \) and \( {b}_{n} > 0 \) for all \( n \geq p \) . Therefore
\[
1 - {E}_{p}\left( z\right) = \mathop{\sum }\limits_{{n \geq p}}\frac{{b}_{n}}{n + 1}{z}^{n + 1}.
\]
Set
\[
\phi \left( z\right) = \frac{1 - {E}_{p}\left( z\right) }{{z}^{p + 1}}
\]
and observe that \( \phi \in \mathbf{H}\left( \mathbb{C}\right) \) and that \( \phi \left( z\right) = \mathop{\sum }\limits_{{n \geq 0}}{a}_{n}{z}^{n} \), with \( {a}_{n} > 0 \) for all \( n \in {\mathbb{Z}}_{ \geq 0} \) .
For \( \left| z\right| \leq 1 \), we have
\[
\left| {\phi \left( z\right) }\right| = \left| {\mathop{\sum }\limits_{{n \geq 0}}{a}_{n}{z}^{n}}\right| \leq \mathop{\sum }\limits_{{n \geq 0}}{a}_{n}\left| {z}^{n}\right| \leq \mathop{\sum }\limits_{{n \geq 0}}{a}_{n} = \phi \left( 1\right) = 1
\]
thus \( \left| {1 - {E}_{p}\left( z\right) }\right| \leq {\left| z\right| }^{p + 1} \) for \( \left| z\right| \leq 1 \) .
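The bound of Lemma 10.7 is easy to test numerically. The following Python sketch (our illustration, not part of the text; the sample grid is an arbitrary choice) evaluates \( {E}_{p} \) directly from Definition 10.6 and checks \( \left| {1 - {E}_{p}\left( z\right) }\right| \leq {\left| z\right| }^{p + 1} \) on points of the closed unit disk.

```python
import cmath

def E(p: int, z: complex) -> complex:
    """Weierstrass elementary factor E_p(z) from Definition 10.6."""
    if p == 0:
        return 1 - z
    return (1 - z) * cmath.exp(sum(z**k / k for k in range(1, p + 1)))

# Sample points with |z| <= 1 (arbitrary test grid).
samples = [0.0, 0.5, -0.9, 0.3 + 0.4j, -0.6 + 0.8j, 1.0, 1j]
for p in range(5):
    for z in samples:
        z = complex(z)
        assert abs(1 - E(p, z)) <= abs(z) ** (p + 1) + 1e-12
```

Note the case of equality at \( z = 1 \), where \( {E}_{p}\left( 1\right) = 0 \) and both sides equal 1.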
Theorem 10.8 (Weierstrass Theorem). Assume that \( \left\{ {z}_{n}\right\} \) is a sequence of nonzero complex numbers with \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\left| {z}_{n}\right| = \infty \) .
If \( \left\{ {p}_{n}\right\} \subseteq {\mathbb{Z}}_{ \geq 0} \) is a sequence of nonnegative integers with the property that for all positive real numbers \( r \) we have
\[
\mathop{\sum }\limits_{{n = 1}}^{\infty }{\left( \frac{r}{\left| {z}_{n}\right| }\right) }^{1 + {p}_{n}} < \infty
\]
(10.1)
then the infinite product
\[
P\left( z\right) = \mathop{\prod }\limits_{{n = 1}}^{\infty }{E}_{{p}_{n}}\left( \frac{z}{{z}_{n}}\right), z \in \mathbb{C}
\]
defines an entire function whose zero set is \( \left\{ {{z}_{1},{z}_{2},\ldots }\right\} \) . More precisely, if \( z = c \) appears \( v \geq 0 \) times in the above sequence of zeros, then \( {v}_{c}\left( P\right) = v \) .
Furthermore, condition (10.1) is always satisfied for \( {p}_{n} = n - 1 \) . Thus any discrete set in \( \mathbb{C} \) is the zero set of an entire function.
Proof. We first show that (10.1) holds for \( {p}_{n} = n - 1 \) . In this case we have to show convergence of the series \( \sum {a}_{n} \), with \( {a}_{n} = {\left( \frac{r}{\left| {z}_{n}\right| }\right) }^{n} \) . But \( {\left| {a}_{n}\right| }^{\frac{1}{n}} \rightarrow 0 \) as \( n \rightarrow \infty \) , and the root test allows us to conclude that
\[
\mathop{\sum }\limits_{{n = 1}}^{\infty }{\left( \frac{r}{\left| {z}_{n}\right| }\right) }^{n} < \infty
\]
Now let \( \left\{ {p}_{n}\right\} \) be any sequence of nonnegative integers satisfying condition (10.1) for all \( r > 0 \) ; fix \( r > 0 \) and assume that \( \left| z\right| \leq r \) . From Lemma 10.7 we conclude
that
\[
\left| {1 - {E}_{{p}_{n}}\left( \frac{z}{{z}_{n}}\right) }\right| \leq {\left| \frac{z}{{z}_{n}}\right| }^{{p}_{n} + 1} \leq {\left( \frac{r}{\left| {z}_{n}\right| }\right) }^{{p}_{n} + 1}.
\]
Therefore we can apply Theorem 10.5 to conclude that \( \prod {E}_{{p}_{n}}\left( \frac{z}{{z}_{n}}\right) \) converges uniformly on all compact subsets of \( \mathbb{C} \) to an entire function that has the required zero set.
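As a concrete illustration of Theorem 10.8 (our choice of data, not the author's): with \( {z}_{n} = n \) the series \( \sum {\left( r/n\right) }^{2} \) converges for every \( r \), so \( {p}_{n} = 1 \) satisfies (10.1), and the partial products of \( P \) already vanish exactly at the prescribed zeros.

```python
import cmath

def E1(z: complex) -> complex:
    # Elementary factor E_1(z) = (1 - z) e^z.
    return (1 - z) * cmath.exp(z)

def P(z: complex, N: int = 2000) -> complex:
    # Partial product of P(z) = prod_{n>=1} E_1(z / n), with zero set {1, 2, 3, ...}.
    out = 1.0 + 0j
    for n in range(1, N + 1):
        out *= E1(z / n)
    return out

assert P(3) == 0          # the factor n = 3 is E_1(1) = 0 exactly
assert abs(P(0.5)) > 0.1  # no prescribed zero at 0.5
```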
We next prove a generalization of two consequences of the fundamental theorem of algebra, which has been proven previously. The first of these algebraic consequences is that for every finite sequence \( \left\{ {{z}_{1},\ldots ,{z}_{n}}\right\} \) of points in the complex plane (that may contain repeated points) there is a polynomial vanishing precisely at the points of that sequence. The second is that every nonzero complex polynomial \( p \) has a factorization
\[
p\left( z\right) = c\mathop{\prod }\limits_{{j = 1}}^{n}\left( {z - {z}_{j}}\right) \text{ for all }z \in \mathbb{C},
\]
where \( c \) is a nonzero constant and \( \left\{ {z}_{j}\right\} \) are the zeros of \( p \), repeated according to their multiplicity. We need the analytic tools that were developed to handle infinite sequences.
Theorem 10.9 (Weierstrass Factorization Theorem). Let \( f \) be in \( \mathbf{H}\left( \mathbb{C}\right) - \{ 0\} \) , and set \( k = {v}_{0}\left( f\right) \) . Let \( \left\{ {{z}_{n};n \in I}\right\} \) denote the zeros of \( f \) in \( \mathbb{C} - \{ 0\} \), listed according to their multiplicities.
There exist a \( g \in \mathbf{H}\left( \mathbb{C}\right) \) and a sequence of nonnegative integers \( \left\{ {{p}_{n};n \in I}\right\} \) such
that
\[
f\left( z\right) = {z}^{k}{\mathrm{e}}^{g\left( z\right) }\mathop{\prod }\limits_{{n \in I}}{E}_{{p}_{n}}\left( \frac{z}{{z}_{n}}\right)
\]
for all \( z \) in \( \mathbb{C} \) .
Proof. Observe that \( I \subseteq \mathbb{N} \) may be finite (including the possibility that \( I \) is empty) or countable. In the finite case the theorem has already been established. In any case, we can choose any sequence \( \left\{ {{p}_{n};n \in I}\right\} \) of nonnegative integers such that (10.1) holds for all \( r > 0 \) and set
\[
P\left( z\right) = \mathop{\prod }\limits_{{n \in I}}{E}_{{p}_{n}}\left( \frac{z}{{z}_{n}}\right) \text{ and }G\left( z\right) = \frac{f\left( z\right) }{{z}^{k}P\left( z\right) }.
\]
Then \( G \in \mathbf{H}\left( \mathbb{C}\right) \) and \( G\left( z\right) \neq 0 \) for all \( z \in \mathbb{C} \) . Since \( G \) is a nonvanishing entire function, there is a \( g \in \mathbf{H}\left( \mathbb{C}\right) \) with \( {\mathrm{e}}^{g\left( z\right) } = G\left( z\right) \) for all \( z \in \mathbb{C} \) .
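A classical instance of this factorization (our illustration, not carried out in the text) is \( \sin {\pi z} = {\pi z}\mathop{\prod }\limits_{{n \geq 1}}\left( {1 - {z}^{2}/{n}^{2}}\right) \), where the factors \( {E}_{1}\left( {z/n}\right) {E}_{1}\left( {-z/n}\right) = 1 - {z}^{2}/{n}^{2} \) pair the zeros \( \pm n \) and the exponential factors cancel. A quick numeric check of the partial products:

```python
import math

def sin_product(z: float, N: int = 200_000) -> float:
    # Partial Weierstrass product for sin(pi z): pi z * prod_{n=1}^{N} (1 - z^2 / n^2).
    out = math.pi * z
    for n in range(1, N + 1):
        out *= 1 - z * z / (n * n)
    return out

z = 0.3
approx, exact = sin_product(z), math.sin(math.pi * z)
assert abs(approx - exact) / abs(exact) < 1e-4
```

The truncation error of the partial product is of order \( {z}^{2}/N \), which explains the large \( N \) used here.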
Theorem 10.10. Let \( D \) be a proper subdomain of \( \widehat{\mathbb{C}} \) . Let \( A \) be a subset of \( D \) that has no limit point in \( D \), and let \( v \) be a function mapping \( A \) to \( {\mathbb{Z}}_{ > 0} \) . Then there exists a function \( f \in \mathbf{H}\left( D\right) \) with \( {v}_{z}\left( f\right) = v\left( z\right) \) for all \( z \in A \), whose restriction to \( D - A \) has no zeros.
Proof. To begin, we make the following observations:
1. \( A \) is either finite or countable.
2. Without loss of generality, we may assume that \( \infty \in D - A \) and that \( A \) is nonempty.
3. If \( A \) is finite, let \( A = \left\{ {{z}_{1},\ldots ,{z}_{n}}\right\} \) . Set \( {v}_{j} = v\left( {z}_{j}\right) \), for all \( 1 \leq j \leq n \), and choose \( {z}_{0} \in \mathbb{C} - D \) . In this case we set
\[
f\left( z\right
Proposition 12.6. For a group extension \( G\overset{\kappa }{ \rightarrow }E\overset{\rho }{ \rightarrow }Q \) the following conditions are equivalent:
(1) There exists a homomorphism \( \mu : Q \rightarrow E \) such that \( \rho \circ \mu = {1}_{Q} \) .
(2) There is a cross-section of \( E \) relative to which \( {s}_{a, b} = 1 \) for all \( a, b \in Q \) .
(3) \( E \) is equivalent to a semidirect product of \( G \) by \( Q \) .
(4) Relative to any cross-section of \( E \) there exists a mapping \( u : a \mapsto {u}_{a} \) of
\( Q \) into \( E \) such that \( {u}_{1} = 1 \) and \( {s}_{a, b} = {}^{a}{u}_{b}{u}_{a}{u}_{ab}^{-1} \) for all \( a, b \in Q \) .
A group extension splits when it satisfies these conditions.
Proof. (1) implies (2). If (1) holds, then \( {p}_{a} = \mu \left( a\right) \) is a cross-section of \( E \) , relative to which \( {s}_{a, b} = 1 \) for all \( a, b \), since \( \mu \left( a\right) \mu \left( b\right) = \mu \left( {ab}\right) \) .
(2) implies (3). If \( {s}_{a, b} = 1 \) for all \( a, b \), then \( \varphi : Q \rightarrow \operatorname{Aut}\left( G\right) \) is a homomorphism, by \( \left( A\right) \), and \( \left( M\right) \) shows that \( E\left( {s,\varphi }\right) = G{ \rtimes }_{\varphi }Q \) . Then \( E \) is equivalent to \( E\left( {s,\varphi }\right) \), by Schreier’s theorem.
(3) implies (4). A semidirect product \( G{ \rtimes }_{\psi }Q \) of \( G \) by \( Q \) is a group extension \( E\left( {t,\psi }\right) \) in which \( {t}_{a, b} = 1 \) for all \( a, b \) . If \( E \) is equivalent to \( G{ \rtimes }_{\psi }Q \), then, relative to any cross-section of \( E, E\left( {s,\varphi }\right) \) and \( E\left( {t,\psi }\right) \) are equivalent, and \( \left( E\right) \) yields \( {s}_{a, b} = {u}_{a}{}_{\psi }^{a}{u}_{b}{t}_{a, b}{u}_{ab}^{-1} = {}_{\varphi }^{a}{u}_{b}{u}_{a}{u}_{ab}^{-1} \) for all \( a, b \in Q \) .
(4) implies (1). If \( {s}_{a, b} = {}^{a}{u}_{b}{u}_{a}{u}_{ab}^{-1} \) for all \( a, b \in Q \), then \( {u}_{a}^{-1}{}^{a}\left( {u}_{b}^{-1}\right) {s}_{a, b} \) \( = {u}_{ab}^{-1} \) and \( \mu : a \mapsto \kappa \left( {u}_{a}^{-1}\right) {p}_{a} \) is a homomorphism, since
\[
\mu \left( a\right) \mu \left( b\right) = \kappa \left( {{u}_{a}^{-1}{}^{a}\left( {u}_{b}^{-1}\right) {s}_{a, b}}\right) {p}_{ab} = \kappa \left( {u}_{ab}^{-1}\right) {p}_{ab} = \mu \left( {ab}\right) .▱
\]
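To make conditions (1)-(3) concrete, here is a small sketch (our example, not from the text): the symmetric group \( {S}_{3} \) realized as the semidirect product \( {\mathbb{Z}}_{3}{ \rtimes }_{\varphi }{\mathbb{Z}}_{2} \), written additively, with multiplication \( \left( {x, a}\right) \left( {y, b}\right) = \left( {x + {}^{a}y, a + b}\right) \); the map \( \mu \left( a\right) = \left( {0, a}\right) \) is a homomorphism with \( \rho \circ \mu = {1}_{Q} \), so the extension splits.

```python
# G = Z_3 (additive), Q = Z_2 (additive); the nontrivial element of Q acts by negation.
def act(a: int, y: int) -> int:
    return y % 3 if a % 2 == 0 else (-y) % 3

def mul(p, q):
    # Multiplication (M) of the semidirect product: (x, a)(y, b) = (x + a.y, a + b).
    (x, a), (y, b) = p, q
    return ((x + act(a, y)) % 3, (a + b) % 2)

E = [(x, a) for x in range(3) for a in range(2)]
rho = lambda p: p[1]           # projection onto Q
mu = lambda a: (0, a % 2)      # candidate splitting homomorphism

# mu is a homomorphism and a section of rho, so the extension splits.
for a in range(2):
    for b in range(2):
        assert mul(mu(a), mu(b)) == mu((a + b) % 2)
    assert rho(mu(a)) == a

# Sanity: E has order 6 and is nonabelian (it is S_3).
assert len(E) == 6 and mul((1, 0), (0, 1)) != mul((0, 1), (1, 0))
```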
Extensions of abelian groups. Schreier’s theorem becomes much nicer if \( G \) is abelian. Then \( \left( A\right) \) implies \( {}^{a}\left( {{}^{b}x}\right) = {}^{ab}x \) for all \( a, b, x \), so that the set action of \( Q \) on \( G \) is a group action. Equivalently, \( \varphi : Q \rightarrow \operatorname{Aut}\left( G\right) \) is a homomorphism. Theorem 12.4 then simplifies as follows.
Corollary 12.7. Let \( G \) be an abelian group, let \( Q \) be a group, let \( s : Q \times Q \rightarrow \) \( G \) be a mapping, and let \( \varphi : Q \rightarrow \operatorname{Aut}\left( G\right) \) be a homomorphism, such that
\[
{s}_{a,1} = 1 = {s}_{1, a}\text{ and }{s}_{a, b}{s}_{{ab}, c} = {}^{a}{s}_{b, c}{s}_{a,{bc}}
\]
for all \( a, b, c \in Q \) . Then \( E\left( {s,\varphi }\right) = G \times Q \) with multiplication \( \left( M\right) \), injection \( x \mapsto \left( {x,1}\right) \), and projection \( \left( {x, a}\right) \mapsto a \) is a group extension of \( G \) by \( Q \) . Conversely, every group extension \( E \) of \( G \) by \( Q \) is equivalent to some \( E\left( {s,\varphi }\right) \) .
If \( G \) is abelian, then condition \( \left( E\right) \) implies \( {}_{\varphi }^{a}x = {}_{\psi }^{a}x \) for all \( a \) and \( x \), so that \( \varphi = \psi \) . Thus, equivalent extensions share the same action, and Proposition 12.5 simplifies as follows.
Corollary 12.8. If \( G \) is abelian, then \( E\left( {s,\varphi }\right) \) and \( E\left( {t,\psi }\right) \) are equivalent if and only if \( \varphi = \psi \) and there exists a mapping \( u : a \mapsto {u}_{a} \) of \( Q \) into \( G \) such that
\[
{u}_{1} = 1\text{ and }{s}_{a, b} = {u}_{a}{}^{a}{u}_{b}{u}_{ab}^{-1}{t}_{a, b}\text{ for all }a, b \in Q.
\]
Corollaries 12.7 and 12.8 yield an abelian group whose elements are essentially the equivalence classes of group extensions of \( G \) by \( Q \) with a given action \( \varphi \) . Two factor sets \( s \) and \( t \) are equivalent when condition \( \left( E\right) \) holds. If \( G \) is abelian, then factor sets can be multiplied pointwise: \( {\left( s \cdot t\right) }_{a, b} = {s}_{a, b}{t}_{a, b} \), and the result is again a factor set, by 12.7. Under pointwise multiplication, factor sets \( s : Q \times Q \rightarrow G \) then constitute an abelian group \( {Z}_{\varphi }^{2}\left( {Q, G}\right) \) . Split factor sets (factor sets \( {s}_{a, b} = {u}_{a}{}^{a}{u}_{b}{u}_{ab}^{-1} \) with \( {u}_{1} = 1 \) ) constitute a subgroup \( {B}_{\varphi }^{2}\left( {Q, G}\right) \) of \( {Z}_{\varphi }^{2}\left( {Q, G}\right) \) . By 12.8, two factor sets are equivalent if and only if they lie in the same coset of \( {B}_{\varphi }^{2}\left( {Q, G}\right) \) ; hence equivalence classes of factor sets constitute an abelian group \( {H}_{\varphi }^{2}\left( {Q, G}\right) = {Z}_{\varphi }^{2}\left( {Q, G}\right) /{B}_{\varphi }^{2}\left( {Q, G}\right) \), the second cohomology group of \( Q \) with coefficients in \( G \) . (The cohomology of groups is defined in full generality in Section XII.7; it has become a major tool of group theory.)
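In the smallest interesting case these groups can be enumerated directly. The sketch below (our computation; the text does not carry it out) takes \( Q = G = {\mathbb{Z}}_{2} \), written additively with trivial action and without the normalization \( {u}_{1} = 1 \) (the quotient comes out the same); the result \( \left| {H}^{2}\right| = 2 \) reflects the two extensions of \( {\mathbb{Z}}_{2} \) by \( {\mathbb{Z}}_{2} \) with trivial action, namely \( {\mathbb{Z}}_{4} \) and the Klein four-group.

```python
from itertools import product

Q = [0, 1]  # Z_2, additive; trivial action on G = Z_2

def is_cocycle(s):
    # Additive form of the factor-set identity: s(a,b) + s(a+b,c) = s(b,c) + s(a,b+c).
    return all(
        (s[a, b] + s[(a + b) % 2, c]) % 2 == (s[b, c] + s[a, (b + c) % 2]) % 2
        for a, b, c in product(Q, repeat=3)
    )

maps = [dict(zip(list(product(Q, repeat=2)), vals)) for vals in product([0, 1], repeat=4)]
cocycles = {
    tuple(s[a, b] for a, b in product(Q, repeat=2)) for s in maps if is_cocycle(s)
}

# Coboundaries: s(a,b) = u(a) + u(b) - u(a+b) for some u: Q -> G.
coboundaries = set()
for u in product([0, 1], repeat=2):
    coboundaries.add(
        tuple((u[a] + u[b] - u[(a + b) % 2]) % 2 for a, b in product(Q, repeat=2))
    )

assert len(cocycles) == 4 and len(coboundaries) == 2
assert len(cocycles) // len(coboundaries) == 2  # |H^2(Z_2, Z_2)| = 2
```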
The abelian group \( {H}_{\varphi }^{2}\left( {Q, G}\right) \) classifies extensions of \( G \) by \( Q \), meaning that there is a one-to-one correspondence between elements of \( {H}_{\varphi }^{2}\left( {Q, G}\right) \) and equivalence classes of extensions of \( G \) by \( Q \) with the action \( \varphi \) . (These equivalence classes would constitute an abelian group if they were sets and could be allowed to belong to sets.)
Hölder's Theorem. As a first application of Schreier's theorem we find all extensions of one cyclic group by another.
Theorem 12.9 (Hölder). A group \( G \) is an extension of a cyclic group of order \( m \) by a cyclic group of order \( n \) if and only if \( G \) is generated by two elements \( a \) and \( b \) such that \( a \) has order \( m,{b}^{n} = {a}^{t},{b}^{i} \notin \langle a\rangle \) when \( 0 < i < n \), and \( {ba}{b}^{-1} = {a}^{r} \), where \( {r}^{n} \equiv 1 \) and \( {rt} \equiv t\left( {\;\operatorname{mod}\;m}\right) \) . Such a group exists for every choice of integers \( r, t \) with these properties.
Proof. First let \( G = \langle a, b\rangle \), where \( a \) has order \( m,{b}^{n} = {a}^{t},{b}^{i} \notin \langle a\rangle \) when \( 0 < i < n \), and \( {ba}{b}^{-1} = {a}^{r} \), where \( {r}^{n} \equiv 1 \) and \( {rt} \equiv t\left( {\;\operatorname{mod}\;m}\right) \) . Then \( A = \langle a\rangle \) is cyclic of order \( m \) . Since \( b \) has finite order, every element of \( G \) is a product of \( a \) ’s and \( b \) ’s, and it follows from \( {ba}{b}^{-1} = {a}^{r} \) that \( A \leqq G \) . Then \( G/A \) is generated by \( {Ab} \) ; since \( {b}^{n} \in A \) but \( {b}^{i} \notin A \) when \( 0 < i < n,{Ab} \) has order \( n \) in \( G/A \), and \( G/A \) is cyclic of order \( n \) . Thus \( G \) is an extension of a cyclic group of order \( m \) by a cyclic group of order \( n \) .
Conversely, assume that \( G \) is an extension of a cyclic group of order \( m \) by a cyclic group of order \( n \) . Then \( G \) has a normal subgroup \( A \) that is cyclic of order \( m \), such that \( G/A \) is cyclic of order \( n \) . Let \( A = \langle a\rangle \) and \( G/A = \langle {Ab}\rangle \) , where \( a, b \in G \) . The elements of \( G/A \) are \( A,{Ab},\ldots, A{b}^{n - 1} \) ; therefore \( G \) is generated by \( a \) and \( b \) . Moreover, \( a \) has order \( m,{b}^{n} = {a}^{t} \) for some \( t \) , \( {b}^{i} \notin \langle a\rangle \) when \( 0 < i < n \), and \( {ba}{b}^{-1} = {a}^{r} \) for some \( r \), since \( A \leqq G \) . Then \( {a}^{rt} = b{a}^{t}{b}^{-1} = b{b}^{n}{b}^{-1} = {a}^{t} \) and \( {rt} \equiv t\left( {\;\operatorname{mod}\;m}\right) \) . Also \( {b}^{2}a{b}^{-2} = b{a}^{r}{b}^{-1} = \) \( {\left( {a}^{r}\right) }^{r} = {a}^{{r}^{2}} \) and, by induction, \( {b}^{k}a{b}^{-k} = {a}^{{r}^{k}} \) ; hence \( a = {b}^{n}a{b}^{-n} = {a}^{{r}^{n}} \) and \( {r}^{n} \equiv 1\left( {\;\operatorname{mod}\;m}\right) \) .
In the above, \( 1, b,\ldots ,{b}^{n - 1} \) is a cross-section of \( G \) . The corresponding action is \( {}^{A{b}^{j}}{a}^{i} = {b}^{j}{a}^{i}{b}^{-j} = {a}^{i{r}^{j}} \) . If \( 0 \leqq j, k < n \), then \( {b}^{j}{b}^{k} = {b}^{j + k} \) if \( j + k < n \) , \( {b}^{j}{b}^{k} = {a}^{t}{b}^{j + k - n} \) if \( j + k \geqq n \) ; this yields the corresponding factor set. This suggests a construction of \( G \) for any suitable \( m, n, r, t \) .
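These relations can be checked mechanically on a concrete instance (our choice of parameters, not the author's): take \( m = 7, n = 3, r = 2, t = 0 \), so that \( {r}^{n} = 8 \equiv 1 \) and \( {rt} \equiv t\left( {\;\operatorname{mod}\;m}\right) \) . Since \( t = 0 \), the group is a semidirect product, with elements \( {a}^{i}{b}^{j} \) encoded as pairs \( \left( {i, j}\right) \) .

```python
m, n, r, t = 7, 3, 2, 0
assert pow(r, n, m) == 1 and (r * t) % m == t % m

def mul(p, q):
    # (a^i b^j)(a^k b^l) = a^{i + k r^j} b^{j+l}; valid here because t = 0.
    (i, j), (k, l) = p, q
    return ((i + k * pow(r, j, m)) % m, (j + l) % n)

a, b, e = (1, 0), (0, 1), (0, 0)

def power(g, k):
    out = e
    for _ in range(k):
        out = mul(out, g)
    return out

b_inv = power(b, n - 1)                      # b^{-1} = b^{n-1}, since b^n = e when t = 0
assert power(a, m) == e and power(b, n) == e
assert mul(mul(b, a), b_inv) == power(a, r)  # b a b^{-1} = a^r
```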
Assume that \( m, n > 0,{r}^{n} \equiv 1 \), and \( {rt} \equiv t\left( {\;\operatorname{mod}\;m}\right) \) . Let \( A = \langle a\rangle \) be cyclic of order \( m \) and let \( C = \langle c\rangle \) be cyclic of order \( n \) . Since \( {r}^{n} \equiv 1\left( {\;\operatorname{mod}\;m}\right), r \) and \( m \) are relatively prime and \( \alpha : {a}^{i} \mapsto {a}^{ir} \) is an automorphism of \( A \) . Also, \( {\alpha }^{j}\left( {a}^{i}\right) = {a}^{i{r}^{j}} \) for all \( j \) ; in particular, \( {\alpha }^{n}\left( {a}^{i}\right) = {a}^{i{r}^{n}} = {a}^{i} \) . Hence \( {\alpha }^{n} = {1}_{A} \) and there is a homomorphism \( \varphi : C \rightarrow \) Aut \( \left( A\right) \) such that \( \varphi \left( c\right) = \alpha \) . The action of \( C \) on \( A \), written \( {}^{j}{a}^{i} = {}_{\varphi }^{{c}^{j}}{a}^{i} \), is \( {}^{j}{a}^{i} = {\alpha }^{j}\left( {a}^{i}\right) = {a}^{i{r}^{j}} \) .
Define \( s : C \times C \rightarrow A \) as follows: for all \( 0 \leqq j, k < n \) ,
\[
{s}_{j, k} = {s}_{{c}^{j},{c}^{k}} = \left\
Corollary 5.14. Let \( E \) and \( F \) be affine spaces, and \( A : E \rightarrow F \) a multivalued map whose graph \( \operatorname{gr}\left( A\right) = C \) is a nonempty convex set in \( E \times F \) . Then
\[
\operatorname{dom}\operatorname{rai}\left( {\operatorname{gr}\left( A\right) }\right) \subseteq \operatorname{rai}\operatorname{dom}\left( A\right) .
\]
If \( \operatorname{rai}A\left( x\right) \neq \varnothing \) for all \( x \in \operatorname{rai}\operatorname{dom}\left( A\right) \), then
\[
\operatorname{dom}\operatorname{rai}\left( {\operatorname{gr}\left( A\right) }\right) = \operatorname{rai}\operatorname{dom}\left( A\right) .
\]
In particular, the above equality holds when \( F \) has finite dimension.
Proof. The inclusion follows immediately from Lemma 5.13. If \( x \in \operatorname{rai}\operatorname{dom}\left( A\right) \) and \( y \in \operatorname{rai}A\left( x\right) \neq \varnothing \), then Lemma 5.13 implies that \( \left( {x, y}\right) \in \operatorname{rai}\operatorname{gr}\left( A\right) \), and we have \( x \in \operatorname{dom}\operatorname{rai}\left( {\operatorname{gr}\left( A\right) }\right) \) . If \( x \in \operatorname{rai}\operatorname{dom}\left( A\right) \) and \( F \) is finite-dimensional, then Lemma 5.3 implies that \( \operatorname{rai}A\left( x\right) \neq \varnothing \) .
Corollary 5.15. Let \( {C}_{i} \) be a convex set in an affine space \( {A}_{i}, i = 1,\ldots, k \) . If each \( \operatorname{rai}\left( {C}_{i}\right) \) is nonempty, then
\[
\operatorname{rai}\left( {{C}_{1} \times {C}_{2} \times \cdots \times {C}_{k}}\right) = \operatorname{rai}\left( {C}_{1}\right) \times \operatorname{rai}\left( {C}_{2}\right) \times \cdots \times \operatorname{rai}\left( {C}_{k}\right) .
\]
Proof. The proof is trivial for \( k = 1 \) and follows immediately from Lemma 5.13 for \( k = 2 \) . The proof is easily completed by induction on \( k \) .
One can also give an easy, independent proof of the corollary from scratch.
Lemma 5.16. Let \( A : E \rightarrow F \) be a multivalued affine map between two affine spaces \( E \) and \( F \), and \( C \subseteq E \) a convex set such that \( \operatorname{rai}\left( C\right) \cap \operatorname{dom}\left( A\right) \neq \varnothing \) .
Then we always have
\[
A\left( {\operatorname{rai}\left( C\right) }\right) \subseteq \operatorname{rai}\left( {A\left( C\right) }\right) .
\]
Moreover, if \( E \) is finite-dimensional, or more generally if \( \operatorname{rai}\left( {C \cap {A}^{-1}\left( y\right) }\right) \neq \) \( \varnothing \) for all \( y \in \operatorname{rai}A\left( C\right) \), then
\[
A\left( {\operatorname{rai}\left( C\right) }\right) = \operatorname{rai}\left( {A\left( C\right) }\right)
\]
Proof. Consider the multivalued map \( B : F \rightarrow E \) whose graph is the convex set
\[
\operatorname{gr}\left( B\right) \mathrel{\text{:=}} \operatorname{gr}\left( A\right) \cap \left( {C \times F}\right)
\]
and note that
\[
\operatorname{dom}\left( B\right) = \{ y : \exists x \in C, y \in A\left( x\right) \} = A\left( C\right)
\]
and
\[
B\left( y\right) = C \cap {A}^{-1}\left( y\right) \;\text{ for }y \in \operatorname{dom}\left( B\right) .
\]
We have
\[
\operatorname{rai}\left( {\operatorname{gr}\left( B\right) }\right) = \operatorname{rai}\left( {\operatorname{gr}\left( A\right) }\right) \cap \left( {\operatorname{rai}\left( {C \times F}\right) }\right) = \operatorname{gr}\left( A\right) \cap \left( {\operatorname{rai}\left( C\right) \times F}\right) \neq \varnothing ,
\]
where the last relation follows from the assumption \( \operatorname{rai}\left( C\right) \cap \operatorname{dom}\left( A\right) \neq \varnothing \), and the second equation from the equality \( \operatorname{rai}\left( {\operatorname{gr}\left( A\right) }\right) = \operatorname{gr}\left( A\right) \) and Corollary 5.15. Then the first equality follows from Lemma 5.10. Consequently, we have
\[
\operatorname{dom}\operatorname{rai}\left( {\operatorname{gr}\left( B\right) }\right) = \{ y : \exists x \in \operatorname{rai}\left( C\right), y \in A\left( x\right) \} = A\left( {\operatorname{rai}\left( C\right) }\right) .
\]
With these preparations, the lemma follows immediately from Corollary 5.14.
## 5.4 Topological Interior and Topological Closure of Convex Sets
In this section we compare the algebraic and topological concepts of interior, relative interior, and closure for convex sets. As Theorem 5.20 and Corollary 5.21 show, the algebraic and topological concepts agree to a remarkable degree.
Let us recall some basic topological notions; see [232] for a quick introduction to general topology, and \( \left\lbrack {{161},{45},{46}}\right\rbrack \) for comprehensive treatments. Let \( X \) be a set and \( \mathcal{T} \) a set of subsets of \( X \) . Then \( \left( {X,\mathcal{T}}\right) \) is called a topological space if \( \varnothing \in \mathcal{T}, X \in \mathcal{T} \), and \( \mathcal{T} \) is closed under unions and finite intersections, that is, any union of sets in \( \mathcal{T} \) is in \( \mathcal{T} \), and the intersection of two sets in \( \mathcal{T} \) is in \( \mathcal{T} \) . The sets in \( \mathcal{T} \) are the open sets of the topological space \( \left( {X,\mathcal{T}}\right) \) . A set \( F \subseteq X \) is called closed if \( X \smallsetminus F \) is open. A neighborhood of a point \( x \in X \) is a set \( V \subseteq X \) that contains an open set \( U \in \mathcal{T} \) such that \( x \in U \subseteq V \) . The interior of a set \( A \) in \( X \), denoted by \( \operatorname{int}\left( A\right) \), is the set of points \( x \in A \) such that \( x \) has a neighborhood that lies entirely in \( A \) . The closure of a set \( A \subseteq X \), denoted by \( \bar{A} \), is the intersection of all the closed sets containing \( A \) . Alternatively, a point \( x \in \bar{A} \) if and only if every neighborhood of \( x \) intersects \( A \) .
If \( Y \) is a subset of \( X \), then \( \left( {Y,\mathcal{S}}\right) \) inherits the relative topology from \( \left( {X,\mathcal{T}}\right) \) : the open sets in \( \mathcal{S} \) are simply the sets of the form \( U \cap Y \), where \( U \in \mathcal{T} \) .
A real vector space \( E \) is called a topological vector space if there exists a topology \( \mathcal{T} \) on \( E \) such that the linear operations \( \left( {x, y}\right) \mapsto x + y \) and \( \left( {\alpha, x}\right) \mapsto {\alpha x} \) are continuous maps from the product topological spaces \( E \times E \) and \( \mathbb{R} \times E \) to \( E \), respectively. We refer the reader to any book on functional analysis, for example \( \left\lbrack {{177},{233},{44}}\right\rbrack \), for more details.
Let \( \left( {E,\mathcal{T}}\right) \) be a topological vector space, and \( A \subset E \) an affine subset of \( E \) . Then \( \left( {A,\mathcal{S}}\right) ,\mathcal{S} \) the relative topology inherited from \( \mathcal{T} \), is called a topological affine space.
Definition 5.17. Let \( C \subseteq A \) be a convex set in a topological affine space \( A \) . The relative interior of \( C \), denoted by \( \operatorname{ri}\left( C\right) \), is the interior of \( C \) in the relative topology of the affine space \( \operatorname{aff}\left( C\right) \) .
If the topology on \( A \) is given by a norm, for example, we have
\[
\operatorname{ri}\left( C\right) \mathrel{\text{:=}} \left\{ {x \in C : \exists \varepsilon > 0,{B}_{\varepsilon }\left( x\right) \cap \operatorname{aff}\left( C\right) \subseteq C}\right\} .
\]
Lemma 5.18. Let \( C \) be a convex set in a topological affine space \( A \) with a nonempty interior. If \( x \in \operatorname{int}\left( C\right) \) and \( y \in \bar{C} \), then \( \lbrack x, y) \subseteq \operatorname{int}\left( C\right) \) . Consequently, \( \operatorname{int}\left( C\right) \) is a convex set.
Moreover, \( \operatorname{int}\left( C\right) \subseteq \operatorname{ai}\left( C\right) \) .
Proof. Let \( z \mathrel{\text{:=}} y + t\left( {x - y}\right), t \in \left( {0,1}\right) \) . We claim that \( z \in \operatorname{int}\left( C\right) \) . Let \( U \subset C \) be a neighborhood of \( x \) ; see Figure 5.3. Since \( y = \left( {z - {tx}}\right) /\left( {1 - t}\right) \in \) \( \left( {z - {tU}}\right) /\left( {1 - t}\right) = : V \) and \( v \mapsto \left( {z - {tv}}\right) /\left( {1 - t}\right) \) is a homeomorphism, \( V \) is a neighborhood of \( y \) . Pick \( p \in C \cap V \), and define \( u \in U \) by the equation \( p = \left( {z - {tu}}\right) /\left( {1 - t}\right) \), that is, \( z = p + t\left( {u - p}\right) \) . Then \( z \) lies in the open set \( p + t\left( {U - p}\right) \), which is a subset of \( C \) by the convexity of \( C \) . This proves the claim.
The convexity of \( \operatorname{int}\left( C\right) \) follows from this: if \( x, y \in \operatorname{int}\left( C\right) \), then \( \lbrack x, y) \subseteq \) \( \operatorname{int}\left( C\right) \) . Since \( y \in \operatorname{int}\left( C\right) \) as well, the whole segment \( \left\lbrack {x, y}\right\rbrack \) lies in \( \operatorname{int}\left( C\right) \) .
Let \( x \in \operatorname{int}\left( C\right) \) such that \( x \in U \subseteq C \), where \( U \) is a neighborhood of \( x \) . If \( u \in A \), then the map \( t \mapsto x + t\left( {u - x}\right) \) is continuous; thus there exists \( \delta > 0 \) such that \( x + t\left( {u - x}\right) \in U \) for all \( \left| t\right| \leq \delta \) . This proves that \( x \in \operatorname{ai}\left( C\right) \) .
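For intuition, the segment property in Lemma 5.18 is easy to check in a concrete case (our example, not from the text): take \( C \) the closed unit disk in \( {\mathbb{R}}^{2} \), \( x = \left( {0,0}\right) \in \operatorname{int}\left( C\right) \) and \( y = \left( {1,0}\right) \in \bar{C} \) ; every point of \( \lbrack x, y) \) has norm strictly less than 1 and hence lies in \( \operatorname{int}\left( C\right) \) .

```python
import math

x, y = (0.0, 0.0), (1.0, 0.0)   # x interior point, y boundary point of the closed unit disk

def on_segment(t):
    # z = y + t (x - y), with t in (0, 1), as in the proof of Lemma 5.18
    return (y[0] + t * (x[0] - y[0]), y[1] + t * (x[1] - y[1]))

for k in range(1, 100):
    z = on_segment(k / 100)
    assert math.hypot(*z) < 1.0   # z is an interior point of the disk
```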
Lemma 5.19. Let \( C \) be a nonempty convex set in a topological affine space A. Then \( \bar{C} \) is a convex set, and \( \operatorname{ac}\left( C\right) \subseteq \bar{C} \) .
Proof. Let \( x, y \in \bar{C} \) and \( z \mathrel{\text{:=}} y + t\left( {x - y}\right) = {tx} + \left( {1 - t}\right) y, t \in \left( {0,1}\right) \) . We claim that \( z \in \bar{C} \) . Let \( {U}_{z} \) be a neighborhood of \( z \) . Since the map \( \left( {u, v}\right) \map
Corollary 5.14. Let \( E \) and \( F \) be affine spaces, and \( A : E \rightarrow F \) a multivalued map whose graph \( \operatorname{gr}\left( A\right) = C \) is a nonempty convex set in \( E \times F \) . Then
\[
\operatorname{dom}\operatorname{rai}\left( {\operatorname{gr}\left( A\right) }\right) \subseteq \operatorname{rai}\operatorname{dom}\left( A\right) .
\]
If \( \operatorname{rai}A\left( x\right) \neq \varnothing \) for all \( x \in \operatorname{rai}\operatorname{dom}\left( A\right) \), then
\[
\operatorname{dom}\operatorname{rai}\left( {\operatorname{gr}\left( A\right) }\right) = \operatorname{rai}\operatorname{dom}\left( A\right) .
\]
In particular, the above equality holds when \( F \) has finite dimension.
Proof. The inclusion follows immediately from Lemma 5.13. If \( x \in \operatorname{rai}\operatorname{dom}\left( A\right) \) and \( y \in \operatorname{rai}A\left( x\right) \neq \varnothing \), then Lemma 5.13 implies that \( \left( {x, y}\right) \in \operatorname{rai}\operatorname{gr}\left( A\right) \), and we have \( x \in \operatorname{dom}\operatorname{rai}\left( {\operatorname{gr}\left( A\right) }\right) \) . If \( x \in \operatorname{rai}\operatorname{dom}\left( A\right) \) and \( F \) is finite-dimensional, then Lemma 5.3 implies that \( \operatorname{rai}A\left( x\right) \neq \varnothing \) .
Lemma 11.3.2. Let \( p \) be a prime number and \( \alpha \) an algebraic number. The following conditions are equivalent:
(1) \( \alpha \) is p-integral.
(2) For any embedding \( \sigma \) of \( \overline{\mathbb{Q}} \) into \( {\mathbb{C}}_{p} \) we have \( \left| {\sigma \left( \alpha \right) }\right| \leq 1 \) ; in other words, \( \sigma \left( \alpha \right) \) is p-integral as a p-adic number.
(3) If we fix an embedding of \( \overline{\mathbb{Q}} \) into \( {\mathbb{C}}_{p} \), then all the conjugates of \( \alpha \) are p-integral as p-adic numbers.
Proof. Clear and left to the reader (Exercise 7).
Next, let \( {\chi }_{1} \) and \( {\chi }_{2} \) be two primitive Dirichlet characters, hence with values in \( \overline{\mathbb{Q}} \) (considered as a subfield of \( \mathbb{C} \) or of \( {\mathbb{C}}_{p} \) ; it does not matter here). We define the character \( {\chi }_{1}{\chi }_{2} \) to be the primitive character equivalent to the character \( {\chi }_{1}\left( a\right) {\chi }_{2}\left( a\right) \) . It is clear that the conductor of \( {\chi }_{1}{\chi }_{2} \) divides the LCM of the conductors of \( {\chi }_{1} \) and \( {\chi }_{2} \) . In addition, we have the following:
Lemma 11.3.3. If either \( {\chi }_{1}\left( a\right) \neq 0 \) or \( {\chi }_{2}\left( a\right) \neq 0 \) we have \( {\chi }_{1}{\chi }_{2}\left( a\right) = \) \( {\chi }_{1}\left( a\right) {\chi }_{2}\left( a\right) \) .
Proof. If \( {\chi }_{1}\left( a\right) \neq 0 \) and \( {\chi }_{2}\left( a\right) \neq 0 \) we have by definition \( \left( {{\chi }_{1}{\chi }_{2}}\right) \left( a\right) = \) \( {\chi }_{1}\left( a\right) {\chi }_{2}\left( a\right) \) . If exactly one of them is nonzero, say \( {\chi }_{1}\left( a\right) \neq 0 \) and \( {\chi }_{2}\left( a\right) = 0 \) , then since \( {\chi }_{2} \) is primitive we have
\[
0 = {\chi }_{2}\left( a\right) = \left( {\left( {{\chi }_{1}{\chi }_{2}}\right) {\chi }_{1}^{-1}}\right) \left( a\right) = \left( {{\chi }_{1}{\chi }_{2}}\right) \left( a\right) {\chi }_{1}^{-1}\left( a\right) ,
\]
so that \( \left( {{\chi }_{1}{\chi }_{2}}\right) \left( a\right) = 0 = {\chi }_{1}\left( a\right) {\chi }_{2}\left( a\right) \), as claimed.
## 11.3.2 Definition and Basic Properties of \( p \) -adic \( L \) -Functions
Since the Hurwitz zeta function is the building block of Dirichlet \( L \) -functions it is now easy to define \( p \) -adic \( L \) -functions. This is essentially due to Kubota-Leopoldt, and I loosely follow the presentation given in Washington's book [Was]. Note, however, that the modern way of giving the definitions and proofs is through the use of \( p \) -adic measures, but to stay in the spirit of this book (and of the author!) I have avoided doing so. See for instance the paper of Colmez [Colm] for an introduction to the subject using \( p \) -adic measures.
By Proposition 10.2.5 we know that if \( \chi \) is a (not necessarily primitive) character modulo \( f \) then as a complex function we have
\[
L\left( {\chi, s}\right) = {f}^{-s}\mathop{\sum }\limits_{{1 \leq a \leq f}}\chi \left( a\right) \zeta \left( {s, a/f}\right) .
\]
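This identity can be tested numerically: if each Hurwitz zeta is truncated to \( K \) terms, the right-hand side regroups exactly into the first \( {fK} \) terms of the Dirichlet series for \( L\left( {\chi, s}\right) \) . The sketch below uses the quadratic character of conductor 5 and \( s = 2 \) (illustrative choices of mine).

```python
# Check L(chi, s) = f^{-s} * sum_a chi(a) * zeta(s, a/f) at matched
# truncations: with K terms per Hurwitz zeta, the right side regroups
# into the first f*K terms of the Dirichlet series for L(chi, s).
def chi(n):                 # quadratic character of conductor 5
    n %= 5
    return 0 if n == 0 else (1 if n in (1, 4) else -1)

def hurwitz(s, x, K):       # partial sum of the Hurwitz zeta function
    return sum((k + x) ** -s for k in range(K))

f, s, K = 5, 2.0, 2000
rhs = f ** -s * sum(chi(a) * hurwitz(s, a / f, K) for a in range(1, f + 1))
lhs = sum(chi(n) * n ** -s for n in range(1, f * K + 1))
assert abs(lhs - rhs) < 1e-9
```

The agreement is exact up to floating-point rounding, since \( {f}^{-s}{\left( k + a/f\right) }^{-s} = {\left( {kf} + a\right) }^{-s} \) term by term.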
This leads to the following.
Definition 11.3.4. Let \( \chi \) be a primitive character of conductor \( f \) . For \( s \in \) \( {\mathbb{C}}_{p} \) such that \( \left| s\right| < {R}_{p} \) and \( s \neq 1 \), we define
\[
{L}_{p}\left( {\chi, s}\right) = \frac{\langle f{\rangle }^{1 - s}}{f}\mathop{\sum }\limits_{{0 \leq a < f}}\chi \left( a\right) {\zeta }_{p}\left( {s,\frac{a}{f}}\right) .
\]
We define \( {L}_{p}\left( {\chi ,1}\right) = \mathop{\lim }\limits_{{s \rightarrow 1}}{L}_{p}\left( {\chi, s}\right) \) when the limit exists. In particular, if \( \chi \) is the trivial character \( {\chi }_{0} \) we set \( {\zeta }_{p}\left( s\right) = {L}_{p}\left( {{\chi }_{0}, s}\right) = {\zeta }_{p}\left( {s,0}\right) \), and call this function the Kubota-Leopoldt p-adic zeta function.
Remarks. (1) It is important to note that this definition uses the function \( {\zeta }_{p}\left( {s, x}\right) \) both for \( x \in {\mathrm{{CZ}}}_{p} \) and for \( x \in {\mathbb{Z}}_{p} \) : indeed, when \( p \nmid f \) then \( a/f \in {\mathbb{Z}}_{p} \), so the function that occurs is the function \( {\zeta }_{p}\left( {{\chi }_{0}, s, x}\right) \) defined in Definition 11.2.12. On the other hand, when \( p \mid f \) then \( {q}_{p} \mid f \) (since the conductor of a character cannot be congruent to 2 modulo 4). Furthermore, \( \chi \left( a\right) \neq 0 \) only when \( p \nmid a \), so in that case \( a/f \in {\mathrm{{CZ}}}_{p} \) and the function that occurs is the initial function \( {\zeta }_{p}\left( {s, x}\right) \) . The above uniform formula is an additional reason to use the same notation for \( {\zeta }_{p}\left( {s, x}\right) \) when \( x \in {\mathbb{Z}}_{p} \) and \( x \in {\mathrm{{CZ}}}_{p} \) .
(2) Note that we sum from \( a = 0 \) to \( f - 1 \) instead of from 1 to \( f \) in the complex case, where it is essential since \( \zeta \left( {s,0}\right) \) is not defined. Here it makes no difference since we can have \( \chi \left( 0\right) = \chi \left( f\right) \neq 0 \) only for \( \chi = {\chi }_{0} \) , and by Proposition 11.2.20 we have
\[
{\zeta }_{p}\left( {\chi, s,1}\right) = {\zeta }_{p}\left( {\chi, s,0}\right) - \chi {\omega }^{-1}\left( 0\right) = {\zeta }_{p}\left( {\chi, s,0}\right)
\]
when \( \chi \neq \omega \), and in particular when \( \chi = {\chi }_{0} \) . It makes the computations slightly more elegant.
(3) When \( f = {p}^{v} \) with \( v \geq {v}_{p}\left( {q}_{p}\right) \), it is clear from Corollary 11.2.14 applied to \( M = f \) that \( {L}_{p}\left( {\chi, s}\right) = {\zeta }_{p}\left( {\chi, s,0}\right) \), so that the above definition indeed generalizes to arbitrary characters the definition that we have already given in Proposition 11.2.20. Since \( {\zeta }_{p}\left( {\chi, s, x}\right) \) has a Volkenborn integral definition, for future reference we note the following result.
Proposition 11.3.5. If \( \chi \) is defined modulo \( {p}^{v} \) for some \( v \geq 1 \) we have
\[
{L}_{p}\left( {\chi, s}\right) = \frac{1}{s - 1}{\int }_{{\mathbb{Z}}_{p}}\chi \left( t\right) \langle t{\rangle }^{1 - s}{dt}.
\]
To state the next proposition, it is useful to introduce the following notation.
Definition 11.3.6. (1) Let \( m \in {\mathbb{Z}}_{ > 0} \) . We define \( {\chi }_{0, m} \) to be the trivial character modulo 1 when \( p \nmid m \), and to be the trivial character modulo \( p \) when \( p \mid m \) . In other words, \( {\chi }_{0, m}\left( a\right) = 1 \) when \( p \nmid a \) or when \( p \mid a \) but \( p \nmid m \) , and \( {\chi }_{0, m}\left( a\right) = 0 \) when \( p \mid a \) and \( p \mid m \) .
(2) If \( I \subset \mathbb{Z} \), we set
\[
\mathop{\sum }\limits_{{a \in I}}^{\left( p\right) }g\left( a\right) = \mathop{\sum }\limits_{\substack{{a \in I} \\ {p \nmid a} }}g\left( a\right) \;\text{ and similarly }\;\mathop{\prod }\limits_{{a \in I}}^{\left( p\right) }g\left( a\right) = \mathop{\prod }\limits_{\substack{{a \in I} \\ {p \nmid a} }}g\left( a\right) .
\]
In particular, if \( p \mid m \) we have
\[
\mathop{\sum }\limits_{{0 \leq a < m}}^{\left( p\right) }g\left( a\right) = \mathop{\sum }\limits_{{0 \leq a < m}}{\chi }_{0, m}\left( a\right) g\left( a\right) .
\]
Note that the condition in (2) is \( p \nmid a \), and not \( p \nmid g\left( a\right) \) . In certain circumstances it will be essential to have the condition \( p \nmid g\left( a\right) \) instead, and in that case it will be written explicitly. Note also the following.
Lemma 11.3.7. Let \( \chi \) be a nontrivial primitive character of conductor \( f \) , and let \( m \) be a common multiple of \( f \) and \( p \) . Then
\[
\mathop{\sum }\limits_{{0 \leq a < m}}^{\left( p\right) }\chi \left( a\right) = 0.
\]
Proof. By multiplicativity we have
\[
\mathop{\sum }\limits_{{0 \leq a < m}}^{\left( p\right) }\chi \left( a\right) = \mathop{\sum }\limits_{{0 \leq a < m}}\chi \left( a\right) - \chi \left( p\right) \mathop{\sum }\limits_{{0 \leq b < m/p}}\chi \left( b\right) .
\]
Since \( \chi \) is nontrivial and \( f \mid m \) the first sum is zero. If \( p \mid f \) we have \( \chi \left( p\right) = 0 \) . On the other hand, if \( p \nmid f \) we have \( {fp} \mid m \), in other words \( f \mid m/p \), so the second sum is zero.
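Lemma 11.3.7 is easy to test on a concrete character. In the sketch below (choices of character and parameters are mine, purely for illustration), \( \chi \) is the quadratic character of conductor \( f = 5 \), \( p = 3 \), and \( m = {15} \) is a common multiple of \( f \) and \( p \) .

```python
# Concrete instance of Lemma 11.3.7: chi nontrivial and primitive of
# conductor f = 5, p = 3, m = 15 a common multiple of f and p.
# The sum of chi(a) over 0 <= a < m with p not dividing a should vanish.
def chi(a):                 # quadratic character of conductor 5
    a %= 5
    return 0 if a == 0 else (1 if a in (1, 4) else -1)

p, f = 3, 5
m = f * p
total = sum(chi(a) for a in range(m) if a % p != 0)
assert total == 0
```

The proof's decomposition can be read off directly here: the full sum over \( 0 \leq a < {15} \) vanishes because \( \chi \) is nontrivial, and the removed terms \( \chi \left( 3\right) \mathop{\sum }\limits_{{0 \leq b < 5}}\chi \left( b\right) \) vanish for the same reason.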
Proposition 11.3.8. Let \( \chi \) be a primitive character of conductor \( f \), let \( m \in \) \( {\mathbb{Z}}_{ > 0} \) be a multiple of \( f \), and let \( s \in {\mathbb{C}}_{p} \) be such that \( \left| s\right| < {R}_{p} \) and \( s \neq 1 \) .
(1) We have
\[
{L}_{p}\left( {\chi, s}\right) = \frac{\langle m{\rangle }^{1 - s}}{m}\mathop{\sum }\limits_{{0 \leq a < m}}{\chi }_{0, m}\left( a\right) \chi \left( a\right) {\zeta }_{p}\left( {s,\frac{a}{m}}\right) .
\]
(2) If, in addition, \( {q}_{p} \mid m \) we have
\[
{L}_{p}\left( {\chi, s}\right) = \frac{1}{s - 1}\mathop{\sum }\limits_{{0 \leq a < m}}^{\left( p\right) }\chi \left( a\right) \langle a{\rangle }^{1 - s}\mathop{\sum }\limits_{{j \geq 0}}\left( \begin{matrix} 1 - s \\ j \end{matrix}\right) \frac{{m}^{j - 1}}{{a}^{j}}{B}_{j}.
\]
(3) If \( \chi \neq {\chi }_{0} \) then \( {L}_{p}\left( {\chi ,1}\right) \) does indeed exist and is given by the formula
\[
{L}_{p}\left( {\chi ,1}\right) = \mathop{\sum }\limits_{{0 \leq a < m}}^{\left( p\right) }\chi \left( a\right) \left( {-\frac{{\log }_{p}\left( {\langle a\rangle }\right) }{m} + \mathop{\sum }\limits_{{j \geq 1}}{\left( -1\right) }^{j}\frac{{m}^{j - 1}}{{a}^{j}}\frac{{B}_{j}}{j}}\right) ,
\]
where \( m \in {\mathbb{Z}}_{ > 0} \) is any common multiple of \( f \) and \( {q}_{p} \) .
Proof. (1). Writing \( a = {kf} + r \) we have
\[
\frac{\langle m{\rangle }^{1 - s}}{m}\mathop{\sum }\limits_{{0 \leq a < m}}{\chi }_{0, m}\left( a\right) \chi \left( a\right) {\zeta }_{p}\left( {s,\frac{a}{m}}\right)
\]
\[
= \frac{\langle m{\rangle }^{1 - s}}{m}\mathop{\sum }\limits_{{0 \leq r < f}}\chi \left( r\right) \mathop{\sum }\limits_{{0 \l
Corollary 3.34. Suppose \( X \) is a locally convex space over \( \mathbb{R} \) or \( \mathbb{C} \) . Then the topology of \( X \) is given by a directed family of seminorms. This family can be chosen to be countable if \( X \) is first countable.
Proof. \( {\mathcal{B}}_{1} \) exists by Proposition 3.1.
By the way, Reed and Simon [29] define a locally convex space this way.
What does one do if the family is not directed? There is a standard construction that goes as follows, if \( {\mathcal{F}}_{0} \) is any family of seminorms.
1. If \( {\mathcal{F}}_{0} \) is finite, set \( \mathcal{F} = \left\{ {\mathop{\sum }\limits_{{p \in {\mathcal{F}}_{0}}}p}\right\} \) .
2. If \( {\mathcal{F}}_{0} \) is countably infinite, write \( {\mathcal{F}}_{0} = \left\{ {{p}_{1},{p}_{2},{p}_{3},\ldots }\right\} \), and set
\[
\mathcal{F} = \left\{ {\mathop{\sum }\limits_{{j = 1}}^{n}{p}_{j} : n = 1,2,\ldots }\right\}
\]
\[
= \left\{ {{p}_{1},{p}_{1} + {p}_{2},{p}_{1} + {p}_{2} + {p}_{3},\ldots }\right\} \text{.}
\]
3. If \( {\mathcal{F}}_{0} \) is uncountable, set
\[
\mathcal{F} = \left\{ {\mathop{\sum }\limits_{{p \in F}}p : F\text{ is a finite subset of }{\mathcal{F}}_{0}}\right\} .
\]
Suppose \( {x}_{\alpha } \rightarrow x \) in the \( \mathcal{F} \) -topology, where \( \left\langle {x}_{\alpha }\right\rangle \) is a net, and \( \mathcal{F} \) is defined above. Then \( p\left( {{x}_{\alpha } - x}\right) \rightarrow 0 \) in \( \mathbb{R} \) for all \( p \in \mathcal{F} \) since each \( p \in \mathcal{F} \) is continuous. Hence \( p\left( {{x}_{\alpha } - x}\right) \rightarrow 0 \) for all \( p \in {\mathcal{F}}_{0} \) by squeezing. On the other hand, if \( \left\langle {x}_{\alpha }\right\rangle \) is a net in \( X \), and \( x \in X \), and \( p\left( {{x}_{\alpha } - x}\right) \rightarrow 0 \) for all \( p \in {\mathcal{F}}_{0} \), then \( p\left( {{x}_{\alpha } - x}\right) \rightarrow 0 \) for all \( p \in \mathcal{F} \) (finite sums), so that for all \( n \in \mathbb{N} \), there exists \( \beta \) such that \( \alpha \succ \beta \Rightarrow p\left( {{x}_{\alpha } - x}\right) < {2}^{-n} \), that is \( {x}_{\alpha } \in x + B\left( {p,{2}^{-n}}\right) \) . That is, \( {x}_{\alpha } \rightarrow x \) in the topology induced by \( \mathcal{F} \) if and only if \( p\left( {{x}_{\alpha } - x}\right) \rightarrow 0 \) for all \( p \in {\mathcal{F}}_{0} \) . In particular, the convergent nets [and thus the topology, by Proposition 1.3(a)] do not depend on the ordering of the seminorms in Case 2 above.
Now suppose \( \mathcal{F} = \left\{ {{p}_{1},{p}_{2},\ldots }\right\} \) is a countable (ascending) sequence of seminorms on \( X \) . For \( x, y \in X \), set
\[
d\left( {x, y}\right) = \mathop{\sum }\limits_{{j = 1}}^{\infty }{2}^{-j}\frac{{p}_{j}\left( {x - y}\right) }{1 + {p}_{j}\left( {x - y}\right) }.
\]
For the usual reasons, this defines a metric on \( X \) provided \( \mathcal{F} \) is separating, that is \( x \neq 0 \Rightarrow p\left( x\right) > 0 \) for some \( p \in \mathcal{F} \) . The triangle inequality holds because \( a, b \geq 0 \) gives
\[
{\int }_{b}^{a + b}\frac{dx}{{\left( 1 + x\right) }^{2}} = {\int }_{0}^{a}\frac{dx}{{\left( 1 + b + x\right) }^{2}} \leq {\int }_{0}^{a}\frac{dx}{{\left( 1 + x\right) }^{2}},
\]
\[
\text{that is}\frac{a + b}{1 + a + b} - \frac{b}{1 + b} \leq \frac{a}{1 + a}\text{.}
\]
This metric is translation invariant as well. Finally, note that if \( {x}_{n} \rightarrow x \) in the metric topology, then every \( {p}_{j}\left( {{x}_{n} - x}\right) \rightarrow 0 \) (squeezing), while if \( {x}_{n} \rightarrow x \) in the \( \mathcal{F} \) -topology, then every \( {p}_{j}\left( {{x}_{n} - x}\right) \rightarrow 0 \), so that \( d\left( {{x}_{n}, x}\right) \rightarrow 0 \) by the Lebesgue dominated convergence theorem for integrals (i.e., sums) over the positive integers (dominating function \( {2}^{-j} \) ). Thus, the metric gives the \( \mathcal{F} \) -topology. We have (nearly) proved:
Theorem 3.35. Suppose \( X \) is a Hausdorff locally convex space. Then the following are equivalent:
(i) \( X \) is first countable.
(ii) \( X \) is metrizable.
(iii) The topology of \( X \) is given by a translation invariant metric.
(iv) The topology of \( X \) is given by a countable family of seminorms.
Proof. The earlier discussion gives (iv) \( \Rightarrow \) (iii). The implications (iii) \( \Rightarrow \) (ii) and (ii) \( \Rightarrow \) (i) are direct, while (i) \( \Rightarrow \) (iv) comes from Corollary 3.34.
Next, a few words about completeness. A Hausdorff locally convex space (for that matter, a Hausdorff topological vector space) \( X \) is called complete (respectively, sequentially complete) if the additive topological group \( \left( {X, + }\right) \) is complete (respectively, sequentially complete) as a topological group. Completeness and sequential completeness for subsets also refers to \( \left( {X, + }\right) \) as a topological group.
Corollary 3.36. Suppose \( X \) is a Hausdorff locally convex space. Then the following are equivalent:
(i) \( X \) is first countable and complete.
(ii) \( X \) is metrizable and complete.
(iii) The topology of \( X \) is given by a complete, translation invariant metric.
(iv) \( X \) is complete, and the topology of \( X \) is given by a countable family of seminorms.
Proof. Thanks to Theorem 3.35, the only issue is the variation in "completeness" in condition (iii). Sequences are all we need to consider, thanks to Theorem 1.34.
The idea is this: A sequence \( \left\langle {x}_{n}\right\rangle \) is Cauchy in the locally convex topology exactly when we can force \( d\left( {{x}_{n} - {x}_{m},0}\right) < \varepsilon \) by requiring both \( n \) and \( m \) to be large. But
\[
d\left( {{x}_{n} - {x}_{m},0}\right) = d\left( {{x}_{n} - {x}_{m} + {x}_{m},0 + {x}_{m}}\right) = d\left( {{x}_{n},{x}_{m}}\right)
\]
since \( d \) is translation invariant. That is, \( d \) and \( \left( {X, + }\right) \) have the same Cauchy sequences (as well as the same convergent sequences), so if one is complete, then so is the other.
Definition 3.37. A Fréchet space is a Hausdorff locally convex space satisfying any (hence all) of conditions (i)-(iv) in Corollary 3.36.
By the way, for historical reasons (mainly Bourbaki [5]), a Fréchet space is usually defined using condition (ii). When reading condition (ii), keep in mind that "complete" really refers to \( X \) as a locally convex space, not to the metric appearing in "metrizable." It is only for translation invariant metrics that one can identify metric-Cauchy sequences with topological group-Cauchy sequences.
Examples of Fréchet Spaces
I. \( \mathbb{R}\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) and \( \mathbb{C}\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) . (Formal power series.) The \( n \) th seminorm of \( \sum {a}_{n}{x}^{n} \) is \( \left| {a}_{n}\right| \) , or \( \mathop{\sum }\limits_{{j = 0}}^{n}\left| {a}_{j}\right| \) once these are transformed into a directed set. This is one of the simplest, yet it illustrates a complication with the earlier constructions. The metric gives
\[
d\left( {\sum {a}_{n}{x}^{n},0}\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{2}^{-n}\frac{\mathop{\sum }\limits_{{j = 0}}^{n}\left| {a}_{j}\right| }{1 + \mathop{\sum }\limits_{{j = 0}}^{n}\left| {a}_{j}\right| }.
\]
With this metric, the ball of radius \( r \) need not be convex!
Example. \( r = {1.4}, f\left( x\right) = 2 \), and \( g\left( x\right) = {16x} \) :
\[
d\left( {2,0}\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{2}^{-n}\frac{2}{3} = \frac{4}{3} < {1.4}
\]
\[
d\left( {{16x},0}\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n}\frac{16}{17} = \frac{16}{17} < {1.4}
\]
\[
d\left( {1 + {8x},0}\right) = \frac{1}{2} + \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n} \cdot \frac{9}{10} = {1.4}
\]
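So \( d\left( {2,0}\right) \) and \( d\left( {{16x},0}\right) \) are both below 1.4, while the midpoint \( 1 + {8x} \) of 2 and \( {16x} \) sits at distance exactly 1.4 and hence outside the open ball. These three values can be reproduced with exact rational arithmetic; the sketch below (function name and tail-summation scheme are mine) encodes a polynomial by its coefficient list and uses the directed seminorms \( {p}_{n} = \mathop{\sum }\limits_{{j \leq n}}\left| {a}_{j}\right| \) .

```python
from fractions import Fraction

def metric(coeffs):
    """d(f, 0) for the formal power series f with the given (nonempty)
    coefficient list, using the directed seminorms p_n = |a_0|+...+|a_n|."""
    q = Fraction(0)          # running seminorm p_n
    d = Fraction(0)
    for n, a in enumerate(coeffs):
        q += abs(Fraction(a))
        d += Fraction(1, 2 ** n) * q / (1 + q)
    # beyond deg(f) the seminorms are constant, so the tail sums exactly:
    # sum_{n >= N} 2^{-n} * q/(1+q) = 2^{-(N-1)} * q/(1+q)
    N = len(coeffs)
    d += Fraction(1, 2 ** (N - 1)) * q / (1 + q)
    return d

assert metric([2]) == Fraction(4, 3)        # d(2, 0)
assert metric([0, 16]) == Fraction(16, 17)  # d(16x, 0)
assert metric([1, 8]) == Fraction(7, 5)     # d(1 + 8x, 0) = 1.4 exactly
```

The exact values \( 4/3 \) and \( {16}/{17} \) are both less than \( 7/5 \), confirming that the open ball of radius 1.4 contains the endpoints but not their midpoint.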
There is a way to get around this-replace all those sums earlier in this section with maxima. Rudin [32] does this. The cost is that some arguments become complicated due to the unavailability of convergence theorems for sums (i.e., integrals) over the positive integers.
II. \( C\left( H\right) \), the continuous functions on a locally compact, \( \sigma \) -compact Hausdorff space. \( H \) can be written as
\[
H = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{K}_{n}
\]
where each \( {K}_{n} \) is compact, and each \( {K}_{n} \subset \operatorname{int}\left( {K}_{n + 1}\right) \) . Set
\[
{p}_{n}\left( f\right) = \max \left| {f\left( {K}_{n}\right) }\right| .
\]
Fréchet space convergence is uniform convergence on compact sets.

III. \( \mathcal{H}\left( U\right) \), the space of holomorphic functions on a region \( U \subset \mathbb{C} \) . The topology here comes from \( C\left( U\right) \), via Example II.
IV. \( {C}^{\infty }\left( {\mathbb{R}}^{n}\right) \), the space of \( {C}^{\infty } \) functions on \( {\mathbb{R}}^{n} \) .
\[
{p}_{n}\left( f\right) = \max \left\{ {\left| {\frac{{\partial }^{\left| I\right| }f}{\partial {x}^{I}}\left( x\right) }\right| : \parallel x\parallel \leq n,\left| I\right| \leq n}\right\}
\]
( \( I = \left( {{i}_{1},\ldots ,{i}_{n}}\right) \) and \( \left| I\right| = {i}_{1} + \cdots + {i}_{n} \) comes from standard multiindex notation.) This example can be expanded to a \( {C}^{\infty } \) manifold which is \( \sigma \) -compact.
V. \( \mathcal{S}\left( \mathbb{R}\right) \), the Schwartz space of rapidly decreasing functions on \( \mathbb{R} \) . The \( n \) th seminorm is
\[
{p}_{n}\left( f\right) = \mathop{\sup }\limits_{\substack{{x \in \mathbb{R}} \\ {0 \leq j \leq n} }}{\left( 1 + \left| x\right| \right) }^{n}\left| {{f}^{\left( j\right) }\left( x\right) }\right| .
\]
\( \mathcal{S}\left( \mathbb{R}\right) \) is defined as the subset of \( {C}^{\infty }\left( \mathbb{R}\right) \) for which these seminorms are all finite.
VI. (From Sect. 3.1). Suppose \( m \) is Lebesgue measure on \( \left\lbrack {0,1
Exercise 2.3.4 Show that \( \mathbb{Z}\left\lbrack \rho \right\rbrack /\left( \lambda \right) \) has order 3.
We can apply the arithmetic of \( \mathbb{Z}\left\lbrack \rho \right\rbrack \) to solve \( {x}^{3} + {y}^{3} + {z}^{3} = 0 \) for integers \( x, y, z \) . In fact we can show that \( {\alpha }^{3} + {\beta }^{3} + {\gamma }^{3} = 0 \) for \( \alpha ,\beta ,\gamma \in \mathbb{Z}\left\lbrack \rho \right\rbrack \) has no nontrivial solutions (i.e., where none of the variables is zero).
Example 2.3.5 Let \( \lambda = 1 - \rho ,\theta \in \mathbb{Z}\left\lbrack \rho \right\rbrack \) . Show that if \( \lambda \) does not divide \( \theta \) , then \( {\theta }^{3} \equiv \pm 1\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \) . Deduce that if \( \alpha ,\beta ,\gamma \) are coprime to \( \lambda \), then the equation \( {\alpha }^{3} + {\beta }^{3} + {\gamma }^{3} = 0 \) has no nontrivial solutions.
Solution. From the previous problem, we know that if \( \lambda \) does not divide \( \theta \) then \( \theta \equiv \pm 1\left( {\;\operatorname{mod}\;\lambda }\right) \) . Set \( \xi = \theta \) or \( - \theta \) so that \( \xi \equiv 1\left( {\;\operatorname{mod}\;\lambda }\right) \) . We write \( \xi \) as \( 1 + {d\lambda } \) . Then
\[
\pm \left( {{\theta }^{3} \mp 1}\right) = {\xi }^{3} - 1
\]
\[
= \left( {\xi - 1}\right) \left( {\xi - \rho }\right) \left( {\xi - {\rho }^{2}}\right)
\]
\[
= \left( {d\lambda }\right) \left( {{d\lambda } + 1 - \rho }\right) \left( {1 + {d\lambda } - {\rho }^{2}}\right)
\]
\[
= {d\lambda }\left( {{d\lambda } + \lambda }\right) \left( {{d\lambda } - \lambda {\rho }^{2}}\right)
\]
\[
= {\lambda }^{3}d\left( {d + 1}\right) \left( {d - {\rho }^{2}}\right) \text{.}
\]
Since \( {\rho }^{2} \equiv 1\left( {\;\operatorname{mod}\;\lambda }\right) \), we have \( \left( {d - {\rho }^{2}}\right) \equiv \left( {d - 1}\right) \left( {\;\operatorname{mod}\;\lambda }\right) \) . We know from the preceding problem that \( \lambda \) divides one of \( d, d - 1 \), and \( d + 1 \), so we may conclude that \( {\xi }^{3} - 1 \equiv 0\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \), so \( {\xi }^{3} \equiv 1\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \) and \( {\theta }^{3} \equiv \pm 1 \) \( \left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \) . We can now deduce that no solution to \( {\alpha }^{3} + {\beta }^{3} + {\gamma }^{3} = 0 \) is possible with \( \alpha ,\beta \), and \( \gamma \) coprime to \( \lambda \), by considering this equation mod \( {\lambda }^{4} \) . Indeed, if such a solution were possible, then somehow the equation
\[
\pm 1 \pm 1 \pm 1 \equiv 0\;\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right)
\]
could be satisfied. The left side of this congruence gives \( \pm 1 \) or \( \pm 3 \) ; certainly \( \pm 1 \) is not congruent to \( 0\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \) since \( {\lambda }^{4} \) is not a unit. Also, \( \pm 3 \) is not congruent to \( 0\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \) since \( {\lambda }^{2} \) is an associate of 3, and thus \( {\lambda }^{4} \) is not. Thus, there is no solution to \( {\alpha }^{3} + {\beta }^{3} + {\gamma }^{3} = 0 \) if \( \alpha ,\beta ,\gamma \) are coprime to \( \lambda \) .
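The key congruence \( {\theta }^{3} \equiv \pm 1\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \) can also be verified by brute force over a box of Eisenstein integers. In the sketch below (the helper names and the pair encoding are mine), \( a + {b\rho } \) is stored as the pair \( \left( {a, b}\right) \) and multiplication uses \( {\rho }^{2} = - 1 - \rho \) .

```python
from itertools import product

# Eisenstein integers a + b*rho, stored as pairs (a, b), with rho^2 = -1 - rho.
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c - b * d)

def conj(x):                     # conjugation sends rho to rho^2 = -1 - rho
    a, b = x
    return (a - b, -b)

def norm(x):
    a, b = x
    return a * a - a * b + b * b

def divides(w, z):               # w | z in Z[rho] iff z*conj(w)/N(w) is integral
    a, b = mul(z, conj(w))
    n = norm(w)
    return a % n == 0 and b % n == 0

lam = (1, -1)                    # lambda = 1 - rho, of norm 3
lam2 = mul(lam, lam)
lam4 = mul(lam2, lam2)           # lambda^4 = -9 - 9*rho, of norm 81
for a, b in product(range(-5, 6), repeat=2):
    theta = (a, b)
    if divides(lam, theta):
        continue                 # the claim only concerns lambda not dividing theta
    c = mul(mul(theta, theta), theta)          # theta^3
    assert divides(lam4, (c[0] - 1, c[1])) or divides(lam4, (c[0] + 1, c[1]))
```

For instance \( \theta = 2 \) gives \( {\theta }^{3} + 1 = 9 \), which is divisible by \( {\lambda }^{4} \) since \( {\lambda }^{2} \) is an associate of 3.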
Hence if there is a solution to the equation of the previous example, one of \( \alpha ,\beta ,\gamma \) is divisible by \( \lambda \) . Say \( \gamma = {\lambda }^{n}\delta ,\left( {\delta ,\lambda }\right) = 1 \) . We get \( {\alpha }^{3} + {\beta }^{3} + {\delta }^{3}{\lambda }^{3n} = \) \( 0,\delta ,\alpha ,\beta \) coprime to \( \lambda \) .
Theorem 2.3.6 Consider the more general
\[
{\alpha }^{3} + {\beta }^{3} + \varepsilon {\lambda }^{3n}{\delta }^{3} = 0
\]
(2.1)
for a unit \( \varepsilon \) . Any solution for \( \delta ,\alpha ,\beta \) coprime to \( \lambda \) must have \( n \geq 2 \), but if (2.1) can be solved with \( n = m \), it can be solved for \( n = m - 1 \) . Thus, there are no solutions to the above equation with \( \delta ,\alpha ,\beta \) coprime to \( \lambda \) .
Proof. We know that \( n \geq 1 \) from Example 2.3.5. Considering the equation \( {\;\operatorname{mod}\;{\lambda }^{4}} \), we get that \( \pm 1 \pm 1 \pm \varepsilon {\lambda }^{3n} \equiv 0\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \) . There are two possibilities: if \( {\lambda }^{3n} \equiv \pm 2\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \), then certainly \( n \) cannot exceed 1; but if \( n = 1 \), then our congruence implies that \( \lambda \mid 2 \) which is not true. The other possibility is that \( {\lambda }^{3n} \equiv 0\left( {\;\operatorname{mod}\;{\lambda }^{4}}\right) \), from which it follows that \( n \geq 2 \) .
We may rewrite (2.1) as
\[
- \varepsilon {\lambda }^{3n}{\delta }^{3} = {\alpha }^{3} + {\beta }^{3}
\]
\[
= \left( {\alpha + \beta }\right) \left( {\alpha + {\rho \beta }}\right) \left( {\alpha + {\rho }^{2}\beta }\right) \text{.}
\]
We will write these last three factors as \( {A}_{1},{A}_{2} \), and \( {A}_{3} \) for convenience. We can see that \( {\lambda }^{6} \) divides the left side of this equation, since \( n \geq 2 \) . Thus \( {\lambda }^{6} \mid {A}_{1}{A}_{2}{A}_{3} \), and \( {\lambda }^{2} \mid {A}_{i} \) for some \( i \) . Notice that
\[
{A}_{1} - {A}_{2} = {\lambda \beta }
\]
\[
{A}_{1} - {A}_{3} = -{\lambda \beta }{\rho }^{2}
\]
and
\[
{A}_{2} - {A}_{3} = {\lambda \beta \rho }
\]
Since \( \lambda \) divides one of the \( {A}_{i} \), it divides them all, since it divides their differences. Notice, though, that \( {\lambda }^{2} \) does not divide any of these differences, since \( \lambda \) does not divide \( \beta \) by assumption. Thus, the \( {A}_{i} \) are inequivalent \( {\;\operatorname{mod}\;{\lambda }^{2}} \), and only one of the \( {A}_{i} \) is divisible by \( {\lambda }^{2} \) . Since our equation is unchanged if we replace \( \beta \) with \( {\rho \beta } \) or \( {\rho }^{2}\beta \), then without loss of generality we may assume that \( {\lambda }^{2} \mid {A}_{1} \) . In fact, we know that
\[
{\lambda }^{{3n} - 2} \mid {A}_{1}
\]
Now we write
\[
{B}_{1} = {A}_{1}/\lambda
\]
\[
{B}_{2} = {A}_{2}/\lambda
\]
\[
{B}_{3} = {A}_{3}/\lambda
\]
We notice that these \( {B}_{i} \) are pairwise coprime, since if for some prime \( p \), we had \( p \mid {B}_{1} \) and \( p \mid {B}_{2} \), then necessarily we would have
\[
p \mid {B}_{1} - {B}_{2} = \beta
\]
and
\[
p \mid \lambda {B}_{1} + {B}_{2} - {B}_{1} = \alpha .
\]
This is only possible for a unit \( p \) since \( \gcd \left( {\alpha ,\beta }\right) = 1 \) . Similarly, we can verify that the remaining pairs of \( {B}_{i} \) are coprime. Since \( {\lambda }^{{3n} - 2} \mid {A}_{1} \), we have \( {\lambda }^{{3n} - 3} \mid {B}_{1} \) . So we may rewrite (2.1) as
\[
- \varepsilon {\lambda }^{{3n} - 3}{\delta }^{3} = {B}_{1}{B}_{2}{B}_{3}
\]
From this equation we can see that each of the \( {B}_{i} \) is an associate of a cube, since they are relatively prime, and we write
\[
{B}_{1} = {e}_{1}{\lambda }^{{3n} - 3}{C}_{1}^{3}
\]
\[
{B}_{2} = {e}_{2}{C}_{2}^{3}
\]
\[
{B}_{3} = {e}_{3}{C}_{3}^{3}
\]
for units \( {e}_{i} \), and pairwise coprime \( {C}_{i} \) . Now recall that
\[
{A}_{1} = \alpha + \beta
\]
\[
{A}_{2} = \alpha + {\rho \beta }
\]
\[
{A}_{3} = \alpha + {\rho }^{2}\beta
\]
From these equations we have that
\[
{\rho }^{2}{A}_{3} + \rho {A}_{2} + {A}_{1} = \alpha \left( {{\rho }^{2} + \rho + 1}\right) + \beta \left( {{\rho }^{4} + {\rho }^{2} + 1}\right)
\]
\[
= 0
\]
so we have that
\[
0 = {\rho }^{2}\lambda {B}_{3} + {\rho \lambda }{B}_{2} + \lambda {B}_{1}
\]
and
\[
0 = {\rho }^{2}{B}_{3} + \rho {B}_{2} + {B}_{1}
\]
We can then deduce that
\[
{\rho }^{2}{e}_{3}{C}_{3}^{3} + \rho {e}_{2}{C}_{2}^{3} + {e}_{1}{\lambda }^{{3n} - 3}{C}_{1}^{3} = 0
\]
so we can find units \( {e}_{4},{e}_{5} \) so that
\[
{C}_{3}^{3} + {e}_{4}{C}_{2}^{3} + {e}_{5}{\lambda }^{{3n} - 3}{C}_{1}^{3} = 0.
\]
Considering this equation \( {\;\operatorname{mod}\;{\lambda }^{3}} \), and recalling that \( n \geq 2 \), we get that \( \pm 1 \pm {e}_{4} \equiv 0\left( {\;\operatorname{mod}\;{\lambda }^{3}}\right) \) so \( {e}_{4} = \mp 1 \), and we rewrite our equation as
\[
{C}_{3}^{3} + {\left( \mp {C}_{2}\right) }^{3} + {e}_{5}{\lambda }^{3\left( {n - 1}\right) }{C}_{1}^{3} = 0.
\]
This is an equation of the same type as (2.1), so we can conclude that if there exists a solution for (2.1) with \( n = m \), then there exists a solution with \( n = m - 1 \) .
This establishes by descent that no nontrivial solution to (2.1) is possible in \( \mathbb{Z}\left\lbrack \rho \right\rbrack \) .
## 2.4 Some Further Examples
Example 2.4.1 Solve the equation \( {y}^{2} + 4 = {x}^{3} \) for integers \( x, y \) .
Solution. We first consider the case where \( y \) is even. It follows that \( x \) must also be even, which implies that \( {x}^{3} \equiv 0\left( {\;\operatorname{mod}\;8}\right) \) . Now, \( y \) is congruent to 0 or 2 (mod 4). If \( y \equiv 0\left( {\;\operatorname{mod}\;4}\right) \), then \( {y}^{2} + 4 \equiv 4\left( {\;\operatorname{mod}\;8}\right) \), so we can rule out this case. However, if \( y \equiv 2\left( {\;\operatorname{mod}\;4}\right) \), then \( {y}^{2} + 4 \equiv 0\left( {\;\operatorname{mod}\;8}\right) \) . Writing \( y = {2Y} \) with \( Y \) odd, and \( x = {2X} \), we have \( 4{Y}^{2} + 4 = 8{X}^{3} \), so that
\[
{Y}^{2} + 1 = 2{X}^{3}
\]
and
\[
\left( {Y + i}\right) \left( {Y - i}\right) = 2{X}^{3} = \left( {1 + i}\right) \left( {1 - i}\right) {X}^{3}.
\]
We note that \( {Y}^{2} + 1 \equiv 2\left( {\;\operatorname{mod}\;4}\right) \) and so \( {X}^{3} \) is odd. Now,
\[
{X}^{3} = \frac{\left( {Y + i}\right) \left( {Y - i}\right) }{\left( {1 + i}\right) \left( {1 - i}\right) }
\]
\[
= \left( {\frac{1 + Y}{2} + \frac{1 - Y}{2}i}\right) \left( {\frac{1 + Y}{2} - \frac{1 - Y}{2}i}\right)
\]
\[
= {\left( \frac{1 + Y}{2}\right) }^{2} + {\left( \frac{1 - Y}{2}\right) }^{2}.
\]
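As a quick plausibility check for Example 2.4.1 (an illustration only; the search bounds are arbitrary and prove nothing by themselves), a brute-force search turns up the familiar solutions:

```python
# Search y^2 + 4 = x^3 over a small box; the bounds are arbitrary.
solutions = [(x, y) for x in range(1, 100)
             for y in range(-1000, 1001)
             if y * y + 4 == x ** 3]
print(solutions)  # [(2, -2), (2, 2), (5, -11), (5, 11)]
```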
Proposition 4.3.11 Let \( A \subseteq \left\lbrack {0,1}\right\rbrack \) be a strong measure zero set and \( f \) : \( \left\lbrack {0,1}\right\rbrack \rightarrow \mathbb{R} \) a continuous map. Then the set \( f\left( A\right) \) has strong measure zero.
Proof. Let \( \left( {a}_{n}\right) \) be any sequence of positive real numbers. We have to show that there exist open intervals \( {J}_{n}, n \in \mathbb{N} \), such that \( \left| {J}_{n}\right| \leq {a}_{n} \) and \( f\left( A\right) \subseteq \mathop{\bigcup }\limits_{n}{J}_{n} \) . Since \( f \) is uniformly continuous, for each \( n \) there is a positive real number \( {b}_{n} \) such that whenever \( X \subseteq \left\lbrack {0,1}\right\rbrack \) is of diameter at most \( {b}_{n} \) , the diameter of \( f\left( X\right) \) is at most \( {a}_{n} \) . Since \( A \) has strong measure zero, there are open intervals \( {I}_{n}, n \in \mathbb{N} \), such that \( \left| {I}_{n}\right| \leq {b}_{n} \) and \( A \subseteq \mathop{\bigcup }\limits_{n}{I}_{n} \) . Take \( {J}_{n} = f\left( {I}_{n}\right) \) .
Here are some interesting questions on strong measure zero sets. Is there an uncountable set of reals that is not a strong measure zero set? Do all measure zero sets have strong measure zero? We consider the second question first.
Example 4.3.12 It is easy to see that there is no sequence \( \left( {I}_{n}\right) \) of open intervals such that the length of \( {I}_{n} \) is at most \( {3}^{-\left( {n + 1}\right) } \) and \( \left( {I}_{n}\right) \) covers the Cantor ternary set \( \mathcal{C} \) . Hence, \( \mathcal{C} \) is not a strong measure zero set. It follows that not all measure zero sets have strong measure zero.
From 4.3.12 and 4.3.11 we get the following interesting result.
Proposition 4.3.13 No set of reals containing a perfect set has strong measure zero.
The Borel conjecture [20]: No uncountable set of reals is a strong measure zero set.
From 4.3.13 and 4.3.5, we now have the following.
Proposition 4.3.14 No uncountable analytic \( A \subseteq \mathbb{R} \) has strong measure zero.
Thus, no analytic set can be a counterexample to the Borel conjecture. It has been shown that the Borel conjecture is independent of \( \mathbf{{ZFC}} \) . The proof of this is obviously beyond the scope of this book. We refer the interested reader to [9]. Here, under the continuum hypothesis, we give an example of an uncountable strong measure zero set.
Exercise 4.3.15 (i) Show that there is a set \( A \) of reals of cardinality \( \mathfrak{c} \) such that \( A \cap C \) is countable for every closed, nowhere dense set \( C \) . (Such a set \( A \) is called a Lusin set.)
(ii) Show that every Lusin set is a strong measure zero set.
Does \( \mathbf{{CH}} \) hold for coanalytic sets? This cannot be decided in \( \mathbf{{ZFC}} \) . However, in \( \mathbf{{ZFC}} \) we can say something about the cardinalities of coanalytic sets: a coanalytic set is either countable or of cardinality \( {\aleph }_{1} \) or \( \mathfrak{c} \) . We prove these facts now.
Let \( T \) be a well-founded tree on \( \mathbb{N} \) . Recall the definition of the rank function \( {\rho }_{T} : T \rightarrow \mathbf{{ON}} \) given in Chapter 1:
\[
{\rho }_{T}\left( u\right) = \sup \left\{ {{\rho }_{T}\left( v\right) + 1 : u \prec v, v \in T}\right\}, u \in T.
\]
(We take \( \sup \left( \varnothing \right) = 0 \) .) Note that \( {\rho }_{T}\left( u\right) = 0 \) if \( u \) is terminal in \( T \) .
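For a finite well-founded tree this recursion can be computed directly. The sketch below (an illustration, not from the text) represents a tree as a prefix-closed set of tuples:

```python
def rank(tree, u=()):
    # Children of u in the tree: extensions of u by one more coordinate.
    children = [v for v in tree if len(v) == len(u) + 1 and v[:len(u)] == u]
    # sup over the empty set is 0, so terminal nodes get rank 0.
    return max((rank(tree, v) + 1 for v in children), default=0)

# A small well-founded tree on N, given as a prefix-closed set of tuples.
T = {(), (0,), (1,), (0, 0), (0, 1), (0, 0, 0)}
assert rank(T, (0, 0, 0)) == 0   # terminal node
assert rank(T, (0,)) == 2
assert rank(T) == 3              # rank of the empty sequence e
```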
We extend this notion for ill-founded trees too. Let \( T \) be an ill-founded tree and \( s \in {\mathbb{N}}^{ < \mathbb{N}} \) . Define
\[
{\rho }_{T}\left( s\right) = \left\{ \begin{array}{ll} 0 & \text{ if }s \notin T, \\ {\rho }_{{T}_{s}}\left( e\right) & \text{ if }s \in T\& {T}_{s}\text{ is well-founded,} \\ {\omega }_{1} & \text{ otherwise. } \end{array}\right.
\]
Note that \( T \) is well-founded if and only if \( {\rho }_{T}\left( e\right) < {\omega }_{1} \) .
Lemma 4.3.16 Let \( T \) be a tree on \( \mathbb{N} \times \mathbb{N} \) and \( \xi < {\omega }_{1} \) . For every \( s \in {\mathbb{N}}^{ < \mathbb{N}} \) ,
\[
{C}_{s}^{\xi } = \left\{ {\alpha \in {\mathbb{N}}^{\mathbb{N}} : {\rho }_{T\left\lbrack \alpha \right\rbrack }\left( s\right) \leq \xi }\right\}
\]
is Borel.
Proof. We prove the result by induction on \( \xi \) . Note that
\[
{C}_{s}^{0} = \left\{ {\alpha \in {\mathbb{N}}^{\mathbb{N}} : \forall i\left( {\left( {\alpha \mid \left( {\left| s\right| + 1}\right), s\widehat{}i}\right) \notin T}\right) }\right\} .
\]
So, \( {C}_{s}^{0} \) is Borel (in fact closed) for all \( s \) . Since for any countable ordinal \( \xi > 0 \)
\[
{C}_{s}^{\xi } = \mathop{\bigcap }\limits_{i}\mathop{\bigcup }\limits_{{\eta < \xi }}{C}_{{s}^{ \frown }i}^{\eta }
\]
the proof is easily completed by transfinite induction.
Theorem 4.3.17 Every coanalytic set is a union of \( {\aleph }_{1} \) Borel sets.
Proof. Let \( X \) be Polish and \( C \subseteq X \) coanalytic. By the Borel isomorphism theorem (3.3.13), without any loss of generality we may assume that \( X = {\mathbb{N}}^{\mathbb{N}} \) . By 4.1.20, there is a tree \( T \) on \( \mathbb{N} \times \mathbb{N} \) such that
\[
\alpha \in C \Leftrightarrow T\left\lbrack \alpha \right\rbrack \text{is well-founded.}
\]
So,
\[
\alpha \in C \Leftrightarrow {\rho }_{T\left\lbrack \alpha \right\rbrack }\left( e\right) < {\omega }_{1}
\]
Therefore,
\[
C = \mathop{\bigcup }\limits_{{\xi < {\omega }_{1}}}{C}_{e}^{\xi }
\]
where the \( {C}_{e}^{\xi } \) are as in 4.3.16.
The sets \( {C}_{e}^{\xi },\xi < {\omega }_{1} \), defined in the above proof are called the constituents of \( C \) . Since \( \mathbf{{CH}} \) holds for Borel sets, we now have the following result.
Theorem 4.3.18 A coanalytic set is either countable or of cardinality \( {\aleph }_{1} \) or \( \mathfrak{c} \) .
The following question remains: Does \( \mathbf{{CH}} \) hold for coanalytic sets? A related question: Is there an uncountable coanalytic set that does not contain a perfect set (equivalently, an uncountable Borel set)? Gödel [45] showed that in the universe \( L \) of constructible sets, which is a model of \( \mathbf{{ZFC}} \), there is an uncountable coanalytic set that does not contain a perfect set. (See also [49], p. 529.) On the other hand, under "analytic determinacy" ([53], p. 206) every uncountable coanalytic set contains a perfect set. Hence under this hypothesis every uncountable coanalytic set is of cardinality \( \mathfrak{c} \) . "Analytic determinacy" can be proved from the existence of large cardinals. Thus, the statement "there is an uncountable coanalytic set not containing a perfect set" cannot be decided in \( \mathbf{{ZFC}} \) . Any further discussion of this topic is beyond the scope of these notes.
## 4.4 The First Separation Theorem
The separation theorems and the dual results - the reduction theorems - are among the most important results on analytic and coanalytic sets, with far-reaching consequences for Borel sets.
Theorem 4.4.1 (The first separation theorem for analytic sets) Let \( A \) and \( B \) be disjoint analytic subsets of a Polish space \( X \) . Then there is a Borel set \( C \) such that
\[
A \subseteq C\text{ and }B\bigcap C = \varnothing .\;\left( *\right)
\]
(If \( \left( *\right) \) is satisfied, we say that \( C \) separates \( A \) from \( B \) .)
The proof of this theorem is based on the following combinatorial lemma.
Lemma 4.4.2 Suppose \( E = \mathop{\bigcup }\limits_{n}{E}_{n} \) cannot be separated from \( F = \mathop{\bigcup }\limits_{m}{F}_{m} \) by a Borel set. Then there exist \( m, n \) such that \( {E}_{n} \) cannot be separated from \( {F}_{m} \) by a Borel set.
Proof. Suppose for every \( m, n \) there is a Borel set \( {C}_{mn} \) such that
\[
{E}_{n} \subseteq {C}_{mn}\text{ and }{F}_{m}\bigcap {C}_{mn} = \varnothing .
\]
It is fairly easy to check that the Borel set
\[
C = \mathop{\bigcup }\limits_{n}\mathop{\bigcap }\limits_{m}{C}_{mn}
\]
separates \( E \) from \( F \) .
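The set-theoretic step can be illustrated with finite toy sets, with containment playing the role of Borel separation (an illustration only, not the lemma itself):

```python
universe = set(range(10))
E = [{1, 2}, {3}]          # E = E_0 union E_1
F = [{5}, {6, 7}]          # F = F_0 union F_1
# A separating family: C_mn contains E_n and misses F_m.
C = {(m, n): universe - F[m] for m in range(len(F)) for n in range(len(E))}

# C = union over n of the intersection over m of C_mn separates E from F.
sep = set()
for n in range(len(E)):
    inter = set(universe)
    for m in range(len(F)):
        inter &= C[(m, n)]
    sep |= inter

assert set().union(*E) <= sep
assert not (set().union(*F) & sep)
```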
Proof of 4.4.1. Let \( A \) and \( B \) be two disjoint analytic subsets of \( X \) . Suppose there is no Borel set \( C \) such that
\[
A \subseteq C\text{ and }B\bigcap C = \varnothing .
\]
We shall get a contradiction. Let \( f : {\mathbb{N}}^{\mathbb{N}} \rightarrow A \) and \( g : {\mathbb{N}}^{\mathbb{N}} \rightarrow B \) be continuous surjections. We shall get \( \alpha ,\beta \in {\mathbb{N}}^{\mathbb{N}} \) such that \( f\left( {\sum \left( {\alpha \mid n}\right) }\right) \) cannot be separated from \( g\left( {\sum \left( {\beta \mid n}\right) }\right) \) by a Borel set for any \( n \in \mathbb{N} \) .
We first complete the proof assuming that \( \alpha ,\beta \) satisfying the above properties have been defined. Since \( A \) and \( B \) are disjoint, \( f\left( \alpha \right) \neq g\left( \beta \right) \) . Since \( f \) and \( g \) are continuous, there exist disjoint open sets \( U \) and \( V \) containing \( f\left( \alpha \right) \) and \( g\left( \beta \right) \) respectively. By the continuity of \( f \) and \( g \), there exists an \( n \in \) \( \mathbb{N} \) such that \( f\left( {\sum \left( {\alpha \mid n}\right) }\right) \subseteq U \) and \( g\left( {\sum \left( {\beta \mid n}\right) }\right) \subseteq V \) . In particular, \( f\left( {\sum \left( {\alpha \mid n}\right) }\right) \) is separated from \( g\left( {\sum \left( {\beta \mid n}\right) }\right) \) by a Borel set. This is a contradiction.
Definition of \( \alpha ,\beta \) : We proceed by induction.
Since \( A = \bigcup f\left( {\sum \left( n\right) }\right) \) and \( B = \bigcup g\left( {\sum \left( m\right) }\right) \), by 4.4.2 there exist \( \alpha \left( 0\right) \) and \( \beta \left( 0\right) \) such that \( f\left( {\sum \left( {\alpha \left( 0\right) }\right) }\right) \) cannot be separated from \( g\left( {\sum \left( {\beta \left( 0\right) }\right) }\right) \) by a Borel set. Suppose \( \alpha \left( 0\right) ,\alpha \left( 1\right)
Example 17.9. Let \( G \) be a compact Abelian group and let \( {L}_{a} \) be the Koopman operator induced by the rotation by \( a \in G \) . Since every character \( \chi \in {G}^{ * } \) is an eigenfunction of \( {L}_{a} \) corresponding to the eigenvalue \( \chi \left( a\right) \in \mathbb{T} \) and since \( \operatorname{lin}{G}^{ * } \) is dense in \( \mathrm{C}\left( G\right) \) (Proposition 14.7), \( {L}_{a} \) has discrete spectrum on \( \mathrm{C}\left( G\right) \) . A fortiori, \( {L}_{a} \) has discrete spectrum also on \( {\mathrm{L}}^{p}\left( G\right) \) for every \( 1 \leq p < \infty \) .
Example 17.10. Let \( T \) be the Koopman operator of an ergodic measure-preserving system \( \left( {\mathrm{X};\varphi }\right) \) such that \( {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) is not finite-dimensional. Then \( T \) is not mean ergodic on \( {\mathrm{L}}^{\infty } \) by Proposition 12.28. A fortiori, \( T \) does not have discrete spectrum on \( {\mathrm{L}}^{\infty } \) .
In particular, the Koopman operator \( {L}_{a} \) of an irrational rotation \( \left( {\mathbb{T};a}\right) \) has discrete spectrum on \( {\mathrm{L}}^{2} \) but not on \( {\mathrm{L}}^{\infty } \) .
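The discrete-spectrum claim for rotations rests on the eigenfunction identity \( {L}_{a}\chi = \chi \left( a\right) \chi \) . A quick numerical sketch (an illustration only, not part of the argument) for the rotation \( x \mapsto x + a \) on \( \mathbb{T} \cong \lbrack 0,1) \):

```python
import numpy as np

a = np.sqrt(2) % 1.0                # an irrational rotation angle
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
for n in (1, 2, 5):
    chi = np.exp(2j * np.pi * n * x)                     # character chi_n(x)
    L_a_chi = np.exp(2j * np.pi * n * ((x + a) % 1.0))   # (L_a chi_n)(x) = chi_n(x + a)
    eig = np.exp(2j * np.pi * n * a)                     # eigenvalue chi_n(a)
    assert np.allclose(L_a_chi, eig * chi)
```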
Suppose now that \( \left( {\mathrm{X};\varphi }\right) \) is an ergodic measure-preserving system with discrete spectrum. By the second part of Theorem 17.6, the system is Markov isomorphic to a rotation system \( \left( {G,\mathrm{\;m};a}\right) \) for a compact Abelian group \( G \) with Haar measure \( \mathrm{m} \) and some element \( a \in G \) . As the rotation system must be ergodic, too, the group \( G \) is monothetic with \( a \) being a generating element (Propositions 10.13 and 14.21). By Proposition 14.22, the dual \( {G}^{ * } \) of \( G \) is isomorphic to the subgroup
\[
\Gamma \mathrel{\text{:=}} \left\{ {\chi \left( a\right) : \chi \in {G}^{ * }}\right\} \subseteq \mathbb{T}.
\]
Under this isomorphism, by the Pontryagin duality theorem (Theorem 14.14), \( G \cong \) \( {\Gamma }^{ * } \) with \( a \in G \) corresponding to the canonical inclusion map \( \Gamma \rightarrow \mathbb{T},\chi \mapsto \chi \left( a\right) \) . Note that, by Proposition 14.24, \( \Gamma = {\sigma }_{\mathrm{p}}\left( {L}_{a}\right) \) is the point spectrum of the Koopman operator. Hence, the rotation system \( \left( {G,\mathrm{\;m};a}\right) \) can be determined from the original system \( \left( {\mathrm{X};\varphi }\right) \) in the following way:
1) Form \( \Gamma \mathrel{\text{:=}} {\sigma }_{\mathrm{p}}\left( {T}_{\varphi }\right) \), where \( {T}_{\varphi } \) is the Koopman operator of \( \left( {\mathrm{X};\varphi }\right) \) . Then \( \Gamma \) is a subgroup of \( \mathbb{T} \) .
2) Define \( G \mathrel{\text{:=}} {\Gamma }^{ * } \), the dual group of \( \Gamma \) . This is a compact Abelian group.
3) Let \( a \in G \) be the canonical inclusion map \( \Gamma \rightarrow \mathbb{T} \) .
4) Then \( \left( {\mathrm{X};\varphi }\right) \) is isomorphic to \( \left( {G,\mathrm{\;m};a}\right) \) .
In effect, we have proved the following fundamental result.
Theorem 17.11 (Halmos-von Neumann). Each ergodic measure-preserving system with discrete spectrum is isomorphic to an ergodic rotation system on a compact monothetic group.
More precisely, let \( \left( {\mathrm{X};\varphi }\right) \) be an ergodic measure-preserving system with discrete spectrum. Then the set \( \Gamma \) of unimodular eigenvalues of the associated Koopman operator is a subgroup of \( \mathbb{T} \), and \( \left( {\mathrm{X};\varphi }\right) \) is isomorphic to the rotation system \( \left( {G,\mathrm{\;m};a}\right) \), where \( G = {\Gamma }^{ * } \) is the dual group and \( a \in G \) is the canonical inclusion map \( \Gamma \rightarrow \mathbb{T} \) .
This theorem is of considerable interest. We therefore give now a direct proof of the Halmos-von Neumann theorem not relying on the Jacobs-de Leeuw-Glicksberg theory. We follow the steps 1)-4) from above.
Direct proof of Theorem 17.11. Let \( T \) be the Koopman operator of the ergodic system \( \left( {\mathrm{X};\varphi }\right) \) with discrete spectrum, and let \( \Gamma \mathrel{\text{:=}} {\sigma }_{\mathrm{p}}\left( T\right) \) be its point spectrum. By Proposition 7.18, each eigenvalue is unimodular and simple, and \( \Gamma \) is a subgroup of \( \mathbb{T} \) . Each eigenfunction is unimodular up to a multiplicative constant.
As a product of unimodular eigenfunctions is again a unimodular eigenfunction, the set
\[
A \mathrel{\text{:=}} {\mathrm{{cl}}}_{{\mathrm{L}}^{\infty }}\mathop{\bigcup }\limits_{{\lambda \in \Gamma }}\ker \left( {\lambda \mathrm{I} - T}\right)
\]
is a unital \( {C}^{ * } \) -subalgebra of \( {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) . By the Gelfand-Naimark theorem we may hence suppose that \( \mathrm{X} = \left( {K,\mu }\right) \) is a compact probability space, \( \mu \) has full support, \( \varphi : K \rightarrow K \) is continuous, and the unimodular eigenfunctions generate \( \mathrm{C}\left( K\right) \) .
The Koopman operator is mean ergodic on \( \mathrm{C}\left( K\right) \), since it is mean ergodic on each eigenspace and the linear span of the eigenspaces is dense in \( \mathrm{C}\left( K\right) \) . Moreover, fix \( \left( T\right) \) is one-dimensional (by ergodicity of \( \left( {K,\mu ;\varphi }\right) \) and since \( \mu \) has full support). By Theorem 10.6, the topological system is uniquely ergodic, i.e., \( \mu \) is the unique \( \varphi \) - invariant probability measure on \( K \) . Since \( \mu \) has full support, \( \left( {K;\varphi }\right) \) is even strictly ergodic. Hence, by Corollary 10.9, \( \left( {K;\varphi }\right) \) is minimal.
Now fix \( {x}_{0} \in K \) . For each \( \lambda \in \Gamma \) let \( {f}_{\lambda } \in \mathrm{C}\left( K\right) \) be the unique(!) function that satisfies \( T{f}_{\lambda } = \lambda {f}_{\lambda } \) and \( {f}_{\lambda }\left( {x}_{0}\right) = 1 \) . Define
\[
\Phi : K \rightarrow H \mathrel{\text{:=}} {\mathbb{T}}^{\Gamma },\;\Phi \left( x\right) = {\left( {f}_{\lambda }\left( x\right) \right) }_{\lambda \in \Gamma }.
\]
Then \( H \) is a compact Abelian group and \( \Phi \) is continuous and injective (since the functions \( {f}_{\lambda } \) separate the points). Moreover, if \( a \mathrel{\text{:=}} {\left( \lambda \right) }_{\lambda \in \Gamma } \) is the inclusion map \( \Gamma \rightarrow \mathbb{T} \), then \( \Phi \left( {\varphi \left( x\right) }\right) = {a\Phi }\left( x\right) \) for all \( x \in K \) . It follows that
\[
\Phi : \left( {K;\varphi }\right) \rightarrow \left( {H;a}\right)
\]
is an injective homomorphism of topological dynamical systems. Since \( \Phi \left( {x}_{0}\right) = {1}_{H} \) and \( \operatorname{orb}\left( {x}_{0}\right) \) is dense in \( K \) (by minimality),
\[
G \mathrel{\text{:=}} \Phi \left( K\right) = {\overline{\operatorname{orb}}}_{ + }\left( {1}_{H}\right) = \operatorname{cl}\left\{ {{a}^{n} : n \geq 0}\right\}
\]
is a monothetic subgroup of \( H \), and \( \Phi : \left( {K;\varphi }\right) \rightarrow \left( {G;a}\right) \) is an isomorphism of topological systems. The push-forward measure \( {\Phi }_{ * }\mu \) is invariant, hence it is the Haar measure. Therefore
\[
\Phi : \left( {K,\mu ;\varphi }\right) \rightarrow \left( {G,\mathrm{\;m};a}\right)
\]
is an isomorphism of measure-preserving systems.
As a last step, we show that \( G = {\Gamma }^{ * } \) . Note that, by uniqueness, \( {f}_{\lambda \cdot \eta } = {f}_{\lambda } \cdot {f}_{\eta } \) for all \( \lambda ,\eta \in \Gamma \) . Hence, every \( \Phi \left( x\right), x \in K \), is actually a character of \( \Gamma \), i.e., \( \Phi \left( K\right) = G \subseteq {\Gamma }^{ * } \subseteq H \) . Conversely, suppose that \( \lambda \in \Gamma \) is such that every element of \( G \) is trivial on \( \lambda \) . Then \( {f}_{\lambda }\left( x\right) = 1 \) for all \( x \in K \), and in particular \( 1 = {f}_{\lambda }\left( {\varphi \left( {x}_{0}\right) }\right) = \lambda {f}_{\lambda }\left( {x}_{0}\right) = \lambda \) . By duality theory (Corollary 14.5 and Theorem 14.14) it follows that \( G = {\Gamma }^{ * } \) .
Let us turn to some consequences of the Halmos-von Neumann theorem. The first is another characterization of the Kronecker factor.
Corollary 17.12. Let \( \left( {\mathrm{X};\varphi }\right) \) be a measure-preserving system. Then \( \operatorname{Kro}\left( {\mathrm{X};\varphi }\right) \) is the largest factor of \( \left( {\mathrm{X};\varphi }\right) \) which is isomorphic to a compact group rotation system.
The isomorphism problem consists in determining complete isomorphism invariants for (ergodic) measure-preserving systems, see, for instance, Rédei and Werndl (2012) for a historical account, but cf. also Section 18.4.7 below. The following corollary of the Halmos-von Neumann theorem states that for the class of discrete spectrum systems the point spectrum of the Koopman operator is such a complete isomorphism invariant.
Corollary 17.13. Two ergodic measure-preserving systems with discrete spectrum are isomorphic if and only if the Koopman operators have the same point spectrum.
Two measure-preserving systems \( \left( {\mathrm{X};\varphi }\right) \) and \( \left( {\mathrm{Y};\psi }\right) \) are called spectrally isomorphic if their Koopman operators on the \( {\mathrm{L}}^{2} \) -spaces are unitarily equivalent, that is, if there is a Hilbert space isomorphism (a unitary operator) \( S : {\mathrm{L}}^{2}\left( \mathrm{X}\right) \rightarrow {\mathrm{L}}^{2}\left( \mathrm{Y}\right) \) intertwining the Koopman operators, i.e., \( S{T}_{\varphi } = {T}_{\psi }S \) .
Corollary 17.14. Two ergodic measure-preserving systems with discrete spectrum are (Markov) isomorphic if and only if they are spectrally isomorphic.
Proof. By Corollary 12.12 and by the remark following it, Markov isomorphic systems are spectrally isomorphic.
Conversely, if two ergodic measure-preserving systems are spectrally isomorph
Theorem 5. Let \( X = X\left( \omega \right) \) be a random element with values in the Borel space \( \left( {E,\mathcal{E}}\right) \) . Then there is a regular conditional distribution of \( X \) with respect to \( \mathcal{G} \subseteq \mathcal{F} \) .
Proof. Let \( \varphi = \varphi \left( e\right) \) be the function in Definition 9. By (2) in this definition \( \varphi \left( {X\left( \omega \right) }\right) \) is a random variable. Hence, by Theorem 4, we can define the conditional distribution \( Q\left( {\omega ;A}\right) \) of \( \varphi \left( {X\left( \omega \right) }\right) \) with respect to \( \mathcal{G}, A \in \varphi \left( E\right) \cap \mathcal{B}\left( R\right) \) .
We introduce the function \( \widetilde{Q}\left( {\omega ;B}\right) = Q\left( {\omega ;\varphi \left( B\right) }\right), B \in \mathcal{E} \) . By (3) of Definition 9, \( \varphi \left( B\right) \in \varphi \left( E\right) \cap \mathcal{B}\left( R\right) \) and consequently \( \widetilde{Q}\left( {\omega ;B}\right) \) is defined. Evidently \( \widetilde{Q}\left( {\omega ;B}\right) \) is a measure in \( B \in \mathcal{E} \) for every \( \omega \) . Now fix \( B \in \mathcal{E} \) . By the one-to-one character of the mapping \( \varphi = \varphi \left( e\right) \) ,
\[
\widetilde{Q}\left( {\omega ;B}\right) = Q\left( {\omega ;\varphi \left( B\right) }\right) = \mathsf{P}\{ \varphi \left( X\right) \in \varphi \left( B\right) \mid \mathcal{G}\} \left( \omega \right) = \mathsf{P}\{ X \in B \mid \mathcal{G}\} \left( \omega \right) \;\text{ (a. s.). }
\]
Therefore \( \widetilde{Q}\left( {\omega ;B}\right) \) is a regular conditional distribution of \( X \) with respect to \( \mathcal{G} \) .
This completes the proof of the theorem.
Corollary. Let \( X = X\left( \omega \right) \) be a random element with values in a complete separable metric space \( \left( {E,\mathcal{E}}\right) \) . Then there is a regular conditional distribution of \( X \) with respect to \( \mathcal{G} \) . In particular, such a distribution exists for the spaces \( \left( {{R}^{n},\mathcal{B}\left( {R}^{n}\right) }\right) \) and \( \left( {{R}^{\infty },\mathcal{B}\left( {R}^{\infty }\right) }\right) . \)
The proof follows from Theorem 5 and the well-known topological result that such spaces \( \left( {E,\mathcal{E}}\right) \) are Borel spaces.
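On a finite probability space a regular conditional distribution can be written out explicitly. The toy sketch below (all values are illustrative assumptions, not from the text) conditions on the \( \sigma \) -algebra generated by a partition:

```python
from fractions import Fraction

omega_space = [0, 1, 2, 3]
p = {w: Fraction(1, 4) for w in omega_space}   # uniform probability
X = {0: 'a', 1: 'b', 2: 'a', 3: 'a'}           # a random element
partition = [{0, 1}, {2, 3}]                   # generates the sub-sigma-algebra G

def Q(w, B):
    """Regular conditional distribution Q(w; B) = P(X in B | G)(w)."""
    cell = next(c for c in partition if w in c)
    return sum(p[v] for v in cell if X[v] in B) / sum(p[v] for v in cell)

assert Q(0, {'a'}) == Fraction(1, 2)       # on the cell {0, 1}
assert Q(2, {'a'}) == 1                    # on the cell {2, 3}
assert Q(1, {'a'}) + Q(1, {'b'}) == 1      # Q(w; .) is a probability measure
```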
8. The theory of conditional expectations developed above makes it possible to give a generalization of Bayes's theorem; this has applications in statistics.
Recall that if \( \mathcal{D} = \left\{ {{A}_{1},\ldots ,{A}_{n}}\right\} \) is a partition of the space \( \Omega \) with \( \mathrm{P}\left( {A}_{i}\right) > 0 \) , Bayes's theorem, see (9) in Sect. 3 of Chap. 1, states that
\[
\mathrm{P}\left( {{A}_{i} \mid B}\right) = \frac{\mathrm{P}\left( {A}_{i}\right) \mathrm{P}\left( {B \mid {A}_{i}}\right) }{\mathop{\sum }\limits_{{j = 1}}^{n}\mathrm{P}\left( {A}_{j}\right) \mathrm{P}\left( {B \mid {A}_{j}}\right) }
\]
(25)
for every \( B \) with \( \mathrm{P}\left( B\right) > 0 \) . Therefore if \( \theta = \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{I}_{{A}_{i}} \) is a discrete random variable then, according to (10) in Sect. 8 of Chap. 1,
\[
\mathrm{E}\left\lbrack {g\left( \theta \right) \mid B}\right\rbrack = \frac{\mathop{\sum }\limits_{{i = 1}}^{n}g\left( {a}_{i}\right) \mathrm{P}\left( {A}_{i}\right) \mathrm{P}\left( {B \mid {A}_{i}}\right) }{\mathop{\sum }\limits_{{j = 1}}^{n}\mathrm{P}\left( {A}_{j}\right) \mathrm{P}\left( {B \mid {A}_{j}}\right) },
\]
(26)
or
\[
\mathrm{E}\left\lbrack {g\left( \theta \right) \mid B}\right\rbrack = \frac{{\int }_{-\infty }^{\infty }g\left( a\right) \mathrm{P}\left( {B \mid \theta = a}\right) {P}_{\theta }\left( {da}\right) }{{\int }_{-\infty }^{\infty }\mathrm{P}\left( {B \mid \theta = a}\right) {P}_{\theta }\left( {da}\right) },
\]
(27)
where \( {P}_{\theta }\left( A\right) = \mathrm{P}\{ \theta \in A\} \) .
On the basis of the definition of \( \mathrm{E}\left\lbrack {g\left( \theta \right) \mid B}\right\rbrack \) given at the beginning of this section, it is easy to establish that (27) holds for all events \( B \) with \( \mathrm{P}\left( B\right) > 0 \), random variables \( \theta \) and functions \( g = g\left( a\right) \) with \( \mathrm{E}\left| {g\left( \theta \right) }\right| < \infty \) .
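Formula (26) is directly computable. A toy numerical sketch (prior and likelihood values are made up for illustration):

```python
a_vals = [0.0, 1.0, 2.0]       # values a_i of the discrete theta
prior = [0.5, 0.3, 0.2]        # P(A_i) = P(theta = a_i)
lik = [0.1, 0.4, 0.8]          # P(B | A_i)

def cond_expectation(g):
    """E[g(theta) | B] via formula (26)."""
    num = sum(g(ai) * p * l for ai, p, l in zip(a_vals, prior, lik))
    den = sum(p * l for p, l in zip(prior, lik))
    return num / den

post_mean = cond_expectation(lambda t: t)   # numerator 0.44, denominator 0.33
assert abs(post_mean - 0.44 / 0.33) < 1e-12
```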
We now consider an analog of (27) for conditional expectations \( \mathrm{E}\left\lbrack {g\left( \theta \right) \mid \mathcal{G}}\right\rbrack \) with respect to a \( \sigma \) -algebra \( \mathcal{G},\mathcal{G} \subseteq \mathcal{F} \) .
Let
\[
\mathbf{Q}\left( B\right) = {\int }_{B}g\left( {\theta \left( \omega \right) }\right) \mathbf{P}\left( {d\omega }\right) ,\;B \in \mathcal{G}.
\]
(28)
Then by (4)
\[
\mathrm{E}\left\lbrack {g\left( \theta \right) \mid \mathcal{G}}\right\rbrack \left( \omega \right) = \frac{d\mathrm{Q}}{d\mathrm{P}}\left( \omega \right)
\]
(29)
We also consider the \( \sigma \) -algebra \( {\mathcal{G}}_{\theta } \) . Then, by (5),
\[
\mathrm{P}\left( B\right) = {\int }_{\Omega }\mathrm{P}\left( {B \mid {\mathcal{G}}_{\theta }}\right) d\mathrm{P}
\]
(30)
or, by the formula for change of variable in Lebesgue integrals,
\[
\mathrm{P}\left( B\right) = {\int }_{-\infty }^{\infty }\mathrm{P}\left( {B \mid \theta = a}\right) {P}_{\theta }\left( {da}\right) .
\]
(31)
Since
\[
\mathrm{Q}\left( B\right) = \mathrm{E}\left\lbrack {g\left( \theta \right) {I}_{B}}\right\rbrack = \mathrm{E}\left\lbrack {g\left( \theta \right) \cdot \mathrm{E}\left( {{I}_{B} \mid {\mathcal{G}}_{\theta }}\right) }\right\rbrack
\]
we have
\[
\mathrm{Q}\left( B\right) = {\int }_{-\infty }^{\infty }g\left( a\right) \mathrm{P}\left( {B \mid \theta = a}\right) {P}_{\theta }\left( {da}\right) .
\]
(32)
Now suppose that the conditional probability \( \mathrm{P}\left( {B \mid \theta = a}\right) \) is regular and admits the representation
\[
\mathrm{P}\left( {B \mid \theta = a}\right) = {\int }_{B}\rho \left( {\omega ;a}\right) \lambda \left( {d\omega }\right)
\]
(33)
where \( \rho = \rho \left( {\omega ;a}\right) \) is nonnegative and measurable in the two variables jointly, and \( \lambda \) is a \( \sigma \) -finite measure on \( \left( {\Omega ,\mathcal{G}}\right) \) .
Let \( \mathrm{E}\left| {g\left( \theta \right) }\right| < \infty \) . Let us show that (P-a. s.)
\[
\mathrm{E}\left\lbrack {g\left( \theta \right) \mid \mathcal{G}}\right\rbrack \left( \omega \right) = \frac{{\int }_{-\infty }^{\infty }g\left( a\right) \rho \left( {\omega ;a}\right) {P}_{\theta }\left( {da}\right) }{{\int }_{-\infty }^{\infty }\rho \left( {\omega ;a}\right) {P}_{\theta }\left( {da}\right) }
\]
(34)
(generalized Bayes theorem).
In proving (34) we shall need the following lemma.
Lemma. Let \( \left( {\Omega ,\mathcal{F}}\right) \) be a measurable space.
(a) Let \( \mu \) and \( \lambda \) be \( \sigma \) -finite measures, \( \mu \ll \lambda \), and \( f = f\left( \omega \right) \) an \( \mathcal{F} \) -measurable function. Then
\[
{\int }_{\Omega }{fd\mu } = {\int }_{\Omega }f\frac{d\mu }{d\lambda }{d\lambda }
\]
(35)
(in the sense that if either integral exists, the other exists and they are equal). (b) If \( v \) is a signed measure and \( \mu ,\lambda \) are \( \sigma \) -finite measures, \( v \ll \mu ,\mu \ll \lambda \), then
\[
\frac{dv}{d\lambda } = \frac{dv}{d\mu } \cdot \frac{d\mu }{d\lambda }\;\left( {\lambda \text{-a.s. }}\right)
\]
(36)
and
\[
\frac{dv}{d\mu } = \frac{dv}{d\lambda }/\frac{d\mu }{d\lambda }\;\left( {\mu \text{-a.s. }}\right)
\]
(37)
Proof. (a) Since
\[
\mu \left( A\right) = {\int }_{A}\left( \frac{d\mu }{d\lambda }\right) {d\lambda },\;A \in \mathcal{F},
\]
(35) is evidently satisfied for simple functions \( f = \sum {f}_{i}{I}_{{A}_{i}} \) . The general case follows from the representation \( f = {f}^{ + } - {f}^{ - } \) and the monotone convergence theorem (cf. the proof of (39) in Sect. 6).
(b) From (a) with \( f = {dv}/{d\mu } \) we obtain
\[
v\left( A\right) = {\int }_{A}\left( \frac{dv}{d\mu }\right) {d\mu } = {\int }_{A}\left( \frac{dv}{d\mu }\right) \left( \frac{d\mu }{d\lambda }\right) {d\lambda }
\]
Then \( \nu \ll \lambda \) and therefore
\[
v\left( A\right) = {\int }_{A}\frac{d\nu }{d\lambda }{d\lambda }
\]
whence (36) follows since \( A \) is arbitrary, by Property \( \mathbf{I} \) (Sect. 6).
Property (37) follows from (36) and the remark that
\[
\mu \left\{ {\omega : \frac{d\mu }{d\lambda } = 0}\right\} = {\int }_{\{ \omega : {d\mu }/{d\lambda } = 0\} }\frac{d\mu }{d\lambda }{d\lambda } = 0
\]
(on the set \( \{ \omega : {d\mu }/{d\lambda } = 0\} \) the right-hand side of (37) can be defined arbitrarily, for example as zero). This completes the proof of the lemma.
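On a finite space, where measures are weight vectors and Radon-Nikodym derivatives are coordinatewise ratios, the chain rule (36) can be checked directly (a toy sketch, not the proof):

```python
import numpy as np

lam = np.array([1.0, 2.0, 1.0, 0.5, 1.0])        # lambda: a positive weight vector
dmu_dlam = np.array([0.0, 1.0, 2.0, 4.0, 0.5])   # density d(mu)/d(lambda)
dnu_dmu = np.array([0.0, 3.0, 0.5, 1.0, 2.0])    # density d(nu)/d(mu)
mu = dmu_dlam * lam                              # so mu << lam
nu = dnu_dmu * mu                                # and nu << mu
# (36): d(nu)/d(lambda) = d(nu)/d(mu) * d(mu)/d(lambda), lambda-a.e.
assert np.allclose(nu, dnu_dmu * dmu_dlam * lam)
```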
To prove (34) we observe that by Fubini's theorem and (33),
\[
\mathbf{Q}\left( B\right) = {\int }_{B}\left\lbrack {{\int }_{-\infty }^{\infty }g\left( a\right) \rho \left( {\omega ;a}\right) {P}_{\theta }\left( {da}\right) }\right\rbrack \lambda \left( {d\omega }\right) ,
\]
(38)
\[
\mathrm{P}\left( B\right) = {\int }_{B}\left\lbrack {{\int }_{-\infty }^{\infty }\rho \left( {\omega ;a}\right) {P}_{\theta }\left( {da}\right) }\right\rbrack \lambda \left( {d\omega }\right) .
\]
(39)
Then by the lemma
\[
\frac{d\mathrm{Q}}{d\mathrm{P}} = \frac{d\mathrm{Q}/{d\lambda }}{d\mathrm{P}/{d\lambda }}\;\text{ (P-a. s.). }
\]
Taking account of (38), (39) and (29), we have (34).
Remark 7. Formula (34) remains valid if we replace \( \theta \) by a random element with values in some measurable space \( \left( {E,\mathcal{E}}\right) \) (and replace integration over \( R \) by integration over \( E \) ).
Let us consider some special cases of (34).
Let the \( \sigma \) -algebra \( \mathcal{G} \) be generated by the random variable \( \xi ,\mathcal{G} = {\mathcal{G}}_{\xi } \) . Suppose that
\[
\mathrm{P}\left( {\xi \in A \mid \theta = a}\right) = {\int }_{A}q\left( {x;a}\right) \lambda \left( {dx}\right) ,\;A \in \mathcal{B}\left( R\right) ,
\]
(40)
where
Theorem 5. Let \( X = X\left( \omega \right) \) be a random element with values in the Borel space \( \left( {E,\mathcal{E}}\right) \) . Then there is a regular conditional distribution of \( X \) with respect to \( \mathcal{G} \subseteq \mathcal{F} \) .
Let \( \varphi = \varphi \left( e\right) \) be the function in Definition 9. By (2) in this definition \( \varphi \left( {X\left( \omega \right) }\right) \) is a random variable. Hence, by Theorem 4, we can define the conditional distribution \( Q\left( {\omega ;A}\right) \) of \( \varphi \left( {X\left( \omega \right) }\right) \) with respect to \( \mathcal{G}, A \in \varphi \left( E\right) \cap \mathcal{B}\left( R\right) \) .
We introduce the function \( \widetilde{Q}\left( {\omega ;B}\right) = Q\left( {\omega ;\varphi \left( B\right) }\right), B \in \mathcal{E} \) . By (3) of Definition 9, \( \varphi \left( B\right) \in \varphi \left( E\right) \cap \mathcal{B}\left( R\right) \) and consequently \( \widetilde{Q}\left( {\omega ;B}\right) \) is defined. Evidently \( \widetilde{Q}\left( {\omega ;B}\right) \) is a measure in \( B \in \mathcal{E} \) for every \( \omega \) . Now fix \( B \in \mathcal{E} \) . By the one-to-one character of the mapping \( \varphi = \varphi \left( e\right) \) ,
\[
\widetilde{Q}\left( {\omega ;B}\right) = Q\left( {\omega ;\varphi \left( B\right) }\right) = \mathsf{P}\{ \varphi \left( X\right) \in \varphi \left( B\right) \mid \mathcal{G}\} = \mathsf{P}\{ X \in B \mid \mathcal{G}\} .
\]
Proposition 3.1. Let \( \left( {U, x}\right) \) be a chart around \( p \) . Then any tangent vector \( v \in {M}_{p} \) can be uniquely written as a linear combination \( v = \mathop{\sum }\limits_{i}{\alpha }_{i}\partial /\partial {x}^{i}\left( p\right) \) . In fact, \( {\alpha }_{i} = v\left( {x}^{i}\right) \) .
Thus, \( {M}_{p}^{n} \) is an \( n \) -dimensional vector space with basis \( {\left\{ \partial /\partial {x}^{i}\left( p\right) \right\} }_{1 \leq i \leq n} \) .
Proof. We may assume without loss of generality that \( x\left( p\right) = 0 \), and that \( x\left( U\right) \) is star-shaped. By Lemma 3.1, any \( f \in \mathcal{F}M \) satisfies \( f \circ {x}^{-1} = \) \( f\left( p\right) + \sum {u}^{i}{\psi }_{i} \), with \( {\psi }_{i}\left( 0\right) = \partial /\partial {x}^{i}\left( p\right) \left( f\right) \) . Thus, \( {\left. f\right| }_{U} = f\left( p\right) + \mathop{\sum }\limits_{i}{x}^{i}\left( {{\psi }_{i} \circ x}\right) {\left. \right| }_{U} \) , and
\[
v\left( f\right) = v\left( {f\left( p\right) }\right) + \mathop{\sum }\limits_{i}\left\lbrack {v\left( {x}^{i}\right) \cdot {\psi }_{i}\left( 0\right) + {x}^{i}\left( p\right) \cdot v\left( {{\psi }_{i} \circ x}\right) }\right\rbrack = \mathop{\sum }\limits_{i}v\left( {x}^{i}\right) \frac{\partial }{\partial {x}^{i}}\left( p\right) \left( f\right) ,
\]
where we have used the result of Exercise 5 below. It remains to show that the \( \partial /\partial {x}^{i}\left( p\right) \) are linearly independent; observe that
\[
\frac{\partial }{\partial {x}^{i}}\left( p\right) \left( {x}^{j}\right) = {D}_{i}\left( {{x}^{j} \circ {x}^{-1}}\right) \left( 0\right) = {D}_{i}\left( {u}^{j}\right) \left( 0\right) = {\delta }_{ij}.
\]
Thus, if \( \sum {\alpha }_{i}\partial /\partial {x}^{i}\left( p\right) = 0 \), then \( 0 = \sum {\alpha }_{i}\partial /\partial {x}^{i}\left( p\right) \left( {x}^{j}\right) = {\alpha }_{j} \) .
Notice that if \( x \) and \( y \) are two coordinate systems at \( p \), then taking \( v = \) \( \partial /\partial {y}^{i}\left( p\right) \) in Proposition 3.1 yields
(3.2)
\[
\frac{\partial }{\partial {y}^{i}}\left( p\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\partial {x}^{j}}{\partial {y}^{i}}\left( p\right) \frac{\partial }{\partial {x}^{j}}\left( p\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{D}_{i}\left( {{u}^{j} \circ x \circ {y}^{-1}}\right) \left( {y\left( p\right) }\right) \frac{\partial }{\partial {x}^{j}}\left( p\right)
\]
for \( 1 \leq i \leq n \) . This means that the transition matrix from the basis \( \left\{ {\partial /\partial {x}^{i}\left( p\right) }\right\} \) to the basis \( \left\{ {\partial /\partial {y}^{i}\left( p\right) }\right\} \) is the Jacobian matrix of \( x \circ {y}^{-1} \) at \( y\left( p\right) \) .
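For a concrete illustration of (3.2), take \( x \) to be the Cartesian chart and \( y \) the polar chart on a region of the plane, so that \( x \circ {y}^{-1}\left( {r, t}\right) = \left( {r\cos t, r\sin t}\right) \). The entries \( {D}_{i}\left( {{u}^{j} \circ x \circ {y}^{-1}}\right) \left( {y\left( p\right) }\right) \) can then be approximated by finite differences and compared with the known Jacobian (the chosen point and step size below are illustrative only):

```python
import math

# x = Cartesian chart, y = polar chart: (x o y^{-1})(r, t) = (r cos t, r sin t).
# Columns of its Jacobian at y(p) give d/dy^i in the basis {d/dx^j}, per (3.2).
def x_of_y(r, t):
    return (r * math.cos(t), r * math.sin(t))

y_p = (2.0, 0.7)      # y(p): the point in polar coordinates (illustrative)
h = 1e-6

def column(i):
    # Central finite difference: i-th column D_i(x o y^{-1})(y(p)).
    plus = list(y_p); plus[i] += h
    minus = list(y_p); minus[i] -= h
    fp, fm = x_of_y(*plus), x_of_y(*minus)
    return [(a - b) / (2 * h) for a, b in zip(fp, fm)]

d_dr = column(0)      # expect (cos t, sin t)
d_dt = column(1)      # expect (-r sin t, r cos t)
```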
EXERCISE 5. Let \( c \in \mathbb{R} \) . Show that if \( c \in \mathcal{F}M \) denotes the constant function \( c\left( p\right) : \equiv c \) for all \( p \in M \), then \( v\left( c\right) = 0 \) for any tangent vector \( v \) at any point of \( M \) .
EXERCISE 6. Write down (3.2) explicitly for the \( n \) -sphere of radius \( r \), if \( x \) and \( y \) denote stereographic projections.
## 4. The Derivative
In calculus, one usually thinks of the Jacobian \( {Df}\left( p\right) \) of \( f : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{k} \) as the derivative of \( f \) at \( p \) . It is therefore natural, when seeking a meaningful generalization of this concept for a map \( f : M \rightarrow N \) between manifolds \( M \) and \( N \), to look for a linear transformation. In view of the previous section, where we defined vector spaces at each point of a manifold, this suggests a linear transformation \( {f}_{*p} : {M}_{p} \rightarrow {N}_{f\left( p\right) } \) between the respective tangent spaces. We would of course like \( {f}_{*p} \) to correspond to \( {Df}\left( p\right) \) when \( M = {\mathbb{R}}^{n} \) and \( N = {\mathbb{R}}^{k} \), if \( {\mathbb{R}}_{p}^{n} \) is identified with the set of pairs \( \left( {p, v}\right), v \in {\mathbb{R}}^{n} \) ; i.e., we require that \( {f}_{*p}\left( {p, v}\right) = \left( {f\left( p\right) ,{Df}\left( p\right) v}\right) \) for all \( v \in {\mathbb{R}}^{n} \) . Now, if \( \phi : {\mathbb{R}}^{k} \rightarrow \mathbb{R} \) is differentiable, then by the chain rule,
\[
{f}_{*p}\left( {p, v}\right) \left( \phi \right) = \left( {f\left( p\right) ,{Df}\left( p\right) v}\right) \left( \phi \right) = {D}_{{Df}\left( p\right) v}\phi \left( {f\left( p\right) }\right) = {D\phi }\left( {f\left( p\right) }\right) {Df}\left( p\right) v
\]
\[
= {D}_{v}\left( {\phi \circ f}\right) \left( p\right) = \left( {p, v}\right) \left( {\phi \circ f}\right) .
\]
This motivates the following:
Definition 4.1. Let \( M \) and \( N \) denote differentiable manifolds of dimensions \( n \) and \( k \) respectively, \( f : U \rightarrow N \) a differentiable map, where \( U \) is open in \( M \), and \( p \in U \) . The derivative of \( f \) at \( p \) is the map \( {f}_{*p} : {M}_{p} \rightarrow {N}_{f\left( p\right) } \) given by
\[
\left( {{f}_{*p}v}\right) \left( \phi \right) \mathrel{\text{:=}} v\left( {\phi \circ f}\right) ,\;\phi \in \mathcal{F}\left( N\right) ,\;v \in {M}_{p}.
\]
It is clear from the definition that \( {f}_{*p} \) is a linear transformation.
Proposition 4.1. With notation as in Definition 4.1, let \( x \) be a coordinate map around \( p \in U, y \) a coordinate map around \( f\left( p\right) \in N \) . Then the matrix of \( {f}_{*p} \) with respect to the bases \( \left\{ {\partial /\partial {x}^{i}\left( p\right) }\right\} \) and \( \left\{ {\partial /\partial {y}^{j}\left( {f\left( p\right) }\right) }\right\} \) is the Jacobian matrix of \( y \circ f \circ {x}^{-1} \) at \( x\left( p\right) \) .
Proof.
\[
{f}_{*p}\frac{\partial }{\partial {x}^{j}}\left( p\right) = \mathop{\sum }\limits_{i}{f}_{*p}\frac{\partial }{\partial {x}^{j}}\left( p\right) \left( {y}^{i}\right) \frac{\partial }{\partial {y}^{i}}\left( {f\left( p\right) }\right) = \mathop{\sum }\limits_{i}\frac{\partial }{\partial {x}^{j}}\left( p\right) \left( {{y}^{i} \circ f}\right) \frac{\partial }{\partial {y}^{i}}\left( {f\left( p\right) }\right)
\]
\[
= \mathop{\sum }\limits_{i}{D}_{j}\left( {{u}^{i} \circ \left( {y \circ f \circ {x}^{-1}}\right) }\right) \left( {x\left( p\right) }\right) \frac{\partial }{\partial {y}^{i}}\left( {f\left( p\right) }\right) .
\]
EXAMPLES AND REMARKS 4.1. (i) It follows from Definition 4.1 that the identity map \( {1}_{M} \) of \( M \) has as derivative at \( p \in M \) the identity map \( {1}_{{M}_{p}} \) of \( {M}_{p} \) .
(ii) If \( g : N \rightarrow Q \) is differentiable, then \( g \circ f \) is differentiable, and \( {\left( g \circ f\right) }_{*p} = \) \( {g}_{*f\left( p\right) } \circ {f}_{*p} \) . In particular, if \( f : M \rightarrow N \) is a diffeomorphism, then by (i), \( {f}_{*p} \) is an isomorphism with inverse \( {\left( {f}^{-1}\right) }_{*f\left( p\right) } \) . Furthermore, given coordinate maps \( x \) and \( y \) of \( M \) and \( N \) respectively, the diagram
\[
\begin{array}{ccc} {M}_{p} & \xrightarrow[]{{f}_{*p}} & {N}_{f\left( p\right) } \\ {{x}_{*p}}\downarrow & & \downarrow{{y}_{*f\left( p\right) }} \\ {\mathbb{R}}_{x\left( p\right) }^{n} & \xrightarrow[]{{\left( y \circ f \circ {x}^{-1}\right) }_{*x\left( p\right) }} & {\mathbb{R}}_{\left( {y \circ f}\right) \left( p\right) }^{k} \end{array}
\]
commutes. Observe that \( {x}_{*p}\partial /\partial {x}^{i}\left( p\right) = \partial /\partial {u}^{i}\left( {x\left( p\right) }\right) \), since \( {x}_{*p}\,\partial /\partial {x}^{i}\left( p\right) \left( {u}^{j}\right) = \partial /\partial {x}^{i}\left( p\right) \left( {{u}^{j} \circ x}\right) = \partial /\partial {x}^{i}\left( p\right) \left( {x}^{j}\right) = {\delta }_{ij}. \)
(iii) A (smooth) curve in \( M \) is a (smooth) map \( c : I \rightarrow M \), where \( I \) is an interval of real numbers. The tangent vector to \( c \) at \( t \) is \( \dot{c}\left( t\right) \mathrel{\text{:=}} {c}_{*t}D\left( t\right) \) . Thus, given \( \phi \in \mathcal{F}\left( M\right) \) ,
\[
\dot{c}\left( t\right) \left( \phi \right) = {c}_{*t}D\left( t\right) \left( \phi \right) = D\left( t\right) \left( {\phi \circ c}\right) = {\left( \phi \circ c\right) }^{\prime }\left( t\right) .
\]
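As a quick sanity check of (iii), one can pick a concrete curve and function (both made up for the illustration) and compare \( \dot{c}\left( t\right) \left( \phi \right) \), computed by finite differences as \( {\left( \phi \circ c\right) }^{\prime }\left( t\right) \), with the value from the chain rule:

```python
import math

# c(t) = (cos t, sin t) in R^2, phi(a, b) = a * b; then
# cdot(t)(phi) = (phi o c)'(t) = d/dt (cos t sin t) = cos(2t).
def c(t):
    return (math.cos(t), math.sin(t))

def phi(a, b):
    return a * b

t0, h = 0.4, 1e-6
numeric = (phi(*c(t0 + h)) - phi(*c(t0 - h))) / (2 * h)   # (phi o c)'(t0)
exact = math.cos(2 * t0)
```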
(iv) Let \( E \) be an \( n \) -dimensional real vector space with its canonical differentiable structure, cf. Examples and Remarks 1.1(iii). For any \( v \in E, E \) may be naturally identified with its tangent space \( {E}_{v} \) at \( v \) by "parallel translation" \( {\mathcal{J}}_{v} : E \rightarrow {E}_{v} \), defined as follows: Given \( w \in E \), let \( \gamma \left( t\right) = v + {tw} \), and set \( {\mathcal{J}}_{v}w \mathrel{\text{:=}} \dot{\gamma }\left( 0\right) \) . If \( x : E \rightarrow {\mathbb{R}}^{n} \) is any isomorphism, then
\[
{\mathcal{J}}_{v}w = \dot{\gamma }\left( 0\right) = \mathop{\sum }\limits_{i}\dot{\gamma }\left( 0\right) \left( {x}^{i}\right) \frac{\partial }{\partial {x}^{i}}\left( v\right) = \mathop{\sum }\limits_{i}D\left( 0\right) \left( {{x}^{i} \circ \gamma }\right) \frac{\partial }{\partial {x}^{i}}\left( v\right)
\]
\[
= \mathop{\sum }\limits_{i}{x}^{i}\left( w\right) \frac{\partial }{\partial {x}^{i}}\left( v\right)
\]
so that \( {\mathcal{J}}_{v} \), being linear and one-to-one, is an isomorphism.
Notice that for \( E = {\mathbb{R}}^{n} \) and \( x = {1}_{{\mathbb{R}}^{n}} \), we obtain \( {\mathcal{J}}_{v}{\mathbf{e}}_{i} = \partial /\partial {u}^{i}\left( v\right) \) . This formalizes our heuristic description of the tangent space of \( {\mathbb{R}}^{n} \) at \( v \) from the previous section, since the map
\[
\{ v\} \times {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}_{v}^{n}
\]
\[
\left( {v, w}\right) \mapsto {\mathcal{J}}_{v}w
\]
is an isomorphism that preserves the action on \( \mathcal{F}\left( {\mathbb{R}}^{n}\right) \) .
Consider, for example, a linear transformation \( L : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{k} \) . By Proposition 4.1, the matrix of \( {L
Theorem 1.29. Every bounded linear functional \( \Lambda \) on a Hilbert space \( \mathcal{H} \) is given by inner product with a (unique) fixed vector \( {h}_{0} \) in \( \mathcal{H} : \Lambda \left( h\right) = \left\langle {h,{h}_{0}}\right\rangle \) . Moreover, the norm of the linear functional \( \Lambda \) is \( \begin{Vmatrix}{h}_{0}\end{Vmatrix} \) .
Proof. Suppose \( \Lambda \) is a bounded linear functional on \( \mathcal{H} \) . If \( \Lambda \) is identically 0, choose \( {h}_{0} = 0 \) . Otherwise, set
\[
M = \ker \Lambda \equiv \{ h \in \mathcal{H} : \Lambda \left( h\right) = 0\} .
\]
Since \( \Lambda \) is linear, \( M \) is a subspace of \( \mathcal{H} \), and since \( \Lambda \) is continuous, \( M = {\Lambda }^{-1}\left( 0\right) \) is closed. Note that \( M \neq \mathcal{H} \) since we are assuming \( \Lambda \neq 0 \) . Pick a nonzero vector \( z \in {M}^{ \bot } \) . By scaling if necessary we may assume \( \Lambda \left( z\right) = 1 \) . Consider, for arbitrary \( h \in \mathcal{H} \), the vector \( \Lambda \left( h\right) z - h \) and observe that if we apply \( \Lambda \) to this vector we get 0, i.e., it lies in \( M \) . Since \( z \) was chosen to lie in \( {M}^{ \bot } \), this says
\[
\Lambda \left( h\right) z - h \bot z
\]
so that for every \( h \in \mathcal{H} \) ,
\[
\langle \Lambda \left( h\right) z - h, z\rangle = 0.
\]
Rearranging this last line we see that \( \Lambda \left( h\right) = \langle h, z/\parallel z{\parallel }^{2}\rangle \), which gives the existence statement with \( {h}_{0} = z/\parallel z{\parallel }^{2} \) . Uniqueness is immediate, and since we have already observed that \( \parallel \mathbf{\Lambda }\parallel = \begin{Vmatrix}{h}_{0}\end{Vmatrix} \), we are done.
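The proof is constructive, and in the finite-dimensional Hilbert space \( {\mathbb{R}}^{n} \) with the dot product it can be run literally. The sketch below (a toy example with an arbitrary functional, not the general infinite-dimensional case) picks \( z \in {\left( \ker \Lambda \right) }^{ \bot } \) with \( \Lambda \left( z\right) = 1 \) and forms \( {h}_{0} = z/\parallel z{\parallel }^{2} \), which must recover the defining vector:

```python
# Riesz representation in R^3 with the dot product. The functional
# Lambda(h) = <h, a> is defined by a (made-up) vector a; the recipe from the
# proof -- z in (ker Lambda)-perp with Lambda(z) = 1, h0 = z / ||z||^2 --
# must recover a itself.
a = [3.0, -1.0, 2.0]

def Lam(hv):
    return sum(u * v for u, v in zip(hv, a))

# (ker Lambda)-perp is spanned by a, so z = a / Lambda(a) satisfies Lambda(z) = 1.
z = [u / Lam(a) for u in a]
norm_z_sq = sum(u * u for u in z)
h0 = [u / norm_z_sq for u in z]
```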
What does the proof of this result tell you about the relationship between any two vectors in \( {\left( \ker \Lambda \right) }^{ \bot } \) when \( \Lambda \) is a bounded linear functional on \( \mathcal{H} \) ?
Theorem 1.29 says that a Hilbert space is self-dual, i.e., that \( {\mathcal{H}}^{ * } = \mathcal{H} \) in the sense that the map sending \( {h}_{0} \) in \( \mathcal{H} \) to the bounded linear functional \( \left\langle {\cdot ,{h}_{0}}\right\rangle \) is an isometry of \( \mathcal{H} \) onto its dual space ("isometry" referring to the fact that the norm of the linear functional induced by \( {h}_{0} \) is \( \begin{Vmatrix}{h}_{0}\end{Vmatrix} \) ). Notice that we’re not asserting linearity for the identification of \( {h}_{0} \) with the linear function \( \left\langle {\cdot ,{h}_{0}}\right\rangle \) ; why not?
In their 1907 works, Riesz and Fréchet dealt specifically with the Hilbert space \( {L}^{2}\left\lbrack {a, b}\right\rbrack \) . Shortly thereafter, Riesz considered the natural generalization of his work when he investigated the possibility of describing all bounded linear functionals on \( {L}^{p}\left\lbrack {a, b}\right\rbrack \) for \( 1 \leq p < \infty \), launching the study of \( {L}^{p} \) spaces as normed linear spaces. In 1909, Riesz identified the set of all bounded linear functionals on \( {L}^{p}\left\lbrack {a, b}\right\rbrack ,1 \leq p < \infty \)
with \( {L}^{q}\left\lbrack {a, b}\right\rbrack \), where \( 1/p + 1/q = 1 \) (when \( p = 1 \) we set \( q = \infty \) ). The analogous statement for \( {\ell }^{p} \) came a few years earlier, in work of E. Landau; the reader is asked to provide a proof in this case in Exercise 1.17. If we leave the realm of Banach spaces, however, a discussion of bounded linear functionals may become moot. For example, M.M. Day showed in 1940 that there are no continuous linear functionals on \( {L}^{p}\left\lbrack {0,1}\right\rbrack \) for \( 0 < p < 1 \) except the trivial functional (which is identically zero). The spaces \( {L}^{p}\left\lbrack {0,1}\right\rbrack \) for \( 0 < p < 1 \) are discussed in Exercise 1.30; they are not Banach spaces.

50. The society's original purpose was to create an analysis text, but this quickly expanded into a project of much bigger scope. A multivolume Éléments de mathématique, now totaling more than 7000 pages and treating many core topics in modern mathematics, has been produced.
Let us return to our example of the Bergman space \( {L}_{a}^{2}\left( \mathbb{D}\right) \) . Observe that Corollary 1.19 says that evaluation at any point \( w \in \mathbb{D} \) is a bounded linear functional on the Hilbert space \( {L}_{a}^{2}\left( \mathbb{D}\right) \) . By Theorem 1.29, evaluation at \( w \) must thus be given by inner product with some fixed vector in \( {L}_{a}^{2}\left( \mathbb{D}\right) \), that is, for each \( w \in \mathbb{D} \) there is a function in \( {L}_{a}^{2}\left( \mathbb{D}\right) \), which we will denote \( {K}_{w}\left( z\right) \), satisfying \( f\left( w\right) = \left\langle {f,{K}_{w}}\right\rangle \) for all \( f \in {L}_{a}^{2}\left( \mathbb{D}\right) \) . Can we identify \( {K}_{w} \) ? This has a nice answer, which is outlined in Exercise 1.25.
Next we’ll interpret the projection theorem when \( \mathcal{H} = {L}^{2}\left( {\mathbb{D},{dA}/\pi }\right) \) and \( M = \) \( {L}_{a}^{2}\left( \mathbb{D}\right) \), the Bergman space. Can we find an explicit formula for the orthogonal projection \( P : {L}^{2}\left( {\mathbb{D},{dA}/\pi }\right) \rightarrow {L}_{a}^{2}\left( \mathbb{D}\right) \) ? A simple lemma will be useful here.
Lemma 1.30. Let \( P : \mathcal{H} \rightarrow M \) be the orthogonal projection of a Hilbert space \( \mathcal{H} \) onto a closed subspace \( M \) of \( \mathcal{H} \) . We have \( \langle f,{Pg}\rangle = \langle {Pf}, g\rangle \) for all vectors \( f \) and \( g \) in \( \mathcal{H} \) .
Proof. Let \( f \) and \( g \) be in \( \mathcal{H} \) and write, using the projection theorem, \( f = {m}_{1} + {n}_{1} \) , \( g = {m}_{2} + {n}_{2} \), where \( {m}_{1},{m}_{2} \in M \) and \( {n}_{1},{n}_{2} \in {M}^{ \bot } \) . We have
\[
\langle f,{Pg}\rangle = \left\langle {{m}_{1} + {n}_{1},{m}_{2}}\right\rangle = \left\langle {{m}_{1},{m}_{2}}\right\rangle
\]
while
\[
\langle {Pf}, g\rangle = \left\langle {{m}_{1},{m}_{2} + {n}_{2}}\right\rangle = \left\langle {{m}_{1},{m}_{2}}\right\rangle .
\]
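Lemma 1.30 is easy to test numerically: in \( {\mathbb{R}}^{4} \), project onto the span of two orthonormal vectors obtained by Gram–Schmidt (the vectors below are random and purely illustrative) and compare \( \langle f,{Pg}\rangle \) with \( \langle {Pf}, g\rangle \):

```python
import random

# P = orthogonal projection of R^4 onto M = span{q1, q2}, where q1, q2 come
# from Gram-Schmidt applied to two random (purely illustrative) vectors.
random.seed(0)

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def normalize(u):
    n = dot(u, u) ** 0.5
    return [p / n for p in u]

v1 = [random.gauss(0, 1) for _ in range(4)]
v2 = [random.gauss(0, 1) for _ in range(4)]
q1 = normalize(v1)
q2 = normalize([b - dot(v2, q1) * a for a, b in zip(q1, v2)])

def P(v):
    # Pv = <v, q1> q1 + <v, q2> q2
    return [dot(v, q1) * a + dot(v, q2) * b for a, b in zip(q1, q2)]

f = [random.gauss(0, 1) for _ in range(4)]
g = [random.gauss(0, 1) for _ in range(4)]
sym_lhs = dot(f, P(g))   # <f, Pg>
sym_rhs = dot(P(f), g)   # <Pf, g>
```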
Returning to our question, if \( f \in {L}^{2}\left( {\mathbb{D},{dA}/\pi }\right) \), then for any \( w \in \mathbb{D} \) ,
\[
{Pf}\left( w\right) = \left\langle {{Pf},{K}_{w}}\right\rangle = \left\langle {f, P{K}_{w}}\right\rangle = \left\langle {f,{K}_{w}}\right\rangle = {\int }_{\mathbb{D}}f\left( z\right) \overline{{K}_{w}\left( z\right) }\frac{dA}{\pi },
\]
where \( {K}_{w} \) is the vector in \( {L}_{a}^{2}\left( \mathbb{D}\right) \) that gives the linear functional of evaluation at \( w \), and we have used the lemma for the second equality. Since by Exercise 1.25 \( {K}_{w}\left( z\right) = {\left( 1 - \bar{w}z\right) }^{-2} \), this gives an integral formula for computing the projection \( {Pf} \) .
The Bergman space furnishes an example of what are called functional Banach spaces. Here is the definition: A Banach space \( X \) consisting of scalar-valued functions on a set \( S \) is a functional Banach space if point evaluation \( {e}_{s}\left( f\right) \equiv f\left( s\right) \) at each point \( s \) of \( S \) is a bounded linear functional on \( X \), and if no evaluation functional \( {e}_{s} \) is identically 0 . Other examples of functional Banach spaces, besides \( {L}_{a}^{2}\left( \mathbb{D}\right) \) , include \( C\left\lbrack {0,1}\right\rbrack \) in the supremum norm and \( {\ell }^{p} \) for \( 1 \leq p \leq \infty \) . A non-example is \( {L}^{p}\left( {\left\lbrack {0,1}\right\rbrack ,{dx}}\right) ,1 \leq p \leq \infty \) ; here the vectors are equivalence classes of functions, and evaluation at a point of \( \left\lbrack {0,1}\right\rbrack \) doesn’t even make sense.
## 1.5 Orthonormal Bases
Definition 1.31. An orthonormal set in a Hilbert space \( \mathcal{H} \) is a set \( \mathcal{E} \) with the properties:
(1) for every \( e \in \mathcal{E},\parallel e\parallel = 1 \), and
(2) for distinct vectors \( e \) and \( f \) in \( \mathcal{E},\langle e, f\rangle = 0 \) .
For an easy example of an orthonormal set in the Hilbert space \( {\ell }^{2} \), take the set \( \mathcal{E} \) of vectors \( {e}_{j}, j \geq 1 \) where \( {e}_{j} \) has a 1 in the \( j \) th coordinate and zeros elsewhere. As a second example, consider the Hilbert space \( {L}^{2}\left\lbrack {0,{2\pi }}\right\rbrack \), with respect to normalized Lebesgue measure \( {dt}/\left( {2\pi }\right) \) . The collection of functions \( {e}^{int} \) for any integer \( n \) form an orthonormal set in this Hilbert space. We often will write \( {L}^{2}\left( T\right) \) for \( {L}^{2}\left( {\left\lbrack {0,{2\pi }}\right\rbrack ,{dt}/\left( {2\pi }\right) }\right) \), where \( T \) denotes the unit circle and we are identifying a function on \( \left\lbrack {0,{2\pi }}\right\rbrack \) with a function on \( T \) by \( f\left( t\right) = f\left( {e}^{it}\right) \) .
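The orthonormality of \( \left\{ {e}^{int}\right\} \) in \( {L}^{2}\left( T\right) \) can also be verified by quadrature; since the integrands are trigonometric polynomials, the trapezoid rule with \( N \) equally spaced points is exact whenever \( \left| {m - n}\right| < N \). A small illustrative check (the choice \( N = 64 \) is arbitrary):

```python
import cmath, math

# <e^{imt}, e^{int}> in L^2([0, 2*pi], dt/(2*pi)) via the trapezoid rule,
# which is exact for these integrands when |m - n| < N.
def inner(m, n, N=64):
    return sum(cmath.exp(1j * (m - n) * 2 * math.pi * k / N)
               for k in range(N)) / N

g_same = inner(1, 1)     # expect 1
g_diff = inner(1, 3)     # expect 0
g_neg = inner(0, -2)     # expect 0
```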
Definition 1.32. An orthonormal basis for a Hilbert space \( \mathcal{H} \) is a maximal orthonormal set; that is, an orthonormal set that is not properly contained in any orthonormal set.
It is easy to see that in the \( {\ell }^{2} \) example above, the set \( \left\{ {{e}_{j} : j \geq 1}\right\} \) is an orthonormal basis. Harder, but still true, is that \( \left\{ {{e}^{int} : n \in \mathbb{Z}}\right\} \), where \( \mathbb{Z} \) is the set of all integers and \( {e}^{int} = \cos \left( {nt}\right) + i\sin \left( {nt}\right) \), is an orthonormal basis for \( {L}^{2}\left( T\right) \) . This result is a consequence of Fejér's theorem; for a proof the reader is referred to [48]. Every Hilbert space has an orthonormal basis (see Exercise 3.1 in Chapter 3). The proof of this statement uses Zorn's lemma, which will be discussed in Section 3.1. The Hilbert spaces of principal interest to us will either have a finite or countably infinite orthonormal basis.
A Hilbert space is also a vector s
Corollary 1.4. Theorem 1.1 follows from Theorem 1.3.
Proof. By the mean value theorem there exists \( \bar{x} \in \left( {a,{s}_{n - 1}}\right) \) such that
\[
{\int }_{a}^{{s}_{n - 1}}{f}^{\left( n\right) }\left( {s}_{n}\right) d{s}_{n} = {f}^{\left( n\right) }\left( \bar{x}\right) \left( {{s}_{n - 1} - a}\right) = {\int }_{a}^{{s}_{n - 1}}{f}^{\left( n\right) }\left( \bar{x}\right) d{s}_{n}.
\]
The proof is completed by substituting this in the iterated integral in the statement of Theorem 1.3 and using (1.3).
Next, we give Taylor's formula in Cauchy's form.
Theorem 1.5. Let \( f \) satisfy the conditions of Theorem 1.1. We have
\[
f\left( b\right) = f\left( a\right) + {f}^{\prime }\left( a\right) \left( {b - a}\right) + \frac{{f}^{\prime \prime }\left( a\right) }{2}{\left( b - a\right) }^{2} + \cdots + \frac{{f}^{\left( n - 1\right) }\left( a\right) }{\left( {n - 1}\right) !}{\left( b - a\right) }^{n - 1}
\]
\[
+ \frac{1}{\left( {n - 1}\right) !}{\int }_{a}^{b}{f}^{\left( n\right) }\left( x\right) {\left( b - x\right) }^{n - 1}{dx}.
\]
(1.4)
Proof. The domain of the iterated integral in the statement of Theorem 1.3 is \( \left\{ {\left( {{s}_{1},\ldots ,{s}_{n}}\right) : a \leq {s}_{n} \leq {s}_{n - 1} \leq \cdots \leq {s}_{1} \leq b}\right\} \) . By Fubini’s theorem, this integral can be written as
\[
{\int }_{a}^{b}{f}^{\left( n\right) }\left( {s}_{n}\right) {\int }_{{s}_{n}}^{b}\cdots {\int }_{{s}_{2}}^{b}d{s}_{1}\cdots d{s}_{n - 1}d{s}_{n}
\]
(1.5)
We claim that
\[
{\int }_{{s}_{k}}^{b}\cdots {\int }_{{s}_{2}}^{b}d{s}_{1}\cdots d{s}_{k - 1} = \frac{{\left( b - {s}_{k}\right) }^{k - 1}}{\left( {k - 1}\right) !}.
\]
Indeed, this is trivial to check for \( k = 1 \) . If it holds for \( k \), then
\[
{\int }_{{s}_{k + 1}}^{b}\cdots {\int }_{{s}_{2}}^{b}d{s}_{1}\cdots d{s}_{k}d{s}_{k + 1} = {\int }_{{s}_{k + 1}}^{b}\frac{{\left( b - {s}_{k}\right) }^{k - 1}}{\left( {k - 1}\right) !}d{s}_{k} = \frac{{\left( b - {s}_{k + 1}\right) }^{k}}{k!},
\]
where the first equality follows from the induction hypothesis. This proves the claim for \( k + 1 \) . Then the integral (1.5) becomes \( {\int }_{a}^{b}{f}^{\left( n\right) }\left( {s}_{n}\right) {\left( b - {s}_{n}\right) }^{n - 1}/\left( {n - 1}\right) !\,d{s}_{n} \), which coincides with the remainder term in (1.4). The theorem is proved.
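Formula (1.4) is easy to check numerically. The sketch below takes \( f = \exp \) on \( \left\lbrack {0,1}\right\rbrack \) with \( n = 3 \) (so every derivative is again \( \exp \)) and evaluates the remainder integral by composite Simpson quadrature; the choices of \( f \), \( n \), and the quadrature are illustrative only:

```python
import math

# Check (1.4) for f = exp, [a, b] = [0, 1], n = 3: every derivative of exp
# is exp, so the right-hand side should reproduce e^1.
a, b, n = 0.0, 1.0, 3

def simpson(fun, lo, hi, m=200):
    # Composite Simpson rule with 2*m subintervals.
    step = (hi - lo) / (2 * m)
    s = fun(lo) + fun(hi)
    s += 4 * sum(fun(lo + (2 * k + 1) * step) for k in range(m))
    s += 2 * sum(fun(lo + 2 * k * step) for k in range(1, m))
    return s * step / 3

taylor_part = sum(math.exp(a) * (b - a) ** k / math.factorial(k)
                  for k in range(n))
remainder = simpson(lambda x: math.exp(x) * (b - x) ** (n - 1), a, b)
remainder /= math.factorial(n - 1)
rhs_of_14 = taylor_part + remainder
```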
We now give a simple demonstration of the use of Taylor's formula in optimization. Suppose that \( {x}^{ * } \) is a critical point, that is, \( {f}^{\prime }\left( {x}^{ * }\right) = 0 \) . The quadratic Taylor's formula gives
\[
f\left( x\right) = f\left( {x}^{ * }\right) + {f}^{\prime }\left( {x}^{ * }\right) \left( {x - {x}^{ * }}\right) + \frac{{f}^{\prime \prime }\left( \bar{x}\right) }{2}{\left( x - {x}^{ * }\right) }^{2},
\]
for some \( \bar{x} \) strictly between \( x \) and \( {x}^{ * } \) . Now, if \( {f}^{\prime \prime }\left( \bar{x}\right) \geq 0 \) for all \( \bar{x} \in I \), then
\[
f\left( x\right) \geq f\left( {x}^{ * }\right) \text{ for all }x \in I.
\]
This shows that \( {x}^{ * } \) is a global minimizer of \( f \) . A function \( f \) with \( {f}^{\prime \prime }\left( x\right) \) nonnegative at all points is a convex function. If \( {f}^{\prime \prime }\left( \bar{x}\right) \geq 0 \) only in a neighborhood of \( {x}^{ * } \), then \( {x}^{ * } \) is a local minimizer of \( f \) . If \( {f}^{\prime }\left( {x}^{ * }\right) = 0 \) and \( {f}^{\prime \prime }\left( x\right) \leq 0 \) for all \( x \), then \( f\left( x\right) \leq f\left( {x}^{ * }\right) \), for all \( x \), that is, \( {x}^{ * } \) is a global maximizer of \( f \) . Such a function \( f \) is a concave function. Chapter 4 treats convex and concave (not necessarily differentiable) functions in detail.
## 1.2 Differentiation of Functions of Several Variables
Definition 1.6. Let \( f : U \rightarrow \mathbb{R} \) be a function on an open set \( U \subseteq {\mathbb{R}}^{n} \) . If \( x \in U \), the limit
\[
\frac{\partial f}{\partial {x}_{i}}\left( x\right) \mathrel{\text{:=}} \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{f\left( {{x}_{1},\ldots ,{x}_{i - 1},{x}_{i} + t,{x}_{i + 1},\ldots ,{x}_{n}}\right) - f\left( x\right) }{t},
\]
if it exists, is called the partial derivative of \( f \) at \( x \) with respect to \( {x}_{i} \) . If all the partial derivatives exist, then the vector
\[
\nabla f\left( x\right) \mathrel{\text{:=}} {\left( \partial f/\partial {x}_{1},\ldots ,\partial f/\partial {x}_{n}\right) }^{T}
\]
is called the gradient of \( f \) .
Let \( d \in {\mathbb{R}}^{n} \) be a vector \( d = {\left( {d}_{1},\ldots ,{d}_{n}\right) }^{T} \) . Denoting by \( {e}_{i} \) the \( i \) th coordinate
vector
\[
{e}_{i} \mathrel{\text{:=}} {\left( 0,\ldots ,1,0,\ldots ,0\right) }^{T},
\]
where the only nonzero entry 1 is in the \( i \) th position, we have
\[
d = {d}_{1}{e}_{1} + \cdots + {d}_{n}{e}_{n}
\]
Definition 1.7. The directional derivative of \( f \) at \( x \in U \) along the direction \( d \in {\mathbb{R}}^{n} \) is
\[
{f}^{\prime }\left( {x;d}\right) \mathrel{\text{:=}} \mathop{\lim }\limits_{{t \searrow 0}}\frac{f\left( {x + {td}}\right) - f\left( x\right) }{t},
\]
provided the limit on the right-hand side exists as \( t > 0 \) decreases to zero.
Clearly, \( {f}^{\prime }\left( {x;{\alpha d}}\right) = \alpha {f}^{\prime }\left( {x;d}\right) \) for \( \alpha \geq 0 \), and we note that if \( {f}^{\prime }\left( {x; - d}\right) = \) \( - {f}^{\prime }\left( {x;d}\right) \), then we have
\[
{f}^{\prime }\left( {x;d}\right) = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{f\left( {x + {td}}\right) - f\left( x\right) }{t}
\]
because
\[
{f}^{\prime }\left( {x;d}\right) = - {f}^{\prime }\left( {x; - d}\right) = - \mathop{\lim }\limits_{{t \searrow 0}}\frac{f\left( {x - {td}}\right) - f\left( x\right) }{t} = \mathop{\lim }\limits_{{s \nearrow 0}}\frac{f\left( {x + {sd}}\right) - f\left( x\right) }{s}.
\]
Definition 1.8. A function \( f : U \rightarrow \mathbb{R} \) is Gâteaux differentiable at \( x \in U \) if the directional derivative \( {f}^{\prime }\left( {x;d}\right) \) exists for all directions \( d \in {\mathbb{R}}^{n} \) and is a linear function of \( d \) .
Let \( f \) be Gâteaux differentiable at \( x \) . Since \( d = {d}_{1}{e}_{1} + \cdots + {d}_{n}{e}_{n} \), and \( {f}^{\prime }\left( {x;d}\right) \) is linear in \( d \), we have
\[
{f}^{\prime }\left( {x;d}\right) = {f}^{\prime }\left( {x;{d}_{1}{e}_{1} + \cdots + {d}_{n}{e}_{n}}\right) = {d}_{1}{f}^{\prime }\left( {x;{e}_{1}}\right) + \cdots + {d}_{n}{f}^{\prime }\left( {x;{e}_{n}}\right)
\]
\[
= \langle d,\nabla f\left( x\right) \rangle = {d}^{T}\nabla f\left( x\right) .
\]
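The identity \( f'(x;d) = \langle d, \nabla f(x)\rangle \) is easy to check numerically. The following sketch uses an arbitrarily chosen smooth function (the particular \( f \) is an assumption made for illustration) and compares a difference quotient with the exact inner product:

```python
# Numerical sketch: for a smooth f, the directional derivative agrees with
# <d, grad f(x)>.  Illustrated with f(x) = x_1^2 + 3 x_1 x_2.

def f(x):
    return x[0] ** 2 + 3 * x[0] * x[1]

def grad_f(x):
    # exact gradient of this particular f
    return [2 * x[0] + 3 * x[1], 3 * x[0]]

def dir_deriv(f, x, d, t=1e-6):
    # one-sided difference quotient approximating f'(x; d)
    return (f([x[0] + t * d[0], x[1] + t * d[1]]) - f(x)) / t

x, d = [1.0, 2.0], [0.5, -1.0]
lhs = dir_deriv(f, x, d)
rhs = d[0] * grad_f(x)[0] + d[1] * grad_f(x)[1]   # <d, grad f(x)>
print(lhs, rhs)  # agree up to discretization error
```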
Definition 1.9. The function \( f : U \rightarrow \mathbb{R} \) is Fréchet differentiable at the point \( x \in U \) if there exists a linear function \( \ell : {\mathbb{R}}^{n} \rightarrow \mathbb{R},\ell \left( x\right) = \langle l, x\rangle \), such that
\[
\mathop{\lim }\limits_{{\parallel h\parallel \rightarrow 0}}\frac{f\left( {x + h}\right) - f\left( x\right) -\langle l, h\rangle }{\parallel h\parallel } = 0.
\]
(1.6)
Intuitively speaking, this means that the function \( f \) can be "well approximated" around \( x \) by an affine function \( h \mapsto f\left( x\right) + \langle l, h\rangle \), that is,
\[
f\left( {x + h}\right) \approx f\left( x\right) + \langle l, h\rangle .
\]
This approximate equation can be made precise using Landau's little "oh" notation, where a quantity (scalar or vector) is called \( o\left( h\right) \) if
\[
\mathop{\lim }\limits_{{h \rightarrow 0}}\frac{\parallel o\left( h\right) \parallel }{\parallel h\parallel } = 0
\]
With this notation, the Fréchet differentiability of \( f \) at \( x \) is equivalent to stating \( f\left( {x + h}\right) - f\left( x\right) - \langle l, h\rangle = o\left( h\right) \), or
\[
f\left( {x + h}\right) = f\left( x\right) + \langle l, h\rangle + o\left( h\right) .
\]
(1.7)
The \( o\left( h\right) \) notation is very intuitive and convenient to use in proofs involving limits.
Clearly, if \( f \) is Fréchet differentiable at \( x \), then it is continuous at \( x \), because (1.7) implies that \( \mathop{\lim }\limits_{{h \rightarrow 0}}f\left( {x + h}\right) = f\left( x\right) \) .
The vector \( l \) in the definition of Fréchet differentiability can be calculated explicitly. Choosing \( h = t{e}_{i}\left( {i = 1,\ldots, n}\right) \) in (1.6) gives
\[
\mathop{\lim }\limits_{{t \rightarrow 0}}\frac{f\left( {x + t{e}_{i}}\right) - f\left( x\right) - t{l}_{i}}{t} = 0.
\]
We have \( {l}_{i} = \partial f\left( x\right) /\partial {x}_{i} \), and thus
\[
l = \nabla f\left( x\right)
\]
(1.8)
Then (1.7) becomes
\[
f\left( {x + h}\right) = f\left( x\right) + \langle \nabla f\left( x\right), h\rangle + o\left( h\right) .
\]
This also gives us the following theorem.
Theorem 1.10. If \( U \subseteq {\mathbb{R}}^{n} \) is open and \( f : U \rightarrow \mathbb{R} \) is Fréchet differentiable at \( x \), then \( f \) is Gâteaux differentiable at \( x \) .
Thus, Fréchet differentiability implies Gâteaux differentiability, but the converse is not true; see the exercises at the end of the chapter. Consequently, Fréchet differentiability is a stronger concept than Gâteaux differentiability. In fact, the former concept is a uniform version of the latter: it is not hard to see that \( f \) is Fréchet differentiable at \( x \) if and only if \( f \) is Gâteaux differentiable
and the limit
\[
\mathop{\lim }\limits_{{t \rightarrow 0}}\frac{f\left( {x + {td}}\right) - f\left( x\right) -\langle \nabla f\left( x\right), d\rangle }{t}
\]
converges to zero as \( t \rightarrow 0 \), uniformly over all \( \parallel d\parallel \leq 1 \), that is, given \( \varepsilon > 0 \), there exists \( \delta > 0 \) such that
\[
\left| \frac{f\left( {x + {td}}\right) - f\left( x\right) -\langle \nabla f\left( x\right), d\rangle }{t}\right| < \varepsilon
\]
for all \( 0 < \left| t\right| < \delta \) and for all \( \parallel d\parallel \leq 1 \) .
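A standard counterexample separating the two notions (one possible choice for the exercises mentioned above; that this is the intended one is an assumption) is \( f(x,y) = x^3y/(x^6+y^2) \) with \( f(0,0)=0 \). All directional derivatives at the origin vanish, a linear function of \( d \), so \( f \) is Gâteaux differentiable there; but along the curve \( h = (t, t^3) \) the Fréchet remainder quotient in (1.6) blows up:

```python
# Numerical sketch of a standard Gateaux-but-not-Frechet counterexample:
# f(x, y) = x^3 y / (x^6 + y^2), f(0, 0) = 0.

def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x**3 * y / (x**6 + y**2)

# every directional derivative at 0 vanishes (so the Gateaux derivative
# exists and is the zero linear map):
for d in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, -3.0)]:
    t = 1e-6
    q = (f(t * d[0], t * d[1]) - f(0.0, 0.0)) / t
    assert abs(q) < 1e-3   # quotient -> 0 as t -> 0

# yet along the curve h = (t, t^3) the Frechet remainder does not vanish:
t = 1e-4
h = (t, t**3)
norm_h = (h[0]**2 + h[1]**2) ** 0.5
remainder = (f(*h) - f(0.0, 0.0) - 0.0) / norm_h   # l = grad f(0) = 0
print(remainder)  # roughly 0.5 / t, so the limit (1.6) fails
```

Here \( f(t, t^3) = 1/2 \) for every \( t \neq 0 \), so \( f \) is not even continuous at the origin.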
Corollary 1.4. Theorem 1.1 follows from Theorem 1.3.
By the mean value theorem there exists \( \bar{x} \in \left( {a,{s}_{n - 1}}\right) \) such that
\[
{\int }_{a}^{{s}_{n - 1}}{f}^{\left( n\right) }\left( {s}_{n}\right) d{s}_{n} = {f}^{\left( n\right) }\left( \bar{x}\right) \left( {{s}_{n - 1} - a}\right) = {\int }_{a}^{{s}_{n - 1}}{f}^{\left( n\right) }\left( \bar{x}\right) d{s}_{n}.
\]
The proof is completed by substituting this in the iterated integral in the statement of Theorem 1.3 and using (1.3).
Lemma 21.6. Let \( V \) be a one-dimensional analytic subvariety of an open subset of \( {\mathbb{C}}^{n} \) . Then
\[
\operatorname{area}\left( V\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\text{ area-with-multiplicity }\left( {{z}_{j}\left( V\right) }\right) .
\]
Here, by the area of \( V \) we understand the usual area of the set \( {V}_{reg} \) of regular points of \( V \) viewed as a two-dimensional real submanifold in \( {\mathbb{R}}^{2n} \) . This agrees with \( {\mathcal{H}}^{2}\left( V\right) \), where \( {\mathcal{H}}^{2} \) denotes two-dimensional Hausdorff measure in \( {\mathbb{C}}^{n} \) . See the Appendix for references to Hausdorff measure. The natural proof of this formula for varieties of arbitrary dimension \( k \) involves considering the form \( {\omega }^{k} \), where \( \omega = i/2\mathop{\sum }\limits_{j}d{z}_{j} \land d{\bar{z}}_{j} \) . We shall give a more classical proof in the case \( k = 1 \) .
Proof. This is a local result and so we can assume that \( V \) can be parameterized by a one-one analytic map \( f : W \rightarrow V \subseteq {\mathbb{C}}^{n} \), where \( W \) is a domain in the complex plane. Let \( \zeta \in W \) be \( \zeta = s + {it} \) and let \( f = \left( {{f}_{1},{f}_{2},\ldots ,{f}_{n}}\right) \), where \( {f}_{k} = \) \( {u}_{k} + i{v}_{k} \) . Set \( X\left( {s, t}\right) = f\left( \zeta \right) \) and view \( X \) as a map of \( W \) into \( {\mathbb{R}}^{2n} \) . The classical formula for the area of the map \( X \) is \( {\int }_{W}\left| {X}_{s}\right| \left| {X}_{t}\right| \sin \left( \theta \right) {dsdt} \), where \( \theta \) is the angle between \( {X}_{s} \) and \( {X}_{t} \) . We have \( {X}_{s} = \left( {{f}_{1}^{\prime },{f}_{2}^{\prime },\ldots ,{f}_{n}^{\prime }}\right) \) and \( {X}_{t} = \left( {i{f}_{1}^{\prime }, i{f}_{2}^{\prime },\ldots, i{f}_{n}^{\prime }}\right) \) by the Cauchy-Riemann equations. Hence \( \left| {X}_{t}\right| = \left| {X}_{s}\right| \) and the vectors \( {X}_{s} \) and \( {X}_{t} \) are orthogonal in \( {\mathbb{R}}^{2n} \), and so \( \sin \left( \theta \right) \equiv 1 \) . Thus the previous formula for the area of the image of \( X \) becomes \( \operatorname{area}\left( V\right) = {\int }_{W}\left( {\mathop{\sum }\limits_{j}{\left| {f}_{j}^{\prime }\right| }^{2}}\right) {dsdt} = \mathop{\sum }\limits_{j}{\int }_{W}{\left| {f}_{j}^{\prime }\right| }^{2}{dsdt} \) . Since \( {\int }_{W}{\left| {f}_{j}^{\prime }\right| }^{2}{dsdt} \) is the area-with-multiplicity of \( {f}_{j}\left( W\right) \), this completes the proof.
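The lemma can be sanity-checked on a concrete variety. In the sketch below (an illustration, not from the text) we take \( V = \{(z, z^2) : |z| < 1\} \), parameterized by \( f(z) = (z, z^2) \) on the unit disk \( W \), so \( f_1' = 1 \) and \( f_2' = 2z \); both \( \operatorname{area}(V) \) and the sum of the areas-with-multiplicity of the coordinate projections come out to \( 3\pi \):

```python
# Numerical sketch checking Lemma 21.6 for V = {(z, z^2) : |z| < 1}.
# Here area(V) = int_W (1 + 4|z|^2) ds dt = 3 pi, while the projections
# contribute int_W |1|^2 = pi and int_W |2z|^2 = 2 pi.

import math

def disk_integral(g, n=400):
    # midpoint rule in polar coordinates over the unit disk
    total = 0.0
    for i in range(n):
        r = (i + 0.5) / n
        for j in range(n):
            th = 2 * math.pi * (j + 0.5) / n
            total += g(r * math.cos(th), r * math.sin(th)) * r
    return total * (1.0 / n) * (2 * math.pi / n)

area_V = disk_integral(lambda s, t: 1 + 4 * (s * s + t * t))
proj_1 = disk_integral(lambda s, t: 1.0)                # |f_1'|^2
proj_2 = disk_integral(lambda s, t: 4 * (s * s + t * t))  # |f_2'|^2
print(area_V, proj_1 + proj_2, 3 * math.pi)  # all close to 3 pi
```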
Let \( V \) be a one-dimensional analytic subvariety of an open set \( \Omega \) in \( {\mathbb{C}}^{n} \) . Let \( p \in V \) and suppose that the closure of \( B\left( {p, r}\right) \) (the open ball of radius \( r \) centered at \( p \) ) is contained in \( \Omega \) . Then it is a theorem of Rutishauser that the area of \( V \cap B\left( {p, r}\right) \) is bounded below by \( \pi {r}^{2} \) . Our next result generalizes this by showing that the lower bound \( \pi {r}^{2} \) for the "area of \( V \cap B\left( {p, r}\right) \) ," which, by Lemma 21.6, equals the sum of the areas-with-multiplicity of the \( n \) coordinate projections, is in fact a lower bound for a smaller quantity, the sum of the areas (without multiplicity) of the coordinate projections. We shall prove this more generally for polynomial hulls. The connection between hulls and varieties is given by the following lemma.
Lemma 21.7. Let \( V \) be a \( k \) -dimensional analytic subvariety of an open set \( \Omega \) in \( {\mathbb{C}}^{n} \) . Suppose that the closure of the ball \( B\left( {p, r}\right) \) is contained in \( \Omega \) . Let \( X = \) \( V \cap {bB}\left( {p, r}\right) \) . Then \( \widehat{X} \cap B\left( {p, r}\right) = V \cap B\left( {p, r}\right) \) .
Now Rutishauser's result [Rut] is a consequence of the following fact about polynomial hulls.
Theorem 21.8. Let \( X \) be a compact subset of \( {\mathbb{C}}^{n} \) . Suppose that \( p \in \widehat{X} \) and that \( B\left( {p, r}\right) \subseteq \widehat{X} \smallsetminus X \) . Then \( \mathop{\sum }\limits_{{j = 1}}^{n}{\mathcal{H}}^{2}\left( {{z}_{j}\left( {\widehat{X} \cap B\left( {p, r}\right) }\right) }\right) \geq \pi {r}^{2} \) .
In general, polynomial hulls are not analytic sets, but the following shows that, locally, hulls that are not analytic sets have large area.
Theorem 21.9. Let \( X \) be a compact subset of \( {\mathbb{C}}^{n} \) . Suppose that \( p \in \widehat{X} \smallsetminus X \) and that, for some neighborhood \( N \) of \( p \) in \( {\mathbb{C}}^{n},{\mathcal{H}}^{2}\left( {\widehat{X} \cap N}\right) < \infty \) . Then \( \widehat{X} \cap N \) is a one-dimensional analytic subvariety of \( N \) .
This yields another generalization of Rutishauser's result to polynomial hulls.
Corollary 21.10. Let \( X \) be a compact subset of \( {\mathbb{C}}^{n} \) . Suppose that \( p \in \widehat{X} \) and that \( B\left( {p, r}\right) \subseteq {\mathbb{C}}^{n} \smallsetminus X \) . Then \( {\mathcal{H}}^{2}\left( {\widehat{X} \cap B\left( {p, r}\right) }\right) \geq \pi {r}^{2} \) .
Proof of the Corollary. If \( {\mathcal{H}}^{2}\left( {\widehat{X} \cap B\left( {p, r}\right) }\right) < \infty \), then the theorem implies that \( \widehat{X} \cap B\left( {p, r}\right) \) is a one-dimensional analytic set and so Rutishauser’s theorem applies. If \( {\mathcal{H}}^{2}\left( {\widehat{X} \cap B\left( {p, r}\right) }\right) \) is infinite, the conclusion is obvious.
Proof of Lemma 21.7. By the maximum principle, it follows that \( V \cap B\left( {p, r}\right) \subseteq \) \( \widehat{X} \) .
Conversely, suppose that \( q \in B\left( {p, r}\right) \smallsetminus V \) . Let \( {r}^{\prime } > r \) be such that \( B\left( {p,{r}^{\prime }}\right) \subseteq \Omega \) . There exists a function \( F \) that is holomorphic on \( B\left( {p,{r}^{\prime }}\right) \) such that \( F\left( q\right) = 1 \) and \( F \equiv 0 \) on \( V \cap B\left( {p,{r}^{\prime }}\right) \) ; this holds by the solution to the second Cousin problem on \( B\left( {p,{r}^{\prime }}\right) \) . In particular, \( F \equiv 0 \) on \( X \) . Approximating \( F \) uniformly on \( \bar{B}\left( {p, r}\right) \) by the polynomial partial sums of its Taylor series, it follows that \( q \notin \widehat{X} \) . This shows that \( B\left( {p, r}\right) \smallsetminus V \subseteq B\left( {p, r}\right) \smallsetminus \widehat{X} \) . Hence we have the reverse inclusion \( \widehat{X} \cap B\left( {p, r}\right) \subseteq V \cap B\left( {p, r}\right) \) . This gives the lemma.
Proof of Theorem 21.8. We may, without loss of generality, suppose that \( p = 0 \) . Take \( s < r \) . Set \( Z = X \cap {bB}\left( {p, s}\right) \) . By the local maximum modulus theorem, it follows that \( \widehat{Z} = \widehat{X} \cap \bar{B}\left( {p, s}\right) \) . We let \( \mathcal{A} \) be the uniform closure of the polynomials in \( C\left( Z\right) \) . Then the maximal ideal space \( M \) of \( \mathcal{A} \) is just \( \widehat{Z} = \widehat{X} \cap \bar{B}\left( {p, s}\right) \) . The map \( f \mapsto f\left( 0\right) \), \( f \in \mathcal{A} \), is a continuous homomorphism \( \phi \) on \( \mathcal{A} \) represented by a measure \( \mu \) on \( Z \) . The coordinate function \( {z}_{k} \) belongs to \( \mathcal{A} \) with \( \phi \left( {z}_{k}\right) = 0 \), and so by Theorem 21.1 we have \( \pi \int {\left| {z}_{k}\right| }^{2}{d\mu } \leq {\mathcal{H}}^{2}\left( {{z}_{k}\left( {\widehat{X} \cap \bar{B}\left( {p, s}\right) }\right) }\right) \) . Now we sum over \( k \) and use the fact that \( \mathop{\sum }\limits_{{k = 1}}^{n}{\left| {z}_{k}\right| }^{2} \equiv {s}^{2} \) on \( Z \) to get \( \pi {s}^{2} \leq \mathop{\sum }\limits_{{k = 1}}^{n}{\mathcal{H}}^{2}\left( {{z}_{k}\left( {\widehat{X} \cap \bar{B}\left( {p, s}\right) }\right) }\right) \) . Letting \( s \) increase to \( r \) now gives the theorem.
Proof of Theorem 21.9. Without loss of generality we may suppose that \( p = 0 \) . Consider complex hyperplanes through the origin. Since \( {\mathcal{H}}^{2}\left( {\widehat{X} \cap N}\right) < \infty \), there is a complex hyperplane \( H \) such that \( {\mathcal{H}}^{1}\left( {\widehat{X} \cap N \cap H}\right) = 0 \) . (See the Appendix on Hausdorff measure.) We may suppose that \( H \) is the hyperplane \( \left\{ {{z}_{n} = 0}\right\} \) . We write \( z = \left( {{z}^{\prime },{z}_{n}}\right) \) for \( z \in {\mathbb{C}}^{n} \) with \( {z}^{\prime } \in {\mathbb{C}}^{n - 1} \) . The fact that \( {\mathcal{H}}^{1}\left( {\widehat{X} \cap N \cap H}\right) = 0 \) implies (Appendix) that \( \widehat{X} \cap N \) is disjoint from \( \left\{ {\left( {{z}^{\prime },{z}_{n}}\right) : \begin{Vmatrix}{z}^{\prime }\end{Vmatrix} = \delta ,{z}_{n} = 0}\right\} \) for almost all \( \delta > 0 \) .
Fix \( \delta > 0 \) such that \( B\left( {0,\delta }\right) \subset N \) and \( \widehat{X} \cap N \) is disjoint from \( \left\{ {\left( {{z}^{\prime },{z}_{n}}\right) : \begin{Vmatrix}{z}^{\prime }\end{Vmatrix} = \delta ,{z}_{n} = 0}\right\} \) . Then there exists \( \epsilon > 0 \) such that \( \widehat{X} \cap N \) is disjoint from \( \left\{ {\left( {{z}^{\prime },{z}_{n}}\right) : \begin{Vmatrix}{z}^{\prime }\end{Vmatrix} = \delta ,\left| {z}_{n}\right| \leq \epsilon }\right\} \) . Let \( \Delta = \left\{ {\left( {{z}^{\prime },{z}_{n}}\right) : \begin{Vmatrix}{z}^{\prime }\end{Vmatrix} < \delta \text{ and }\left| {z}_{n}\right| < \epsilon }\right\} \) and set \( \rho \left( z\right) = {z}_{n} \) . We can assume, by choosing \( \epsilon \) small enough, that \( \bar{\Delta } \subseteq N \) . Note that \( \left( {\widehat{X} \cap N}\right) \cap {b\Delta } \) is contained in the set \( \left\{ {\left| {z}_{n}\right| = \epsilon }\right\} \) . This is because \( \widehat{X} \cap N \) is disjoint from \( \left\{ {\left( {{z}^{\prime },{z}_{n}}\right) : \begin{Vmatrix}{z}^{\prime }\end{Vmatrix} = \delta ,\left| {z}_{n}\right| \leq \epsilon }\right\} \) .
Corollary 1. Every irreducible representation \( {\mathrm{W}}_{i} \) is contained in the regular representation with multiplicity equal to its degree \( {n}_{i} \) .
According to th. 4, this number is equal to \( \left\langle {{r}_{\mathrm{G}},{\chi }_{i}}\right\rangle \), and we have
\[
\left\langle {{r}_{\mathrm{G}},{\chi }_{i}}\right\rangle = \frac{1}{g}\mathop{\sum }\limits_{{s \in \mathrm{G}}}{r}_{\mathrm{G}}\left( {s}^{-1}\right) {\chi }_{i}\left( s\right) = \frac{1}{g}g \cdot {\chi }_{i}\left( 1\right) = {\chi }_{i}\left( 1\right) = {n}_{i}.
\]
Corollary 2.
(a) The degrees \( {n}_{i} \) satisfy the relation \( \mathop{\sum }\limits_{{i = 1}}^{{i = h}}{n}_{i}^{2} = g \) .
(b) If \( s \in \mathbf{G} \) is different from 1, we have \( \mathop{\sum }\limits_{{i = 1}}^{{i = h}}{n}_{i}{\chi }_{i}\left( s\right) = 0 \) .
By cor. 1, we have \( {r}_{\mathrm{G}}\left( s\right) = \sum {n}_{i}{\chi }_{i}\left( s\right) \) for all \( s \in \mathrm{G} \) . Taking \( s = 1 \) we obtain (a), and taking \( s \neq 1 \), we obtain (b).
## Remarks
(1) The above result can be used in determining the irreducible representations of a group \( \mathrm{G} \) : suppose we have constructed some mutually nonisomorphic irreducible representations of degrees \( {n}_{1},\ldots ,{n}_{k} \) ; in order that they be all the irreducible representations of \( \mathrm{G} \) (up to isomorphism), it is necessary and sufficient that \( {n}_{1}^{2} + \cdots + {n}_{k}^{2} = g \) .
(2) We will see later (Part II,6.5) another property of the degrees \( {n}_{i} \) : they divide the order \( g \) of \( \mathrm{G} \) .
## EXERCISE
2.7. Show that each character of \( \mathrm{G} \) which is zero for all \( s \neq 1 \) is an integral multiple of the character \( {r}_{\mathrm{G}} \) of the regular representation.
## 2.5 Number of irreducible representations
Recall (cf. 2.1) that a function \( f \) on \( \mathrm{G} \) is called a class function if \( f\left( {{ts}{t}^{-1}}\right) = f\left( s\right) \) for all \( s, t \in \mathrm{G} \) .
Proposition 6. Let \( f \) be a class function on \( \mathbf{G} \), and let \( \rho : \mathbf{G} \rightarrow \mathbf{{GL}}\left( \mathbf{V}\right) \) be a linear representation of \( \mathrm{G} \) . Let \( {\rho }_{f} \) be the linear mapping of \( \mathrm{V} \) into itself defined by:
\[
{\rho }_{f} = \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) {\rho }_{t}
\]
If \( \mathrm{V} \) is irreducible of degree \( n \) and character \( \chi \), then \( {\rho }_{f} \) is a homothety of ratio \( \lambda \) given by:
\[
\lambda = \frac{1}{n}\mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) \chi \left( t\right) = \frac{g}{n}\left( {f \mid {\chi }^{ * }}\right) .
\]
Let us compute \( {\rho }_{s}^{-1}{\rho }_{f}{\rho }_{s} \) . We have:
\[
{\rho }_{s}^{-1}{\rho }_{f}{\rho }_{s} = \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) {\rho }_{s}^{-1}{\rho }_{t}{\rho }_{s} = \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) {\rho }_{{s}^{-1}{ts}}.
\]
Putting \( u = {s}^{-1}{ts} \), this becomes:
\[
{\rho }_{s}^{-1}{\rho }_{f}{\rho }_{s} = \mathop{\sum }\limits_{{u \in \mathrm{G}}}f\left( {{su}{s}^{-1}}\right) {\rho }_{u} = \mathop{\sum }\limits_{{u \in \mathrm{G}}}f\left( u\right) {\rho }_{u} = {\rho }_{f}.
\]
So we have \( {\rho }_{f}{\rho }_{s} = {\rho }_{s}{\rho }_{f} \) . By the second part of prop. 4, this shows that \( {\rho }_{f} \) is a homothety of some ratio \( \lambda \) . Its trace is \( {n\lambda } \) ; on the other hand, the trace of \( {\rho }_{f} = \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) {\rho }_{t} \) is \( \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) \operatorname{Tr}\left( {\rho }_{t}\right) = \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) \chi \left( t\right) \) . Hence \( \lambda = \left( {1/n}\right) \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) \chi \left( t\right) = \left( {g/n}\right) \left( {f \mid {\chi }^{ * }}\right) \) .
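Proposition 6 can be checked concretely. The sketch below uses an assumed model (not from the text): the degree-2 irreducible representation of \( {\mathfrak{S}}_{3} \) realized by the symmetries of the triangle, with \( c \) a rotation by \( {120}^{ \circ } \) and \( t \) a reflection. For an arbitrary class function \( f \), the operator \( {\rho }_{f} = \sum f\left( t\right) {\rho }_{t} \) comes out as the homothety of ratio \( \left( {1/n}\right) \sum f\left( t\right) \chi \left( t\right) \):

```python
# Numerical sketch of Proposition 6 for the 2-dimensional irreducible
# representation of S_3 (dihedral model: c = rotation by 120 degrees,
# t = reflection, with t c t^{-1} = c^2).

import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

th = 2 * math.pi / 3
C = [[math.cos(th), -math.sin(th)], [math.sin(th), math.cos(th)]]  # rho_c
T = [[1.0, 0.0], [0.0, -1.0]]                                      # rho_t
I = [[1.0, 0.0], [0.0, 1.0]]
C2 = mat_mul(C, C)
# the six group elements, grouped by conjugacy class
classes = [[I], [T, mat_mul(T, C), mat_mul(T, C2)], [C, C2]]
f_vals = [2.5, -1.0, 0.75]        # an arbitrary class function f

rho_f = [[0.0, 0.0], [0.0, 0.0]]
for fv, cls in zip(f_vals, classes):
    for M in cls:
        for i in range(2):
            for j in range(2):
                rho_f[i][j] += fv * M[i][j]

chi = [2.0, 0.0, -1.0]            # character values chi(1), chi(t), chi(c)
sizes = [1, 3, 2]                 # class sizes
lam = sum(fv * x * k for fv, x, k in zip(f_vals, chi, sizes)) / 2  # n = 2
print(rho_f, lam)  # rho_f equals lam * I (up to rounding)
```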
We introduce now the space \( \mathrm{H} \) of class functions on \( \mathrm{G} \) ; the irreducible characters \( {\chi }_{1},\ldots ,{\chi }_{h} \) belong to \( \mathrm{H} \) .
Theorem 6. The characters \( {\chi }_{1},\ldots ,{\chi }_{h} \) form an orthonormal basis of \( \mathrm{H} \) .
Theorem 3 shows that the \( {\chi }_{i} \) form an orthonormal system in H. It remains to prove that they generate \( \mathrm{H} \), and for this it is enough to show that every element of \( \mathrm{H} \) orthogonal to the \( {\chi }_{i}^{ * } \) is zero. Let \( f \) be such an element. For each representation \( \rho \) of \( \mathrm{G} \), put \( {\rho }_{f} = \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) {\rho }_{t} \) . Since \( f \) is orthogonal to the \( {\chi }_{i}^{ * } \), prop. 6 above shows that \( {\rho }_{f} \) is zero so long as \( \rho \) is irreducible; from the direct sum decomposition we conclude that \( {\rho }_{f} \) is always zero. Applying this to the regular representation \( \mathrm{R} \) (cf. 2.4) and computing the image of the basis vector \( {e}_{1} \) under \( {\rho }_{f} \), we have
\[
{\rho }_{f}{e}_{1} = \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) {\rho }_{t}{e}_{1} = \mathop{\sum }\limits_{{t \in \mathrm{G}}}f\left( t\right) {e}_{t}
\]
Since \( {\rho }_{f} \) is zero, we have \( {\rho }_{f}{e}_{1} = 0 \) and the above formula shows that \( f\left( t\right) = 0 \) for all \( t \in \mathrm{G} \) ; hence \( f = 0 \), and the proof is complete.
Recall that two elements \( t \) and \( {t}^{\prime } \) of \( \mathrm{G} \) are said to be conjugate if there exists \( s \in \mathrm{G} \) such that \( {t}^{\prime } = {st}{s}^{-1} \) ; this is an equivalence relation, which partitions \( \mathbf{G} \) into classes (also called conjugacy classes).
Theorem 7. The number of irreducible representations of \( \mathrm{G} \) (up to isomorphism) is equal to the number of classes of \( \mathbf{G} \) .
Let \( {\mathrm{C}}_{1},\ldots ,{\mathrm{C}}_{k} \) be the distinct classes of \( \mathrm{G} \) . To say that a function \( f \) on \( \mathrm{G} \) is a class function is equivalent to saying that it is constant on each of \( {\mathrm{C}}_{1},\ldots ,{\mathrm{C}}_{k} \) ; it is thus determined by its values \( {\lambda }_{i} \) on the \( {\mathrm{C}}_{i} \), and these can be chosen arbitrarily. Consequently, the dimension of the space \( \mathrm{H} \) of class functions is equal to \( k \) . On the other hand, this dimension is, by th. 6, equal to the number of irreducible representations of \( \mathrm{G} \) (up to isomorphism). The result follows.
Here is another consequence of th. 6:
Proposition 7. Let \( s \in \mathrm{G} \), and let \( c\left( s\right) \) be the number of elements in the conjugacy class of \( s \) .
(a) We have \( \mathop{\sum }\limits_{{i = 1}}^{{i = h}}{\chi }_{i}{\left( s\right) }^{ * }{\chi }_{i}\left( s\right) = g/c\left( s\right) \) .
(b) For \( t \in \mathrm{G} \) not conjugate to \( s \), we have \( \mathop{\sum }\limits_{{i = 1}}^{{i = h}}{\chi }_{i}{\left( s\right) }^{ * }{\chi }_{i}\left( t\right) = 0 \) .
(For \( s = 1 \), this yields cor. 2 to prop. 5.)
Let \( {f}_{s} \) be the function equal to 1 on the class of \( s \) and equal to 0 elsewhere. Since it is a class function, it can, by th. 6, be written
\[
{f}_{s} = \mathop{\sum }\limits_{{i = 1}}^{{i = h}}{\lambda }_{i}{\chi }_{i},\;\text{ with }{\lambda }_{i} = \left( {{f}_{s} \mid {\chi }_{i}}\right) = \frac{c\left( s\right) }{g}{\chi }_{i}{\left( s\right) }^{ * }.
\]
We have then, for each \( t \in \mathrm{G} \) ,
\[
{f}_{s}\left( t\right) = \frac{c\left( s\right) }{g}\mathop{\sum }\limits_{{i = 1}}^{{i = h}}{\chi }_{i}{\left( s\right) }^{ * }{\chi }_{i}\left( t\right)
\]
This gives (a) if \( t = s \), and (b) if \( t \) is not conjugate to \( s \) .
EXAMPLE. Take for \( \mathrm{G} \) the group of permutations of three letters. We have \( g = 6 \), and there are three classes: the element 1, the three transpositions, and the two cyclic permutations. Let \( t \) be a transposition and \( c \) a cyclic permutation. We have \( {t}^{2} = 1,{c}^{3} = 1,{tc} = {c}^{2}t \) ; whence there are just two characters of degree 1: the unit character \( {\chi }_{1} \) and the character \( {\chi }_{2} \) giving the signature of a permutation. Theorem 7 shows that there exists one other irreducible character \( \theta \) ; if \( n \) is its degree we must have \( 1 + 1 + {n}^{2} = 6 \) , hence \( n = 2 \) . The values of \( \theta \) can be deduced from the fact that \( {\chi }_{1} + {\chi }_{2} \) \( + {2\theta } \) is the character of the regular representation of \( \mathrm{G} \) (cf. prop. 5). We thus get the character table of \( \mathrm{G} \) :
| | 1 | \( t \) | \( c \) |
|---|---|---|---|
| \( {\chi }_{1} \) | 1 | 1 | 1 |
| \( {\chi }_{2} \) | 1 | \( -1 \) | 1 |
| \( \theta \) | 2 | 0 | \( -1 \) |
We obtain an irreducible representation with character \( \theta \) by having \( \mathrm{G} \) permute the coordinates of elements of \( {\mathbf{C}}^{3} \) satisfying the equation \( x + y \) \( + z = 0 \) (cf. ex. 2.6c)).
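The character table above can be verified mechanically. The following sketch checks the orthonormality of the rows (th. 3 and th. 6), the degree relation \( \sum {n}_{i}^{2} = g \) of cor. 2, and the column relations of prop. 7 for \( {\mathfrak{S}}_{3} \):

```python
# Numerical sketch: identities satisfied by the character table of S_3.

g = 6
sizes = {'1': 1, 't': 3, 'c': 2}          # conjugacy class sizes
table = {                                  # rows chi_1, chi_2, theta
    'chi1':  {'1': 1, 't': 1,  'c': 1},
    'chi2':  {'1': 1, 't': -1, 'c': 1},
    'theta': {'1': 2, 't': 0,  'c': -1},
}

def inner(a, b):
    # (a | b) = (1/g) sum_s a(s) b(s)*  (all values real here)
    return sum(sizes[k] * table[a][k] * table[b][k] for k in sizes) / g

# the rows form an orthonormal system (th. 6)
for a in table:
    for b in table:
        assert inner(a, b) == (1 if a == b else 0)

# the squared degrees sum to the group order (cor. 2a)
assert sum(table[r]['1'] ** 2 for r in table) == g

# column relations (prop. 7a): sum_i |chi_i(s)|^2 = g / c(s)
for k in sizes:
    assert sum(table[r][k] ** 2 for r in table) == g // sizes[k]
print("all character-table identities hold")
```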
## 2.6 Canonical decomposition of a representation
Let \( \rho : \mathrm{G} \rightarrow \mathrm{{GL}}\left( \mathrm{V}\right) \) be a linear representation of \( \mathrm{G} \) . We are going to define a direct sum decomposition of \( \mathrm{V} \) which is "coarser" than the decomposition into irreducible representations, but which has the advantage of being unique. It is obtained as follows:
Let \( {\chi }_{1},\ldots ,{\chi }_{h} \) be the distinct characters of the irreducible representations \( {\mathrm{W}}_{1},\ldots ,{\mathrm{W}}_{h} \) of \( \mathrm{G} \) and \( {n}_{1},\ldots ,{n}_{h} \) their degrees. Let \( \mathrm{V} = {\mathrm{U}}_{1} \oplus \cdots \) \( \oplus {\mathrm{U}}_{m} \) be a decomposition of \( \mathrm{V} \) into a direct sum of irreducible representations. Fo
Theorem 1.8 Let \( T \) be a compact operator from \( E \) to \( E \) .
1. If \( E \) is infinite-dimensional, 0 is a spectral value of \( T \) .
2. Every nonzero spectral value of \( T \) is an eigenvalue of \( T \) and has a finite-dimensional associated eigenspace.
3. The spectrum of \( T \) is countable. If it is infinite, its nonzero elements can be arranged in a sequence \( {\left( {\lambda }_{n}\right) }_{n \in \mathbb{N}} \) such that, for all \( n \in \mathbb{N} \) ,
\[
\left| {\lambda }_{n + 1}\right| \leq \left| {\lambda }_{n}\right| \;\text{ and }\;\mathop{\lim }\limits_{{n \rightarrow + \infty }}{\lambda }_{n} = 0.
\]
Proof
1. Suppose that 0 is not a spectral value of \( T \) . Then \( I = T{T}^{-1} \) is a compact operator by Proposition 1.2. By the Riesz Theorem (page 49), this implies that \( E \) is finite-dimensional.
2. Take \( \lambda \in {\mathbb{K}}^{ * } \) . Then \( \lambda \) is an eigenvalue of \( T \) if and only if \( I - T/\lambda \) is not injective, and \( \ker \left( {{\lambda I} - T}\right) = \ker \left( {I - T/\lambda }\right) \) . On the other hand, \( \lambda \) is a spectral value of \( T \) if and only if \( I - T/\lambda \) is not invertible in \( L\left( E\right) \) . Thus it suffices to apply Proposition 1.6 to prove assertion 2.
3. For assertion 3, it is enough to show that, for every \( \varepsilon > 0 \), there is only a finite number (perhaps 0) of spectral values \( \lambda \) of \( T \) such that \( \left| \lambda \right| \geq \varepsilon \) . Suppose, on the contrary, that, for a certain \( \varepsilon > 0 \), there exists a sequence \( {\left( {\lambda }_{n}\right) }_{n \in \mathbb{N}} \) of pairwise distinct spectral values of \( T \) such that \( \left| {\lambda }_{n}\right| \geq \varepsilon \) for every \( n \in \mathbb{N} \) . By part 2, all the \( {\lambda }_{n} \) are eigenvalues of \( T \) . Thus there exists a sequence \( \left( {e}_{n}\right) \) of elements of \( E \) of norm 1 such that \( T{e}_{n} = {\lambda }_{n}{e}_{n} \) for every \( n \in \mathbb{N} \) . Since the eigenvalues \( {\lambda }_{n} \) are pairwise distinct, it is easy to see (and it is a classical result) that the family \( {\left\{ {e}_{n}\right\} }_{n \in \mathbb{N}} \) is linearly independent. For each \( n \in \mathbb{N} \), let \( {E}_{n} \) be the span of
the first \( n + 1 \) vectors \( {e}_{0},\ldots ,{e}_{n} \) . The sequence \( {\left( {E}_{n}\right) }_{n \in \mathbb{N}} \) is then a strictly increasing sequence of finite-dimensional spaces. By Lemma 1.7, there exists a sequence \( {\left( {u}_{n}\right) }_{n \in \mathbb{N}} \) of vectors of norm 1 such that, for every integer \( n \in \mathbb{N} \) ,
\[
{u}_{n} \in {E}_{n + 1}\;\text{ and }\;d\left( {{u}_{n},{E}_{n}}\right) \geq \frac{1}{2}
\]
(in fact, since \( {E}_{n} \) has finite dimension, we could replace \( \frac{1}{2} \) by 1 here). Define \( {v}_{n} = {\lambda }_{n + 1}^{-1}{u}_{n} \) . The sequence \( \left( {v}_{n}\right) \) is bounded by \( 1/\varepsilon \) . Moreover, if \( n > m \) ,
\[
T{v}_{n} - T{v}_{m} = {u}_{n} - {v}_{n, m}\;\text{ with }\;{v}_{n, m} = T{v}_{m} + \frac{1}{{\lambda }_{n + 1}}\left( {{\lambda }_{n + 1}I - T}\right) {u}_{n}.
\]
But \( T{v}_{m} \in {E}_{m + 1} \subset {E}_{n} \) and \( \left( {{\lambda }_{n + 1}I - T}\right) \left( {E}_{n + 1}\right) \subset {E}_{n} \) . Thus \( {v}_{n, m} \in {E}_{n} \) and \( \begin{Vmatrix}{T{v}_{n} - T{v}_{m}}\end{Vmatrix} \geq \frac{1}{2} \), contradicting the compactness of \( T \) (the sequence \( {\left( {v}_{n}\right) }_{n \in \mathbb{N}} \) is bounded and its image under \( T \) has no Cauchy subsequence, hence no convergent subsequence).
Example. We now discuss a compact operator whose spectrum is countably infinite, and we determine this spectrum explicitly. Consider the operator \( T \) on the space \( C\left( \left\lbrack {0,1}\right\rbrack \right) \) (with the uniform norm) defined by
\[
{Tf}\left( x\right) = {\int }_{0}^{1 - x}f\left( t\right) {dt}\;\text{ for all }f \in C\left( \left\lbrack {0,1}\right\rbrack \right) .
\]
We know from Example 3 on page 214 that \( T \) is a compact operator. By Theorem 1.8, zero is a spectral value of \( T \), but clearly it is not an eigenvalue. To determine the spectrum explicitly, it is enough to find the eigenvalues. Let \( \lambda \) be an eigenvalue of \( T \) and let \( g \in C\left( \left\lbrack {0,1}\right\rbrack \right) \) be a corresponding nonzero eigenvector, so that
\[
{\lambda g}\left( x\right) = {\int }_{0}^{1 - x}g\left( t\right) {dt}\;\text{ for all }x \in \left\lbrack {0,1}\right\rbrack .
\]
Since \( \lambda \) is nonzero, \( g \) is necessarily of class \( {C}^{1} \) in \( \left\lbrack {0,1}\right\rbrack \) ; moreover \( g\left( 1\right) = 0 \) and
\[
\lambda {g}^{\prime }\left( x\right) = - g\left( {1 - x}\right) \;\text{ for all }x \in \left\lbrack {0,1}\right\rbrack .
\]
It follows that \( g \) is of class \( {C}^{2} \) in \( \left\lbrack {0,1}\right\rbrack \) and that
\[
g \neq 0,\;g\left( 1\right) = 0,\;{g}^{\prime }\left( 0\right) = 0,\;\lambda {g}^{\prime }\left( 1\right) = - g\left( 0\right) ,
\]
\( \left( *\right) \)
\[
\lambda {g}^{\prime \prime }\left( x\right) = - g\left( x\right) /\lambda \;\text{ for all }x \in \left\lbrack {0,1}\right\rbrack .
\]
\( \left( {* * }\right) \)
The solutions of the differential equation \( \left( {* * }\right) \) satisfying \( {g}^{\prime }\left( 0\right) = 0 \) are the functions \( g\left( x\right) = A\cos \left( {x/\lambda }\right) \) . In order for such a function to satisfy conditions \( \left( *\right) \), it is necessary that \( \cos \left( {1/\lambda }\right) = 0 \) and \( \sin \left( {1/\lambda }\right) = 1 \), that is, \( 1/\lambda = \pi /2 + {2k\pi } \) with \( k \in \mathbb{Z} \), or equivalently
\[
\lambda = \frac{1}{\pi /2 + {2k\pi }},\;\text{ with }k \in \mathbb{Z}.
\]
Conversely, if \( \lambda = 1/\left( {\pi /2 + {2k\pi }}\right) \) with \( k \in \mathbb{Z} \), one easily checks that the function \( g \) defined by \( g\left( x\right) = \cos \left( {x/\lambda }\right) \) is an eigenvector of \( T \) associated with \( \lambda \) . Thus
\[
\sigma \left( T\right) = \{ 0\} \cup \left\{ {\frac{1}{\pi /2 + {2k\pi }} : k \in \mathbb{Z}}\right\} .
\]
We also see that all the eigenspaces of \( T \) have dimension 1 and that the spectral radius of \( T \) is \( 2/\pi \) .
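The spectrum computed above can be checked numerically: discretizing \( T \) with the midpoint rule yields a matrix whose eigenvalues of largest modulus approximate the values \( 1/\left( {\pi /2 + {2k\pi }}\right) \). This is only a sanity-check sketch; the grid size \( N \) and tolerance are arbitrary choices.

```python
import numpy as np

N = 500
x = (np.arange(N) + 0.5) / N          # midpoint grid on [0, 1]
w = 1.0 / N                           # quadrature weight

# (Tf)(x_i) = integral of f over [0, 1 - x_i]: keep the nodes t_j < 1 - x_i
M = w * (x[None, :] < 1.0 - x[:, None])

rho = np.max(np.abs(np.linalg.eigvals(M)))   # numerical spectral radius
print(rho, 2 / np.pi)                        # should agree up to O(1/N)
```

The largest eigenvalue in modulus approaches the spectral radius \( 2/\pi \approx {0.6366} \) as the grid is refined.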
## Exercises
1. Let \( E \) be an infinite-dimensional Banach space and \( F \) any normed vector space. Let \( T \) be an operator from \( E \) to \( F \) for which there exists a constant \( \alpha > 0 \) such that \( \parallel {Tx}\parallel \geq \alpha \parallel x\parallel \) for every \( x \in E \) . Show that \( T \) is not compact.
2. Let \( {\left( {\lambda }_{n}\right) }_{n \in \mathbb{N}} \) be a sequence of complex numbers and let \( T \) be the operator on \( {\ell }^{p} \) (where \( p \in \lbrack 1, + \infty ) \) ) defined by
\[
{Tf}\left( n\right) = {\lambda }_{n}f\left( n\right) \;\text{ for all }f \in {\ell }^{p}\text{ and }n \in \mathbb{N}.
\]
We know from Exercise 4 on page 195 that \( T \) is continuous if and only if the sequence \( {\left( {\lambda }_{n}\right) }_{n \in \mathbb{N}} \) is bounded.
a. Show that \( T \) is compact if and only if \( \mathop{\lim }\limits_{{n \rightarrow + \infty }}{\lambda }_{n} = 0 \) .
Hint. You might use Exercise 10 on page 183, for example.
b. Suppose \( p = 2 \) . Show that \( T \) is a Hilbert-Schmidt operator if and only if
\[
\mathop{\sum }\limits_{{n \in \mathbb{N}}}{\left| {\lambda }_{n}\right| }^{2} < + \infty
\]
c. Let \( S \) be the right shift in \( {\ell }^{p} \), where \( p \in \lbrack 1, + \infty ) \) (see Exercise 6e on page 196). Is \( S \) a compact operator?
d. Suppose that the sequence \( {\left( {\lambda }_{n}\right) }_{n \in \mathbb{N}} \) tends to 0 . Determine the eigenvalues and the spectral values of \( {TS} \) .
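For part (a), the "if" direction rests on the fact that when \( {\lambda }_{n} \rightarrow 0 \) the diagonal operator is a norm limit of the finite-rank truncations keeping only the first \( N \) diagonal entries, with \( \parallel T - {T}_{N}\parallel = \mathop{\sup }\limits_{{n \geq N}}\left| {\lambda }_{n}\right| \). A hedged numeric sketch (the sequence \( {\lambda }_{n} = 1/n \) is just an example):

```python
import numpy as np

lam = 1.0 / np.arange(1, 5001)        # lambda_n = 1/n, tends to 0

def tail_norm(N):
    # norm of T - T_N, where T_N keeps the first N diagonal entries;
    # the norm of a diagonal operator on l^p is the sup of |lambda_n|
    return np.max(np.abs(lam[N:]))

norms = [tail_norm(N) for N in (10, 100, 1000)]
print(norms)  # decreasing to 0: T is a norm limit of finite-rank operators
```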
3. Let \( X \) be a compact metric space and suppose \( \varphi \in C\left( X\right) \) . Show that the operator \( T \) on \( C\left( X\right) \) defined by \( {Tf} = {\varphi f} \) is compact if and only if \( \varphi \) vanishes on every cluster point of \( X \) .
Hint. Suppose that \( T \) is compact and that \( \left| {\varphi \left( x\right) }\right| > 0 \) at a point \( x \in X \) . Then there exists a closed neighborhood \( Y \) of \( x \) on which \( \left| \varphi \right| > 0 \) . Show that the restriction of \( T \) to \( C\left( Y\right) \) is an invertible compact operator in \( L\left( {C\left( Y\right) }\right) \) (to show compactness you will probably need Tietze’s Extension Theorem, Exercise 7a on page 40). Deduce that \( Y \) is finite. For the converse, use Ascoli's Theorem, page 44.
4. Let \( P \) be a polynomial not vanishing at 0 and let \( T \) be a linear operator on an infinite-dimensional normed space \( E \) . Assume \( P\left( T\right) = 0 \) . Show that \( T \) is not compact.
5. Let \( E \) be a Hilbert space and suppose \( T \in L\left( E\right) \) . Show that \( T \) is a compact operator if and only if \( {T}^{ * } \) is one.
Hint. Let \( \left( {x}_{n}\right) \) be a bounded sequence in \( E \) . Put \( M = \mathop{\sup }\limits_{n}\begin{Vmatrix}{x}_{n}\end{Vmatrix} \) and define \( {y}_{n} = {T}^{ * }{x}_{n} \) for each integer \( n \) . Show that, for every \( n, m \in \mathbb{N} \) ,
\[
{\begin{Vmatrix}{y}_{n} - {y}_{m}\end{Vmatrix}}^{2} \leq {2M}\begin{Vmatrix}{T{y}_{n} - T{y}_{m}}\end{Vmatrix}.
\]
Deduce that \( {T}^{ * } \) is compact.
6. a. Let \( T \) be a continuous operator on a Hilbert space \( E \) . Show that \( T \) is compact if and only if the image under \( T \) of every sequence in \( E \) that converges weakly to 0 is a sequence that converges (strongly) to 0 .
Hint. For the "if" part, use Exercise 12 on page 121 and Proposition 3.8 on page 116. For the converse, use Theorem 3.7 on page 115.
b. Show that this result remains true if \( E = {L}^{p}\left( m\right) \), where \( m \) is a
Theorem 1.8 Let \( T \) be a compact operator from \( E \) to \( E \) .
1. If \( E \) is infinite-dimensional, 0 is a spectral value of \( T \) .
2. Every nonzero spectral value of \( T \) is an eigenvalue of \( T \) and has a finite-dimensional associated eigenspace.
3. The spectrum of \( T \) is countable. If it is infinite, its nonzero elements can be arranged in a sequence \( {\left( {\lambda }_{n}\right) }_{n \in \mathbb{N}} \) such that, for all \( n \in \mathbb{N} \) ,
\[
\left| {\lambda }_{n + 1}\right| \leq \left| {\lambda }_{n}\right| \;\text{ and }\;\mathop{\lim }\limits_{{n \rightarrow + \infty }}{\lambda }_{n} = 0.
\]
1. Suppose that 0 is not a spectral value of \( T \) . Then \( I = T{T}^{-1} \) is a compact operator by Proposition 1.2. By the Riesz Theorem (page 49), this implies that \( E \) is finite-dimensional. This contradicts the assumption that \( E \) is infinite-dimensional, hence 0 must be a spectral value of \( T \).
2. Take \( \lambda \in {\mathbb{K}}^{ * } \) . Then \( \lambda \) is an eigenvalue of \( T \) if and only if \( I - T/\lambda \) is not injective, and \( \ker \left( {{\lambda I} - T}\right) = \ker \left( {I - T/\lambda }\right) \) . On the other hand, \( \lambda \) is a spectral value of \( T \) if and only if \( I - T/\lambda \) is not invertible in \( L\left( E\right) \) . Thus it suffices to apply Proposition 1.6 to prove assertion 2.
3. For assertion 3, it is enough to show that, for every \( \varepsilon > 0 \), there is only a finite number (perhaps 0) of spectral values \( \lambda \) of \( T \) such that \( \left| \lambda \right| \geq \varepsilon \) . Suppose, on the contrary, that, for a certain \( \varepsilon > 0 \), there exists a sequence \( {\left( {\lambda }_{n}\right) }_{n \in \mathbb{N}} \) of pairwise distinct spectral values of \( T \) such that \( \left| {\lambda }_{n}\right| \geq \varepsilon \) for every \( n \in \mathbb{N} \) . By part 2, all the \( {\lambda }_{n} \) are eigenvalues of \( T \) . Thus there exists a sequence \( \left( {e}_{n}\right) \) of elements of \( E \) of norm 1 such that \( T{e}_{n} = {\lambda }_{n}{e}_{n} \) for every \( n \in \mathbb{N} \) . Since the eigenvalues \( {\lambda }_{n} \) are pairwise distinct, it is easy to see (and it is a classical result) that the family \( {\left\{ {e}_{n}\right\} }_{n \in \mathbb{N}} \) is linearly independent. For each \( n \in \mathbb{N} \), let \( {E}_{n} \) be the span of the first \( n + 1 \) vectors \( {e}_{0},\ldots ,{e}_{n} \).
Proposition 9.19. If \( \Gamma \leq {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \) is a lattice, the hyperbolic measure defined by the volume form \( \mathrm{d}m = \frac{1}{{y}^{2}}\mathrm{\;d}x\mathrm{\;d}y\mathrm{\;d}\theta \) in Lemma 9.16 induces a finite \( {\mathrm{{PSL}}}_{2}\left( \mathbb{R}\right) \) -invariant measure \( {m}_{X} \) on \( X = \Gamma \smallsetminus {\mathrm{{PSL}}}_{2}\left( \mathbb{R}\right) \) . In fact if
\[
\pi : {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \rightarrow X
\]
is the canonical quotient map \( \pi \left( g\right) = {\Gamma g} \) for \( g \in {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \) and \( F \) is a finite volume fundamental domain, then
\[
{m}_{X}\left( B\right) = m\left( {F \cap {\pi }^{-1}B}\right)
\]
for \( B \subseteq X \) measurable defines the \( {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \) -invariant measure on \( X \) .
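As a concrete instance of the finite-volume hypothesis (a standard computation, not part of the proposition): for \( \Gamma = {\operatorname{PSL}}_{2}\left( \mathbb{Z}\right) \) the usual fundamental domain in the upper half-plane is \( \left\{ {x + {iy} : \left| x\right| \leq 1/2,{x}^{2} + {y}^{2} \geq 1}\right\} \), and its area with respect to \( \frac{1}{{y}^{2}}\mathrm{\;d}x\mathrm{\;d}y \) (the \( \mathrm{d}\theta \) factor only contributes a constant) is \( {\int }_{-1/2}^{1/2}\frac{\mathrm{d}x}{\sqrt{1 - {x}^{2}}} = \pi /3 \) . A quick numeric check of this integral:

```python
from math import pi, sqrt

# integrate 1/sqrt(1 - x^2) over [-1/2, 1/2] with the midpoint rule; this is
# the result of the inner integral of dy/y^2 from sqrt(1 - x^2) to infinity
N = 200_000
h = 1.0 / N
area = sum(h / sqrt(1.0 - (-0.5 + (k + 0.5) * h) ** 2) for k in range(N))

print(area, pi / 3)  # agree to high accuracy
```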
## 9.4.3 Lattices in Closed Linear Groups
Rather than prove Proposition 9.19 in isolation, we give some general comments which will lead to a natural generalization (Proposition 9.20). Notice that the measure \( m \) on \( {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \) is invariant under the left action of \( {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \) on \( {\mathrm{{PSL}}}_{2}\left( \mathbb{R}\right) \) by Lemma 9.16, that is \( m \) is a Haar measure on \( {\mathrm{{PSL}}}_{2}\left( \mathbb{R}\right) \) .
Recall from p. 248 that the left Haar measure \( {m}_{G} \) of a locally compact metric group \( G \) is unique up to scalar multiples (see Sect. C.2), and note that right multiplication by elements of \( G \) sends \( {m}_{G} \) to another Haar measure, since
\[
{\left( {R}_{g}\right) }_{ * }{m}_{G}\left( {hB}\right) = {m}_{G}\left( {hBg}\right) = {m}_{G}\left( {Bg}\right) = {\left( {R}_{g}\right) }_{ * }{m}_{G}\left( B\right)
\]
for all measurable sets \( B \subseteq G \) . Since Haar measures are unique up to scalars, this defines a continuous modular homomorphism \( \operatorname{mod} : G \rightarrow {\mathbb{R}}_{ > 0} \) into the multiplicative group of positive reals (such a homomorphism is also called a character) by
\[
{\left( {R}_{g}\right) }_{ * }\left( {m}_{G}\right) = {\;\operatorname{mod}\;\left( g\right) }{m}_{G}
\]
A group \( G \) is unimodular if \( {\;\operatorname{mod}\;\left( G\right) } = \{ 1\} \), that is if \( {m}_{G} \) is both a left and a right Haar measure on \( G \) . Part of the proof of the following more general statement consists of showing that the group \( G \) appearing is unimodular.
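To see a non-unimodular example: for the affine group \( \{ x \mapsto {ax} + b : a > 0\} \) (isomorphic, after a change of coordinates, to the group \( T \) of Exercise 9.4.2), a left Haar measure is \( {a}^{-2}\mathrm{\;d}a\mathrm{\;d}b \), and right translation by \( \left( {{a}_{0},{b}_{0}}\right) \) scales it by \( \operatorname{mod}\left( {{a}_{0},{b}_{0}}\right) = 1/{a}_{0} \) . The following sketch checks this on a box; the Haar density, the box, and the group element are assumptions stated here, not taken from the text.

```python
import numpy as np

# Affine group {x -> a*x + b : a > 0}; group law (a1,b1)(a2,b2) = (a1*a2, a1*b2 + b1).
# Assumed left Haar density: a^{-2} da db.  Check: right translation by g0 = (a0, b0)
# scales the measure of the box [1,2] x [0,1] by mod(g0) = 1/a0.
N = 200_000
a = np.linspace(1.0, 2.0, N, endpoint=False) + 0.5 / N   # midpoints of [1, 2]
da = 1.0 / N

# the density a^{-2} does not depend on b, and the box has b-length 1
m_B = np.sum(a ** -2.0) * da                  # = 1/2

a0 = 2.0                                      # arbitrary a0 (b0 plays no role)
m_Bg = np.sum((a * a0) ** -2.0) * a0 * da     # da' db' = a0 da db on the image box

print(m_B, m_Bg, m_Bg / m_B)                  # ratio is mod(g0) = 1/a0 = 0.5
```

Since \( \operatorname{mod} \) is not identically 1, this group is not unimodular, which is the mechanism behind Exercise 9.4.2.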
Proposition 9.20. Let \( G \) be a closed linear group, and let \( \Gamma \leq G \) be a lattice in the sense that \( \Gamma \) is discrete and that there is a fundamental domain \( F \) for \( X = \Gamma \smallsetminus G \) with finite left Haar measure. Then any fundamental domain has the same measure as \( F, G \) is unimodular, and the Haar measure \( {m}_{G} \) induces a finite measure \( {m}_{X} \) on \( X \) via
\[
{m}_{X}\left( B\right) = {m}_{G}\left( {{\pi }^{-1}\left( B\right) \cap F}\right)
\]
for all measurable \( B \subseteq X \) . Moreover, the right \( G \) -action \( {R}_{g}\left( x\right) = x{g}^{-1} \) for \( x \in X \) and \( g \in G \) leaves the measure \( {m}_{X} \) invariant.
Despite the fact that in general \( X \) is not a group, we will nonetheless refer to the measure \( {m}_{X} \) on \( X \) as the Haar measure on \( X \) .
Proof of Proposition 9.20. We first show that any two fundamental domains \( F,{F}^{\prime } \subseteq G \) for \( \Gamma \smallsetminus G \) have the same volume. In fact we claim that if \( B,{B}^{\prime } \subseteq G \) are measurable sets with the property that \( {\left. \pi \right| }_{B} \) and \( {\left. \pi \right| }_{{B}^{\prime }} \) are injective, and \( \pi \left( B\right) = \pi \left( {B}^{\prime }\right) \), then \( {m}_{G}\left( B\right) = {m}_{G}\left( {B}^{\prime }\right) \) .
By assumption, for every \( g \in B \) there is a unique \( \gamma \in \Gamma \) with \( g \in \gamma {B}^{\prime } \), so
\[
B = \mathop{\bigsqcup }\limits_{{\gamma \in \Gamma }}B \cap \gamma {B}^{\prime }
\]
and similarly
\[
{B}^{\prime } = \mathop{\bigsqcup }\limits_{{{\gamma }^{\prime } \in \Gamma }}{B}^{\prime } \cap {\gamma }^{\prime }B
\]
However, these two decompositions are equivalent in the sense that one can be used to derive the other: given \( \gamma \in \Gamma \) and a chosen set \( B \cap \gamma {B}^{\prime } \) we get
\[
{\gamma }^{-1}\left( {B \cap \gamma {B}^{\prime }}\right) = {B}^{\prime } \cap {\gamma }^{-1}B.
\]
For the left Haar measure \( {m}_{G} \) we therefore have
\[
{m}_{G}\left( B\right) = \mathop{\sum }\limits_{{\gamma \in \Gamma }}{m}_{G}\left( {B \cap \gamma {B}^{\prime }}\right) = \mathop{\sum }\limits_{{\gamma \in \Gamma }}{m}_{G}\left( {{B}^{\prime } \cap {\gamma }^{-1}B}\right) = {m}_{G}\left( {B}^{\prime }\right) .
\]
This proves the claim, and in particular \( {m}_{G}\left( F\right) = {m}_{G}\left( {F}^{\prime }\right) \) for any two fundamental domains \( F \) and \( {F}^{\prime } \) .
Now notice that for any \( g \in G \) the set \( {F}^{\prime } = {Fg} \) is another fundamental domain whose measure satisfies
\[
{m}_{G}\left( F\right) = {m}_{G}\left( {F}^{\prime }\right) = \operatorname{mod}\left( g\right) {m}_{G}\left( F\right) .
\]
Since our assumption is that \( {m}_{G}\left( F\right) < \infty \), and \( {m}_{G}\left( F\right) > 0 \) since \( \Gamma \) is discrete, we deduce that \( {\;\operatorname{mod}\;\left( G\right) } = \{ 1\} \) and that \( G \) is unimodular.
Now let \( B \subseteq X \) be a measurable set. We define \( {m}_{X}\left( B\right) = {m}_{G}\left( {{\pi }^{-1}\left( B\right) \cap F}\right) \) and note that this definition is independent of the choice of fundamental domain \( F \) by the claim above. Now write \( C = {\pi }^{-1}\left( B\right) \cap F \) and note that
\[
{Cg} = {\pi }^{-1}\left( {Bg}\right) \cap {F}^{\prime } \subseteq {F}^{\prime } = {Fg}.
\]
Then by the above \( {m}_{G}\left( C\right) = {m}_{G}\left( {Cg}\right) \) and
\[
{m}_{X}\left( {Bg}\right) = {m}_{G}\left( {Cg}\right) = {m}_{G}\left( C\right) = {m}_{X}\left( B\right)
\]
so \( {m}_{X}\left( B\right) = {m}_{X}\left( {{R}_{g}^{-1}\left( B\right) }\right) \) as claimed.
## Exercises for Sect. 9.4
Exercise 9.4.1. Let \( \Gamma \subseteq {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \) be a uniform lattice and fix a point \( x \) in \( X = \Gamma \smallsetminus {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \) . Show that \( x{U}^{ - } \) consists precisely of all points \( y \in X \) for which
\[
\mathrm{d}\left( {{R}_{{a}_{t}}\left( x\right) ,{R}_{{a}_{t}}\left( y\right) }\right) \rightarrow 0
\]
as \( t \rightarrow \infty \) .
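The mechanism behind Exercise 9.4.1 is the matrix identity \( {a}_{t}{u}^{ - }\left( s\right) {a}_{t}^{-1} = {u}^{ - }\left( {s{\mathrm{e}}^{-t}}\right) \rightarrow I \) as \( t \rightarrow \infty \) . A numeric sketch, assuming the conventions \( {a}_{t} = \operatorname{diag}\left( {{\mathrm{e}}^{t/2},{\mathrm{e}}^{-t/2}}\right) \) and \( {u}^{ - }\left( s\right) \) lower triangular unipotent:

```python
import numpy as np

def a(t):
    return np.diag([np.exp(t / 2), np.exp(-t / 2)])

def u_minus(s):
    return np.array([[1.0, 0.0], [s, 1.0]])

s = 3.0
for t in (0.0, 5.0, 10.0):
    conj = a(t) @ u_minus(s) @ np.linalg.inv(a(t))   # equals u_minus(s * e^{-t})
    print(t, np.linalg.norm(conj - np.eye(2)))       # tends to 0 as t grows
```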
Exercise 9.4.2. Show that the closed linear group
\[
T = \left\{ {\left. \left( \begin{matrix} {\mathrm{e}}^{t/2} & s \\ & {\mathrm{e}}^{-t/2} \end{matrix}\right) \right| \;s, t \in \mathbb{R}}\right\}
\]
does not contain a lattice. That is, \( T \) does not contain a discrete subgroup with a fundamental domain of finite left Haar measure.
Exercise 9.4.3. Show that \( \left\lbrack {{\mathrm{{SL}}}_{d}\left( \mathbb{R}\right) ,{\mathrm{{SL}}}_{d}\left( \mathbb{R}\right) }\right\rbrack = {\mathrm{{SL}}}_{d}\left( \mathbb{R}\right) \) where
\[
\left\lbrack {g, h}\right\rbrack = {g}^{-1}{h}^{-1}{gh}
\]
for \( g, h \in G \) denotes the commutator of \( g \) and \( h \) in a group \( G \), and \( \left\lbrack {G, G}\right\rbrack \) denotes the commutator subgroup generated by all the commutators in \( G \) . Deduce that \( {\mathrm{{SL}}}_{d}\left( \mathbb{R}\right) \) is unimodular for all \( d \geq 2 \) .
Exercise 9.4.4. Prove that \( {\operatorname{PSL}}_{2}\left( \mathbb{Z}\right) \) is a free product of an element of order 2 and an element of order 3 .
Exercise 9.4.5. Extend the arguments of Proposition 9.18 to show that the subgroup \( {\operatorname{PSL}}_{2}\left( \mathbb{Z}\right) \) is a non-uniform lattice in \( {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \) .
## 9.5 Hopf's Argument for Ergodicity of the Geodesic Flow
The fundamental result about the geodesic flow on a quotient by a lattice, proved in greater generality than we need by Hopf [156] (see also his later paper [157]), is that it is ergodic.
Theorem 9.21. Let \( \Gamma \leq {\mathrm{{PSL}}}_{2}\left( \mathbb{R}\right) \) be a lattice. Then any non-trivial element of the geodesic flow (that is, the map \( {R}_{{a}_{t}} \) for some \( t \neq 0 \) ) is an ergodic transformation on \( X = \Gamma \smallsetminus {\operatorname{PSL}}_{2}\left( \mathbb{R}\right) \) with respect to \( {m}_{X} \) .
In the proof we will use the following basic idea: If a uniformly continuous function \( f : X \rightarrow \mathbb{R} \) is invariant under \( {R}_{{a}_{t}} \), then it is also invariant under \( {U}^{ - } \) and \( {U}^{ + } \), and is therefore constant.
To see this, we will consider the points \( x, y = x{u}^{ - } \in X \) and will show that \( {R}_{{a}_{t}}^{n}\left( y\right) = {R}_{{a}_{t}}^{n}\left( x\right) {a}_{t}^{n}{u}^{ - }{a}_{t}^{-n} \) and \( {R}_{{a}_{t}}^{n}\left( x\right) \) are very close together for large enough \( n \), and so by invariance and uniform continuity of \( f \) ,
\[
f\left( x\right) = f\left( {{R}_{{a}_{t}}^{n}\left( x\right) }\right) \approx f\left( {{R}_{{a}_{t}}^{n}\left( y\right) }\right) = f\left( y\right)
\]
(9.17)
are close together for large \( n \); since the two outer terms do not depend on \( n \), this shows that \( f\left( x\right) = f\left( y\right) \) as claimed. Essentially the same idea will be used in the proof for a measurable invariant function, which is what is needed to prove ergodicity; in that setting, however, we cannot simply take \( n \) large and will need to be more careful in the choice of \( n \) .
For the proof we will make use of Proposition 8.6, which gives a kind of "ergo
Corollary 14.23 Suppose that \( S \) is a lower semibounded self-adjoint extension of the densely defined lower semibounded symmetric operator \( T \) on \( \mathcal{H} \) . Let \( \lambda \in \mathbb{R} \) , \( \lambda < {m}_{T} \), and \( \lambda \leq {m}_{S} \) . Then \( S \) is equal to the Friedrichs extension \( {T}_{F} \) of \( T \) if and only if \( \mathcal{D}\left\lbrack S\right\rbrack \cap \mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) = \{ 0\} \) .
Proof Since \( \lambda < {m}_{T} = {m}_{{T}_{F}} \), we have \( \lambda \in \rho \left( {T}_{F}\right) \), so Example 14.6, with \( A = {T}_{F},\mu = \lambda \), yields a boundary triplet \( \left( {\mathcal{K},{\Gamma }_{0},{\Gamma }_{1}}\right) \) for \( {T}^{ * } \) such that \( {T}_{0} = {T}_{F} \) . By Propositions 14.7(v) and 14.21, there is a self-adjoint relation \( \mathcal{B} \) on \( \mathcal{K} \) such that \( S = {T}_{\mathcal{B}} \) and \( \mathcal{B} - M\left( \lambda \right) \geq 0 \) . Thus, all assumptions of Theorem 14.22 are fulfilled. Recall that \( \mathcal{K} = \mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) \) . Therefore, by (14.56), \( \mathcal{D}\left\lbrack {T}_{\mathcal{B}}\right\rbrack \cap \mathcal{N}\left( {{T}^{ * } - {\lambda I}}\right) = \{ 0\} \) if and only if \( \gamma \left( \lambda \right) \mathcal{D}\left\lbrack {\mathcal{B} - M\left( \lambda \right) }\right\rbrack = \{ 0\} \) , that is, \( \mathcal{D}\left\lbrack {\mathcal{B} - M\left( \lambda \right) }\right\rbrack = \{ 0\} \) . By (14.56) and (14.57) the latter is equivalent to \( {\mathfrak{t}}_{{T}_{\mathcal{B}}} = {\mathfrak{t}}_{{T}_{F}} \) and so to \( {T}_{\mathcal{B}} = {T}_{F} \) . (Note that we even have \( M\left( \lambda \right) = 0 \) and \( \gamma \left( \lambda \right) = {I}_{\mathcal{H}} \upharpoonright \mathcal{K} \) by Example 14.12.)
In Theorem 14.22 we assumed that the self-adjoint operator \( {T}_{0} \) from Corollary 14.8 is equal to the Friedrichs extension \( {T}_{F} \) of \( T \) . For the boundary triplet from Example 14.6, we now characterize this property in terms of the Weyl function.
Example 14.13 (Example 14.12 continued) Suppose that the self-adjoint operator \( A \) in Example 14.6 is lower semibounded and \( \mu < {m}_{A} \) . Recall that \( A \) is the operator \( {T}_{0} \) and \( \mathcal{K} = \mathcal{N}\left( {{T}^{ * } - {\mu I}}\right) \) . Since \( T \subseteq {T}_{0} = A \), the symmetric operator \( T \) is then lower semibounded, and \( \mu < {m}_{A} \leq {m}_{T} \) .
Statement The operator \( A = {T}_{0} \) is equal to the Friedrichs extension \( {T}_{F} \) of \( T \) if and only if for all \( u \in \mathcal{K}, u \neq 0 \), we have
\[
\mathop{\lim }\limits_{{t \rightarrow - \infty }}\langle M\left( t\right) u, u\rangle = - \infty
\]
(14.61)
Proof Assume without loss of generality that \( \mu = 0 \) . Then \( A \geq 0 \) . By Corollary 14.23, applied with \( S = A \) and \( \lambda = 0 \), it suffices to show that any nonzero vector \( u \in \mathcal{K} = \mathcal{N}\left( {T}^{ * }\right) \) is not in the form domain \( \mathcal{D}\left\lbrack A\right\rbrack \) if and only if (14.61) holds.
Let \( E \) denote the spectral measure of \( A \) . By formula (14.50), applied with \( \mu = 0 \) ,
\[
\langle M\left( t\right) u, u\rangle = {\int }_{0}^{\infty }{t\lambda }{\left( \lambda - t\right) }^{-1}d\langle E\left( \lambda \right) u, u\rangle .
\]
(14.62)
On the other hand, \( u \notin \mathcal{D}\left\lbrack A\right\rbrack = \mathcal{D}\left( {A}^{1/2}\right) \) if and only if
\[
{\int }_{0}^{\infty }{\lambda d}\langle E\left( \lambda \right) u, u\rangle = \infty
\]
(14.63)
Let \( \alpha > 0 \) . Suppose that \( t \leq - \alpha \) . For \( \lambda \in \left\lbrack {0,\alpha }\right\rbrack \), we have \( \lambda \leq - t \), so \( \lambda - t \leq - {2t} \) and \( 1 \leq - {2t}{\left( \lambda - t\right) }^{-1} \), hence, \( {2t\lambda }{\left( \lambda - t\right) }^{-1} \leq - \lambda \) . Thus,
\[
{\int }_{0}^{\alpha }{2t\lambda }{\left( \lambda - t\right) }^{-1}d\langle E\left( \lambda \right) u, u\rangle \leq - {\int }_{0}^{\alpha }{\lambda d}\langle E\left( \lambda \right) u, u\rangle \;\text{ for }t \leq - \alpha .
\]
(14.64)
If \( u \notin \mathcal{D}\left\lbrack A\right\rbrack \), then (14.63) holds, and hence by (14.64) and (14.62) we obtain (14.61).
Conversely, suppose that (14.61) is satisfied. Since \( - {t\lambda }{\left( \lambda - t\right) }^{-1} \leq \lambda \) for \( \lambda > 0 \) and \( t < 0 \), it follows from inequalities (14.62) and (14.61) that (14.63) holds. Therefore, \( u \notin \mathcal{D}\left\lbrack A\right\rbrack \) .
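The dichotomy in the Statement can be seen in a toy model (a hypothetical example, not from the text) where \( A \) has point spectrum \( \{ k : k \geq 1\} \) and \( \langle E\left( \cdot \right) u, u\rangle \) puts weight \( {c}_{k} \) at \( \lambda = k \) . Formula (14.62) then reads \( \langle M\left( t\right) u, u\rangle = \mathop{\sum }\limits_{k}{tk}{\left( k - t\right) }^{-1}{c}_{k} \), which stays bounded as \( t \rightarrow - \infty \) exactly when \( \mathop{\sum }\limits_{k}k{c}_{k} < \infty \) . A numeric sketch with \( {c}_{k} = {k}^{-3} \), so that \( u \in \mathcal{D}\left\lbrack A\right\rbrack \) and the limit is \( - \mathop{\sum }\limits_{k}{k}^{-2} = - {\pi }^{2}/6 \) :

```python
import numpy as np

# Toy spectral model: eigenvalues k = 1, 2, ... with weights c_k at k.
k = np.arange(1.0, 200_001.0)
c = k ** -3.0                    # sum(k * c) = pi^2/6 < oo, i.e. u in D[A]

def M_form(t):
    # <M(t)u, u> = sum_k t*k/(k - t) * c_k for t < 0, cf. (14.62)
    return float(np.sum(t * k / (k - t) * c))

vals = [M_form(t) for t in (-1e2, -1e4, -1e6)]
print(vals)   # decreases toward -pi^2/6; with c_k = k^{-2} instead, the same
              # sums would decrease without bound, matching criterion (14.61)
```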
## 14.8 Positive Self-adjoint Extensions
Throughout this section we suppose that \( T \) is a densely defined symmetric operator on \( \mathcal{H} \) with positive lower bound \( {m}_{T} > 0 \), that is,
\[
\langle {Tx}, x\rangle \geq {m}_{T}\parallel x{\parallel }^{2},\;x \in \mathcal{D}\left( T\right) ,\text{ where }{m}_{T} > 0.
\]
(14.65)
Our aim is to apply the preceding results (especially Theorem 14.22) to investigate the set of all positive self-adjoint extensions of \( T \) .
Since \( 0 < {m}_{T} = {m}_{{T}_{F}} \), we have \( 0 \in \rho \left( {T}_{F}\right) \) . Hence, Theorem 14.12 applies with \( \mu = 0 \) and \( A = {T}_{F} \) . By Theorem 14.12 the self-adjoint extensions of \( T \) on \( \mathcal{H} \) are precisely the operators \( {T}_{B} \) defined therein with \( B \in \mathcal{S}\left( {\mathcal{N}\left( {T}^{ * }\right) }\right) \) . Recall that
\[
\mathcal{D}\left( {T}_{B}\right) = \left\{ {x + {\left( {T}_{F}\right) }^{-1}\left( {{Bu} + v}\right) + u : x \in \mathcal{D}\left( \bar{T}\right), u \in \mathcal{D}\left( B\right), v \in \mathcal{N}\left( {T}^{ * }\right) \cap \mathcal{D}{\left( B\right) }^{ \bot }}\right\} ,
\]
\[
{T}_{B}\left( {x + {\left( {T}_{F}\right) }^{-1}\left( {{Bu} + v}\right) + u}\right) = \bar{T}x + {Bu} + v.
\]
Because of the inverse of \( {T}_{F} \), it might be difficult to describe the domain and the action of the operator \( {T}_{B} \) explicitly. By contrast, if \( {T}_{B} \) is positive, the following theorem shows that there is an elegant and explicit formula for the associated form.
Let \( \mathcal{S}{\left( \mathcal{N}\left( {T}^{ * }\right) \right) }_{ + } \) denote the set of positive operators in \( \mathcal{S}\left( {\mathcal{N}\left( {T}^{ * }\right) }\right) \) .
## Theorem 14.24
(i) For \( B \in \mathcal{S}\left( {\mathcal{N}\left( {T}^{ * }\right) }\right) \), we have \( {T}_{B} \geq 0 \) if and only if \( B \geq 0 \) .
In this case the greatest lower bounds \( {m}_{B} \) and \( {m}_{{T}_{B}} \) satisfy the inequalities
\[
{m}_{T}{m}_{B}{\left( {m}_{T} + {m}_{B}\right) }^{-1} \leq {m}_{{T}_{B}} \leq {m}_{B}.
\]
(ii) If \( B \in \mathcal{S}{\left( \mathcal{N}\left( {T}^{ * }\right) \right) }_{ + } \), then \( \mathcal{D}\left\lbrack {T}_{B}\right\rbrack = \mathcal{D}\left\lbrack {T}_{F}\right\rbrack \dot{ + }\mathcal{D}\left\lbrack B\right\rbrack \), and
\[
{T}_{B}\left\lbrack {y + u,{y}^{\prime } + {u}^{\prime }}\right\rbrack = {T}_{F}\left\lbrack {y,{y}^{\prime }}\right\rbrack + B\left\lbrack {u,{u}^{\prime }}\right\rbrack \;\text{ for }y,{y}^{\prime } \in \mathcal{D}\left\lbrack {T}_{F}\right\rbrack, u,{u}^{\prime } \in \mathcal{D}\left\lbrack B\right\rbrack .
\]
(iii) If \( {B}_{1},{B}_{2} \in \mathcal{S}{\left( \mathcal{N}\left( {T}^{ * }\right) \right) }_{ + } \), then \( {B}_{1} \geq {B}_{2} \) is equivalent to \( {T}_{{B}_{1}} \geq {T}_{{B}_{2}} \) .
Proof First, suppose that \( B \geq 0 \) . Let \( f \in \mathcal{D}\left( {T}_{B}\right) \) . By the above formulas, \( f \) is of the form \( f = y + u \) with \( y = x + {T}_{F}^{-1}\left( {{Bu} + v}\right) \), where \( x \in \mathcal{D}\left( \bar{T}\right), u \in \mathcal{D}\left( B\right) \), and \( v \in \mathcal{N}\left( {T}^{ * }\right) \cap \mathcal{D}{\left( B\right) }^{ \bot } \), and we have \( {T}_{B}f = {T}_{F}y \), since \( \bar{T} \subseteq {T}_{F} \) and \( y \in \mathcal{D}\left( {T}_{F}\right) \) . Further, \( {m}_{T} = {m}_{{T}_{F}},{T}^{ * }u = 0 \), and \( \langle v, u\rangle = 0 \) . Using these facts, we compute
\[
\left\langle {{T}_{B}f, f}\right\rangle = \left\langle {{T}_{F}y, y + u}\right\rangle = \left\langle {{T}_{F}y, y}\right\rangle + \langle \bar{T}x + {Bu} + v, u\rangle = \left\langle {{T}_{F}y, y}\right\rangle + \langle {Bu}, u\rangle
\]
\[
\geq {m}_{T}{\begin{Vmatrix}y\end{Vmatrix}}^{2} + {m}_{B}{\begin{Vmatrix}u\end{Vmatrix}}^{2} \geq {m}_{T}{m}_{B}{\left( {m}_{T} + {m}_{B}\right) }^{-1}{\left( \begin{Vmatrix}y\end{Vmatrix} + \begin{Vmatrix}u\end{Vmatrix}\right) }^{2}
\]
\[
\geq {m}_{T}{m}_{B}{\left( {m}_{T} + {m}_{B}\right) }^{-1}\parallel y + u{\parallel }^{2} = {m}_{T}{m}_{B}{\left( {m}_{T} + {m}_{B}\right) }^{-1}\parallel f{\parallel }^{2},
\]
(14.66)
where the second inequality follows from the elementary inequality
\[
\alpha {a}^{2} + \beta {b}^{2} \geq {\alpha \beta }{\left( \alpha + \beta \right) }^{-1}{\left( a + b\right) }^{2}\;\text{ for }\alpha > 0,\beta \geq 0, a \geq 0, b \geq 0.
\]
Clearly, (14.66) implies that \( {T}_{B} \geq 0 \) and \( {m}_{{T}_{B}} \geq {m}_{T}{m}_{B}{\left( {m}_{T} + {m}_{B}\right) }^{-1} \) .
The other assertions of (i) and (ii) follow from Proposition 14.21 and Theorem 14.22 applied to the boundary triplet from our second standard Example 14.6, with \( A = {T}_{0} = {T}_{F},\mu = 0 \), by using that \( \gamma \left( 0\right) = {I}_{\mathcal{H}} \upharpoonright \mathcal{K} \) and \( M\left( 0\right) = 0 \) (see Example 14.12). Since \( {m}_{T} > 0 \) by assumption (14.65), we can set \( \lambda = 0 \) in both results. Recall that by (14.6) the form of a self-adjoint relation \( \mathcal{B} \) is the form of its operator part \( B \) . Since \( {T}_{B}\left\lbrack u\right\rbrack = B\left\lbrack u\right\rbrack \) for \( u \in \mathcal{D}\left\lbrack B\right\rbrack \) by (14.57), it is obvious that \( {m}_{B} \geq {m}_{{T}_{B}} \) .
(iii) is an immediate consequence of (ii).
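The elementary inequality used in (14.66) is an instance of the Cauchy–Schwarz inequality: \( {\left( a + b\right) }^{2} \leq \left( {{\alpha }^{-1} + {\beta }^{-1}}\right) \left( {\alpha {a}^{2} + \beta {b}^{2}}\right) \) . A quick randomized sanity check (the sampling ranges are arbitrary):

```python
import random

def slack(alpha, beta, a, b):
    # alpha*a^2 + beta*b^2 - alpha*beta/(alpha + beta) * (a + b)^2, should be >= 0
    return alpha * a * a + beta * b * b - alpha * beta / (alpha + beta) * (a + b) ** 2

random.seed(0)
worst = min(
    slack(random.uniform(1e-3, 10), random.uniform(0, 10),
          random.uniform(0, 10), random.uniform(0, 10))
    for _ in range(100_000)
)
print(worst)  # nonnegative (up to rounding) on all samples
```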
Exercise 1.9 (a) Let \( A \subseteq {GL}\left( {n,\mathbb{R}}\right) \) be the subgroup of diagonal matrices with positive elements on the diagonal and let \( N \subseteq {GL}\left( {n,\mathbb{R}}\right) \) be the subgroup of upper triangular matrices with 1's on the diagonal. Using Gram-Schmidt orthogonalization, show that multiplication induces a diffeomorphism of \( O\left( n\right) \times A \times N \) onto \( {GL}\left( {n,\mathbb{R}}\right) \) . This is called the Iwasawa or \( {KAN} \) decomposition for \( {GL}\left( {n,\mathbb{R}}\right) \) . As topological spaces, show that \( {GL}\left( {n,\mathbb{R}}\right) \cong O\left( n\right) \times {\mathbb{R}}^{\frac{n\left( {n + 1}\right) }{2}} \) . Similarly, as topological spaces, show that \( {SL}\left( {n,\mathbb{R}}\right) \cong {SO}\left( n\right) \times {\mathbb{R}}^{\frac{\left( {n + 2}\right) \left( {n - 1}\right) }{2}}. \)
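The \( {KAN} \) factorization of part (a) is exactly what Gram-Schmidt (equivalently, a QR decomposition) produces: \( g = {QR} \) with \( Q \in O\left( n\right) \), after which the upper triangular \( R \) splits into its positive diagonal part \( A \) and a unit upper triangular part \( N \) . A sketch using numpy, with signs normalized so that the diagonal of \( R \) is positive:

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=(4, 4))            # almost surely invertible

Q, R = np.linalg.qr(g)                 # g = Q R, R upper triangular
signs = np.sign(np.diag(R))
Q, R = Q * signs, (R.T * signs).T      # make diag(R) > 0, preserving Q R = g

A = np.diag(np.diag(R))                # positive diagonal factor
N = np.linalg.inv(A) @ R               # unit upper triangular factor

assert np.allclose(Q @ A @ N, g)       # g = K A N
assert np.allclose(Q @ Q.T, np.eye(4)) # K is orthogonal
```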
(b) Let \( A \subseteq {GL}\left( {n,\mathbb{C}}\right) \) be the subgroup of diagonal matrices with positive real elements on the diagonal and let \( N \subseteq {GL}\left( {n,\mathbb{C}}\right) \) be the subgroup of upper triangular matrices with 1's on the diagonal. Show that multiplication induces a diffeomorphism of \( U\left( n\right) \times A \times N \) onto \( {GL}\left( {n,\mathbb{C}}\right) \) . As topological spaces, show \( {GL}\left( {n,\mathbb{C}}\right) \cong \) \( U\left( n\right) \times {\mathbb{R}}^{{n}^{2}} \) . Similarly, as topological spaces, show that \( {SL}\left( {n,\mathbb{C}}\right) \cong {SU}\left( n\right) \times {\mathbb{R}}^{{n}^{2} - 1} \) .
Exercise 1.10 Let \( N \subseteq {GL}\left( {n,\mathbb{C}}\right) \) be the subgroup of upper triangular matrices with 1’s on the diagonal, let \( \bar{N} \subseteq {GL}\left( {n,\mathbb{C}}\right) \) be the subgroup of lower triangular matrices with 1’s on the diagonal, and let \( W \) be the subgroup of permutation matrices (i.e., matrices with a single one in each row and each column and zeros elsewhere). Use Gaussian elimination to show \( {GL}\left( {n,\mathbb{C}}\right) = { \coprod }_{w \in W}\bar{N}{wN} \) . This is called the Bruhat decomposition for \( {GL}\left( {n,\mathbb{C}}\right) \) .
Exercise 1.11 (a) Let \( P \subseteq {GL}\left( {n,\mathbb{R}}\right) \) be the set of positive definite symmetric matrices. Show that multiplication gives a bijection from \( P \times O\left( n\right) \) to \( {GL}\left( {n,\mathbb{R}}\right) \) .
(b) Let \( H \subseteq {GL}\left( {n,\mathbb{C}}\right) \) be the set of positive definite Hermitian matrices. Show that multiplication gives a bijection from \( H \times U\left( n\right) \) to \( {GL}\left( {n,\mathbb{C}}\right) \) .
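Part (a) is the polar decomposition. A NumPy sketch of the real case, where the positive definite square root \( P = {\left( g{g}^{T}\right) }^{1/2} \) is computed from the eigendecomposition of the symmetric matrix \( g{g}^{T} \) (an illustration for a random matrix, not a proof of bijectivity):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
g = rng.normal(size=(n, n))          # invertible with probability 1

# P = (g g^T)^{1/2} is the positive definite symmetric factor
w, V = np.linalg.eigh(g @ g.T)       # g g^T is symmetric positive definite
P = V @ np.diag(np.sqrt(w)) @ V.T
K = np.linalg.inv(P) @ g             # then K = P^{-1} g

assert np.all(w > 0)                          # g g^T is positive definite
assert np.allclose(P, P.T)                    # P is symmetric
assert np.allclose(K @ K.T, np.eye(n))        # K lies in O(n)
assert np.allclose(g, P @ K)                  # the polar decomposition g = PK
```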
Exercise 1.12 (a) Show that \( \widetilde{\vartheta } \) is given by the formula in Equation 1.13.
(b) Show \( \vartheta {r}_{j}{\vartheta }^{-1}z = J\bar{z} \) for \( z \in {\mathbb{C}}^{2n} \) .
(c) Show that \( \widetilde{\vartheta }\left( {X}^{ * }\right) = {\left( \widetilde{\vartheta }X\right) }^{ * } \) for \( X \in {M}_{n, n}\left( \mathbb{H}\right) \) .
Exercise 1.13 For \( v, u \in {\mathbb{H}}^{n} \), let \( \left( {v, u}\right) = \mathop{\sum }\limits_{{p = 1}}^{n}{v}_{p}\overline{{u}_{p}} \) .
(a) Show that \( \left( {{Xv}, u}\right) = \left( {v,{X}^{ * }u}\right) \) for \( X \in {M}_{n, n}\left( \mathbb{H}\right) \) .
(b) Show that \( {Sp}\left( n\right) = \left\{ {g \in {M}_{n}\left( \mathbb{H}\right) \mid \left( {{gv},{gu}}\right) = \left( {v, u}\right) \text{, all}v, u \in {\mathbb{H}}^{n}}\right\} \) .
## 1.2 Basic Topology
## 1.2.1 Connectedness
Recall that a topological space is connected if it is not the disjoint union of two nonempty open sets. A space is path connected if any two points can be joined by a continuous path. While in general these two notions are distinct, they are equivalent for manifolds. In fact, it is even possible to replace continuous paths with smooth paths.
The first theorem is a technical tool that will be used often.
Theorem 1.15. Let \( G \) be a connected Lie group and \( U \) a neighborhood of \( e \) . Then \( U \) generates \( G \), i.e., \( G = { \cup }_{n = 1}^{\infty }{U}^{n} \) where \( {U}^{n} \) consists of all \( n \) -fold products of elements of \( U \) .
Proof. We may assume \( U \) is open without loss of generality. Let \( V = U \cap {U}^{-1} \subseteq U \) where \( {U}^{-1} \) is the set of all inverses of elements in \( U \) . This is an open set since the inverse map is continuous. Let \( H = { \cup }_{n = 1}^{\infty }{V}^{n} \) . By construction, \( H \) is an open subgroup containing \( e \) . For \( g \in G \), write \( {gH} = \{ {gh} \mid h \in H\} \) . The set \( {gH} \) contains \( g \) and is open since left multiplication by \( {g}^{-1} \) is continuous. Thus \( G \) is the union of all the open sets \( {gH} \) . If we pick a representative \( {g}_{\alpha }H \) for each coset in \( G/H \), then \( G = { \coprod }_{\alpha }\left( {{g}_{\alpha }H}\right) \) . Hence the connectedness of \( G \) implies that \( G/H \) contains exactly one coset, i.e., \( {eH} = G \), which is sufficient to finish the proof.
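The theorem can be seen concretely in \( {SO}\left( 2\right) \), which is connected: every rotation is a finite product of rotations taken from an arbitrarily small neighborhood of the identity. A minimal NumPy sketch (the threshold \( {0.1} \) and the target angle are arbitrary choices of ours):

```python
import numpy as np

def rot(t):
    """A 2x2 rotation matrix, i.e., an element of SO(2)."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# U = { rot(t) : |t| < 0.1 }, a small neighborhood of the identity
target = rot(2.5)              # an element far from the identity
n = 26                         # chosen so that 2.5 / n < 0.1
step = rot(2.5 / n)            # an element of U

# target is the n-fold product of step, so target lies in U^n
assert np.allclose(np.linalg.matrix_power(step, n), target)
```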
We still lack general methods for determining when a Lie group \( G \) is connected. This shortcoming is remedied next.
Definition 1.16. If \( G \) is a Lie group, write \( {G}^{0} \) for the connected component of \( G \) containing \( e \) .
Lemma 1.17. Let \( G \) be a Lie group. The connected component \( {G}^{0} \) is a regular Lie subgroup of \( G \) . If \( {G}^{1} \) is any connected component of \( G \) with \( {g}_{1} \in {G}^{1} \), then \( {G}^{1} = \) \( {g}_{1}{G}^{0} \) .
Proof. We prove the second statement of the lemma first. Since left multiplication by \( {g}_{1} \) is a homeomorphism, it follows easily that \( {g}_{1}{G}^{0} \) is a connected component of \( G \) . But since \( e \in {G}^{0} \), this means that \( {g}_{1} \in {g}_{1}{G}^{0} \) so \( {g}_{1}{G}^{0} \cap {G}^{1} \neq \varnothing \) . Since both are connected components, \( {G}^{1} = {g}_{1}{G}^{0} \) and the second statement is finished.
Returning to the first statement of the lemma, it clearly suffices to show that \( {G}^{0} \) is a subgroup. The inverse map is a homeomorphism, so \( {\left( {G}^{0}\right) }^{-1} \) is a connected component of \( G \) . As above, \( {\left( {G}^{0}\right) }^{-1} = {G}^{0} \) since both components contain \( e \) . Finally, if \( {g}_{1} \in {G}^{0} \), then the components \( {g}_{1}{G}^{0} \) and \( {G}^{0} \) both contain \( {g}_{1} \) since \( e,{g}_{1}^{-1} \in {G}^{0} \) . Thus \( {g}_{1}{G}^{0} = {G}^{0} \), and so \( {G}^{0} \) is a subgroup, as desired.
Theorem 1.18. If \( G \) is a Lie group and \( H \) a connected Lie subgroup so that \( G/H \) is also connected, then \( G \) is connected.
Proof. Since \( H \) is connected and contains \( e, H \subseteq {G}^{0} \), so there is a continuous map \( \pi : G/H \rightarrow G/{G}^{0} \) defined by \( \pi \left( {gH}\right) = g{G}^{0} \) . It is trivial that \( G/{G}^{0} \) has the discrete topology with respect to the quotient topology. The assumption that \( G/H \) is connected forces \( \pi \left( {G/H}\right) \) to be connected, and so \( \pi \left( {G/H}\right) = e{G}^{0} \) . However, \( \pi \) is a surjective map so \( G/{G}^{0} = e{G}^{0} \), which means \( G = {G}^{0} \) .
Definition 1.19. Let \( G \) be a Lie group and \( M \) a manifold.
(1) An action of \( G \) on \( M \) is a smooth map from \( G \times M \rightarrow M \), denoted by \( \left( {g, m}\right) \rightarrow \) \( g \cdot m \) for \( g \in G \) and \( m \in M \), so that:
(i) \( e \cdot m = m \), all \( m \in M \) and
(ii) \( {g}_{1} \cdot \left( {{g}_{2} \cdot m}\right) = \left( {{g}_{1}{g}_{2}}\right) \cdot m \) for all \( {g}_{1},{g}_{2} \in G \) and \( m \in M \) .
(2) The action is called transitive if for each \( m, n \in M \), there is a \( g \in G \), so \( g \cdot m = n \) .
(3) The stabilizer of \( m \in M \) is \( {G}^{m} = \{ g \in G \mid g \cdot m = m\} \) .
If \( G \) has a transitive action on \( M \) and \( {m}_{0} \in M \), then it is clear (Theorem 1.7) that the action of \( G \) on \( {m}_{0} \) induces a diffeomorphism from \( G/{G}^{{m}_{0}} \) onto \( M \) .
Theorem 1.20. The compact classical groups, \( {SO}\left( n\right) ,{SU}\left( n\right) \), and \( {Sp}\left( n\right) \), are connected.
Proof. Start with \( {SO}\left( n\right) \) and proceed by induction on \( n \) . As \( {SO}\left( 1\right) = \{ 1\} \), the case \( n = 1 \) is trivial. Next, observe that \( {SO}\left( n\right) \) has a transitive action on \( {S}^{n - 1} \) in \( {\mathbb{R}}^{n} \) by matrix multiplication. For \( n \geq 2 \), the stabilizer of the north pole, \( N = \left( {1,0,\ldots ,0}\right) \) , is easily seen to be isomorphic to \( {SO}\left( {n - 1}\right) \) which is connected by the induction hypothesis. From the transitive action, it follows that \( {SO}\left( n\right) /{SO}{\left( n\right) }^{N} \cong {S}^{n - 1} \) which is also connected. Thus Theorem 1.18 finishes the proof.
For \( {SU}\left( n\right) \), repeat the above argument with \( {\mathbb{R}}^{n} \) replaced by \( {\mathbb{C}}^{n} \) and start the induction with the fact that \( {SU}\left( 1\right) = \{ 1\} \) . For \( {Sp}\left( n\right) \), repeat the same argument with \( {\mathbb{R}}^{n} \) replaced by \( {\mathbb{H}}^{n} \) and start the induction with \( {Sp}\left( 1\right) \cong \left\{ {v \in \mathbb{H} : \left| v\right| = 1}\right\} \cong {S}^{3} \) .
## 1.2.2 Simply Connected Cover
For a connected Lie group \( G \), recall that the fundamental group, \( {\pi }_{1}\left( G\right) \), is the group of homotopy classes of loops at a fixed base point. The Lie group \( G \) is called simply connected if \( {\pi }_{1}\left( G\right) \) is trivial.
Standard covering theory from topology and differential geometry (see [69] and [8] or [88] for more detail) says that there exists a unique (up to isomorphism) simply connected cover \( \widetilde{G} \) of \( G \), i.e., a connected, s
Exercise 2.5.3 Find all integer solutions to the equation \( {x}^{2} + {11} = {y}^{3} \) .
Solution. In the ring \( \mathbb{Z}\left\lbrack {\left( {1 + \sqrt{-{11}}}\right) /2}\right\rbrack \), we can factor the equation as
\[
\left( {x - \sqrt{-{11}}}\right) \left( {x + \sqrt{-{11}}}\right) = {y}^{3}.
\]
Now, suppose that \( \delta \mid \left( {x - \sqrt{-{11}}}\right) \) and \( \delta \mid \left( {x + \sqrt{-{11}}}\right) \) (which implies that \( \delta \mid y \) ). Then \( \delta \mid {2x} \) and \( \delta \mid 2\sqrt{-{11}} \), which means that \( \delta \mid 2 \) : otherwise \( \delta \mid \sqrt{-{11}} \), meaning that \( {11} \mid x \) and \( {11} \mid y \), which we can see is not true by considering congruences \( \operatorname{mod}{11}^{2} \) . Then \( \delta = 1 \) or 2, since 2 is irreducible in this ring. We will consider these cases separately.
Case 1. \( \delta = 1 \) .
Then the two factors of \( {y}^{3} \) are coprime and we can write
\[
\left( {x + \sqrt{-{11}}}\right) = \varepsilon {\left( \frac{a + b\sqrt{-{11}}}{2}\right) }^{3},
\]
where \( a, b \in \mathbb{Z} \) and \( a \equiv b\left( {\;\operatorname{mod}\;2}\right) \) . Since the units of \( \mathbb{Z}\left\lbrack {\left( {1 + \sqrt{-{11}}}\right) /2}\right\rbrack \) are \( \pm 1 \), which are cubes, we can bring the unit inside the brackets and rewrite the above without \( \varepsilon \) . We have
\[
8\left( {x + \sqrt{-{11}}}\right) = {\left( a + b\sqrt{-{11}}\right) }^{3} = {a}^{3} - {33a}{b}^{2} + \left( {3{a}^{2}b - {11}{b}^{3}}\right) \sqrt{-{11}}
\]
and so, comparing real and imaginary parts, we get
\[
{8x} = {a}^{3} - {33a}{b}^{2} = a\left( {{a}^{2} - {33}{b}^{2}}\right) ,
\]
\[
8 = 3{a}^{2}b - {11}{b}^{3} = b\left( {3{a}^{2} - {11}{b}^{2}}\right) .
\]
This implies that \( b \mid 8 \) and so we have 8 possibilities: \( b = \pm 1, \pm 2, \pm 4, \pm 8 \) . Substituting these back into the equations to find \( a, x \), and \( y \), and remembering that \( a \equiv b\left( {\;\operatorname{mod}\;2}\right) \) and that \( a, x, y \in \mathbb{Z} \) will give all solutions to the equation.
Case 2. \( \delta = 2 \) .
If \( \delta = 2 \), then \( y \) is even and \( x \) is odd. We can write \( y = 2{y}_{1} \), which gives the equation
\[
\left( \frac{x + \sqrt{-{11}}}{2}\right) \left( \frac{x - \sqrt{-{11}}}{2}\right) = 2{y}_{1}^{3}.
\]
Since 2 divides the right-hand side of this equation, it must divide the left-hand side, so
\[
2\left| {\;\left( \frac{x + \sqrt{-{11}}}{2}\right) }\right.
\]
or
\[
2\left| {\;\left( \frac{x - \sqrt{-{11}}}{2}\right) .}\right.
\]
However, since \( x \) is odd,2 divides neither of the factors above. We conclude that \( \delta \neq 2 \), and thus we found all the solutions to the equation in our discussion of Case 1.
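As a sanity check, a brute-force search over a modest range (the bound 1000 is an arbitrary cutoff of ours, so this confirms rather than proves the list) recovers the solutions \( \left( {x, y}\right) = \left( { \pm 4,3}\right) \) and \( \left( { \pm {58},{15}}\right) \) that come out of Case 1:

```python
from math import isqrt

# for each candidate y, test whether y^3 - 11 is a perfect square x^2
solutions = []
for y in range(1, 1000):
    t = y ** 3 - 11
    if t >= 0:
        x = isqrt(t)
        if x * x == t:
            solutions += [(x, y)] if x == 0 else [(-x, y), (x, y)]

assert (4, 3) in solutions and (58, 15) in solutions
assert all(x * x + 11 == y ** 3 for x, y in solutions)
```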
Exercise 2.5.4 Prove that \( \mathbb{Z}\left\lbrack \sqrt{3}\right\rbrack \) is Euclidean.
Solution. Given \( \alpha ,\beta \in \mathbb{Z}\left\lbrack \sqrt{3}\right\rbrack \) with \( \beta \neq 0 \), we want to find \( \gamma ,\delta \in \mathbb{Z}\left\lbrack \sqrt{3}\right\rbrack \) such that \( \alpha = {\beta \gamma } + \delta \) and \( N\left( \delta \right) < N\left( \beta \right) \) . Since the norm is multiplicative, \( N\left( \delta \right) = N\left( \beta \right) N\left( {\alpha /\beta - \gamma }\right) \), so it suffices to find \( \gamma \) with \( N\left( {\alpha /\beta - \gamma }\right) < 1 \) . Let \( \alpha /\beta = x + y\sqrt{3} \) with \( x, y \in \mathbb{Q} \), and let \( \gamma = u + v\sqrt{3} \) with \( u, v \in \mathbb{Z} \) .
Now, \( N\left( {\alpha /\beta - \gamma }\right) = \left| {{\left( x - u\right) }^{2} - 3{\left( y - v\right) }^{2}}\right| \) . Choose for \( u \) and \( v \) the closest integers to \( x \) and \( y \), respectively, so that \( \left| {x - u}\right| \leq 1/2 \) and \( \left| {y - v}\right| \leq 1/2 \) . Then
\[
\left| {{\left( x - u\right) }^{2} - 3{\left( y - v\right) }^{2}}\right| \leq \max \left( {{\left( x - u\right) }^{2},3{\left( y - v\right) }^{2}}\right) \leq \frac{3}{4} < 1.
\]
The conclusion follows.
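The nearest-integer recipe translates directly into a division algorithm. A minimal Python sketch (the helper names are ours; exact rational arithmetic via `fractions` avoids floating-point rounding), spot-checked for \( \beta = 3 + \sqrt{3} \) of norm 6:

```python
from fractions import Fraction

def norm(a, b):
    """|N(a + b*sqrt(3))| = |a^2 - 3 b^2|."""
    return abs(a * a - 3 * b * b)

def divide(a1, b1, a2, b2):
    """Euclidean division in Z[sqrt(3)]: returns gamma, delta with
    alpha = beta*gamma + delta, where alpha = a1 + b1*sqrt(3), beta = a2 + b2*sqrt(3)."""
    nb = a2 * a2 - 3 * b2 * b2                 # signed norm of beta, nonzero for beta != 0
    x = Fraction(a1 * a2 - 3 * b1 * b2, nb)    # alpha/beta = x + y*sqrt(3)
    y = Fraction(a2 * b1 - a1 * b2, nb)
    u, v = round(x), round(y)                  # nearest integers: |x-u|, |y-v| <= 1/2
    da = a1 - (a2 * u + 3 * b2 * v)            # delta = alpha - beta*gamma
    db = b1 - (a2 * v + b2 * u)
    return (u, v), (da, db)

# spot-check the Euclidean property on a grid of dividends, for beta = 3 + sqrt(3)
for a1 in range(-6, 7):
    for b1 in range(-6, 7):
        (u, v), (da, db) = divide(a1, b1, 3, 1)
        assert (a1, b1) == (3 * u + 3 * v + da, 3 * v + u + db)  # alpha = beta*gamma + delta
        assert norm(da, db) < norm(3, 1)                         # N(delta) < N(beta) = 6
```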
Exercise 2.5.5 Prove that \( \mathbb{Z}\left\lbrack \sqrt{6}\right\rbrack \) is Euclidean.
Solution. Assume that \( \mathbb{Z}\left\lbrack \sqrt{6}\right\rbrack \) is not Euclidean. This means that there is at least one \( x + y\sqrt{6} \in \mathbb{Q}\left( \sqrt{6}\right) \) such that there is no \( \gamma = u + v\sqrt{6} \in \mathbb{Z}\left\lbrack \sqrt{6}\right\rbrack \) with \( \left| {{\left( x - u\right) }^{2} - 6{\left( y - v\right) }^{2}}\right| < 1 \) . Without loss of generality, we can suppose that \( 0 \leq x \leq 1/2 \) and \( 0 \leq y \leq 1/2 \) . For such a pair \( \left( {x, y}\right) \), every \( u, v \in \mathbb{Z} \) must satisfy
\[
{\left( x - u\right) }^{2} \geq 1 + 6{\left( y - v\right) }^{2}
\]
or
\[
6{\left( y - v\right) }^{2} \geq 1 + {\left( x - u\right) }^{2}.
\]
In particular, we will use the following inequalities:
\[
\text{either (a)}\;{x}^{2} \geq 1 + 6{y}^{2}\;\text{or (b)}\;6{y}^{2} \geq 1 + {x}^{2},
\]
(2.1)
\[
\text{either (a)}\;{\left( 1 - x\right) }^{2} \geq 1 + 6{y}^{2}\;\text{or (b)}\;6{y}^{2} \geq 1 + {\left( 1 - x\right) }^{2},
\]
(2.2)
\[
\text{either (a)}\;{\left( 1 + x\right) }^{2} \geq 1 + 6{y}^{2}\;\text{or (b)}\;6{y}^{2} \geq 1 + {\left( 1 + x\right) }^{2}.
\]
(2.3)
If \( x = y = 0 \), then both first inequalities fail, so we can rule out this case. Next, we look at the first two inequalities on the left. Since \( {x}^{2},{\left( 1 - x\right) }^{2} \leq 1 \) and \( 1 + 6{y}^{2} \geq 1 \) and \( x, y \) are not both 0, these two inequalities fail so (2.1 (b)) and (2.2 (b)) must be true. Now consider (2.3 (a)). If \( {\left( 1 + x\right) }^{2} \geq 1 + 6{y}^{2} \) and \( 6{y}^{2} \geq 1 + {\left( 1 - x\right) }^{2} \) as we just showed, then
\[
{\left( 1 + x\right) }^{2} \geq 1 + 6{y}^{2} \geq 2 + {\left( 1 - x\right) }^{2}
\]
which implies that \( {4x} \geq 2 \) and since \( x \leq 1/2 \), we conclude that \( x = 1/2 \) . Substituting this into the previous inequalities, we get that
\[
\frac{9}{4} \geq 1 + 6{y}^{2} \geq \frac{9}{4},
\]
so \( 6{y}^{2} = \frac{5}{4} \) . Let \( y = r/s \) with \( \gcd \left( {r, s}\right) = 1 \) . We now have that \( {24}{r}^{2} = 5{s}^{2} \) . Since \( \gcd \left( {r, s}\right) = 1 \), we get \( {r}^{2} \mid 5 \), so \( r = 1 \) . But then \( {24} = 5{s}^{2} \), a contradiction. Therefore, (2.3 (b)) is true, which implies that
\[
6{y}^{2} \geq 1 + {\left( 1 + x\right) }^{2} \geq 2
\]
However, since \( y \leq 1/2 \), \( 6{y}^{2} \geq 2 \) implies that \( 6 \geq 8 \), a contradiction. Hence neither (2.3 (a)) nor (2.3 (b)) is true, contradicting our assumption, so \( \mathbb{Z}\left\lbrack \sqrt{6}\right\rbrack \) must be Euclidean.
Exercise 2.5.6 Show that \( \mathbb{Z}\left\lbrack {\left( {1 + \sqrt{-{19}}}\right) /2}\right\rbrack \) is not Euclidean for the norm map.
Solution. If a ring \( R \) is Euclidean, then given any \( \alpha ,\beta \in R \) we can find \( \delta ,\gamma \) such that \( \alpha = {\beta \gamma } + \delta \) with \( \delta = 0 \) or \( N\left( \delta \right) < N\left( \beta \right) \) . Another way of describing this condition is to say that given any \( \beta \in R \), we can find a representative for each nonzero residue class of \( R/\left( \beta \right) \) such that the representative has norm less than the norm of \( \beta \) . We will try to find an element of \( R = \mathbb{Z}\left\lbrack {\left( {1 + \sqrt{-{19}}}\right) /2}\right\rbrack \) for which this is not true.
Consider \( \beta = 2 \) ; then \( N\left( 2\right) = 4 \) . We want to find all other elements of \( R \) with norm strictly less than 4.
\[
N\left( \frac{a + b\sqrt{-{19}}}{2}\right) = \frac{{a}^{2} + {19}{b}^{2}}{4} < 4
\]
\[
\Rightarrow \;{a}^{2} + {19}{b}^{2} < {16}\text{.}
\]
First note that if \( b \neq 0 \), there are no solutions to this inequality. For \( b = 0 \) , we can have \( a = 0, \pm 2 \), since \( a \equiv b\left( {\;\operatorname{mod}\;2}\right) \) . Thus, there are just three elements with norm less than 4. However, there are more than three residue classes of \( R/\left( 2\right) \) (check this!). Therefore, the ring \( R = \mathbb{Z}\left\lbrack {\left( {1 + \sqrt{-{19}}}\right) /2}\right\rbrack \) is non-Euclidean with respect to the norm map.
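The two counting claims, only three elements of norm less than 4 and more than three residue classes in \( R/\left( 2\right) \), can be verified by a quick enumeration. Writing elements as \( m + n\omega \) with \( \omega = \left( {1 + \sqrt{-{19}}}\right) /2 \), the class in \( R/\left( 2\right) \) is determined by \( \left( {m\bmod 2, n\bmod 2}\right) \), giving four classes in all:

```python
# elements (a + b*sqrt(-19))/2 of R (so a = b mod 2) with norm (a^2 + 19 b^2)/4 < 4
small = [(a, b) for a in range(-5, 6) for b in range(-5, 6)
         if (a - b) % 2 == 0 and a * a + 19 * b * b < 16]
assert sorted(small) == [(-2, 0), (0, 0), (2, 0)]      # i.e., the elements -1, 0, 1

# in the basis (1, omega): (a + b*sqrt(-19))/2 = (a - b)//2 + b*omega;
# reducing the coefficients mod 2 gives the class in R/(2), which has 4 classes
residues = {(((a - b) // 2) % 2, b % 2) for a, b in small}
assert residues == {(0, 0), (1, 0)}   # the classes of omega and 1 + omega are never hit
```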
Exercise 2.5.7 Prove that \( \mathbb{Z}\left\lbrack \sqrt{-{10}}\right\rbrack \) is not a unique factorization domain.
Solution. Consider the elements \( 2 + \sqrt{-{10}},2 - \sqrt{-{10}},2,7 \) . Show that they are all irreducible and are not associates. Then note that
\[
\left( {2 + \sqrt{-{10}}}\right) \left( {2 - \sqrt{-{10}}}\right) = {14}
\]
\[
2 \cdot 7 = {14}\text{.}
\]
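The irreducibility claims reduce to a norm computation: \( N\left( {a + b\sqrt{-{10}}}\right) = {a}^{2} + {10}{b}^{2} \) is multiplicative, so a nontrivial factor of an element of norm 4, 14, or 49 would have norm 2 or 7, and units have norm 1. A quick Python check that no element has norm 2 or 7:

```python
def norm(a, b):
    """N(a + b*sqrt(-10)) = a^2 + 10 b^2."""
    return a * a + 10 * b * b

# no element of Z[sqrt(-10)] has norm 2 or 7 (|a|, |b| <= 10 is more than enough)
assert [(a, b) for a in range(-10, 11) for b in range(-10, 11)
        if norm(a, b) in (2, 7)] == []

# yet 14 factors in two genuinely different ways
assert 2 * 7 == 14
assert norm(2, 1) == norm(2, -1) == 14   # (2 + sqrt(-10))(2 - sqrt(-10)) = 4 + 10 = 14
```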
Exercise 2.5.8 Show that there are only finitely many rings \( \mathbb{Z}\left\lbrack \sqrt{d}\right\rbrack \) with \( d \equiv 2 \) or 3 (mod 4) which are norm Euclidean.
Solution. If \( \mathbb{Z}\left\lbrack \sqrt{d}\right\rbrack \) is Euclidean for the norm map, then for any \( \delta \in \mathbb{Q}\left( \sqrt{d}\right) \) , we can find \( \alpha \in \mathbb{Z}\left\lbrack \sqrt{d}\right\rbrack \) such that
\[
\left| {N\left( {\delta - \alpha }\right) }\right| < 1
\]
Write \( \delta = r + s\sqrt{d},\alpha = a + b\sqrt{d}, a, b \in \mathbb{Z}, r, s \in \mathbb{Q} \) . Then
\[
\left| {{\left( r - a\right) }^{2} - d{\left( s - b\right) }^{2}}\right| < 1.
\]
In particular, take \( r = 0, s = t/d \) where \( t \) is an integer to be chosen later.
Then
\[
\left| {{a}^{2} - d{\left( b - \frac{t}{d}\right) }^{2}}\right| < 1
\]
so that \( \left| {{\left( bd - t\right) }^{2} - d{a}^{2}}\right| < d \) . Since \( {\left( bd - t\right) }^{2} - d{a}^{2} \equiv {t}^{2}\left( {\;\operatorname{mod}\;d}\right) \), there are integers \( x \) and \( z \) such that
\[
{z}^{2} - d{x}^{2} \equiv {t}^{2}\;\left( {\;\operatorname{mod}\;d}\right)
\]
with \( \left| {{z}^{2} - d{x}^{2}}\right| < d \) .
In case \( d \equiv 3\left( {\;\operatorname{mod}\;4}\right) \), we choose an odd integer \( t \) such that
\[
{5d} < {t}^{2} < {6d}
\]
which we can do if \( d \) is sufficiently large. Then \( {z}^{2} - d{x}^{2} = {t}^{2} - {5d} \) or \( {t}^{2} - {6d} \) . Then one of the equations
Lemma 5.3.13 The set \( D = \left\{ {\alpha < \kappa : {\bar{d}}_{\beta } \in {B}_{\alpha }}\right. \) for all \( \left. {\beta < \alpha }\right\} \) is closed unbounded.
Proof It is easy to see that \( D \) is closed. Let \( {\alpha }_{0} < \kappa \) . Build a sequence \( {\alpha }_{0} < {\alpha }_{1} < \ldots \) such that for all \( \beta < {\alpha }_{n},{\bar{d}}_{\beta } \in {B}_{{\alpha }_{n + 1}} \) . If \( \alpha = \sup {\alpha }_{i} \), then \( \alpha \in D \) .
Next, we make several applications of Fodor's Lemma.
Lemma 5.3.14 i) There is a stationary \( {S}^{\prime } \subseteq S \cap C \cap D \) and a Skolem term \( t \) such that \( {t}_{\alpha } = t \) for all \( \alpha \in {S}^{\prime } \) .
ii) There is \( \bar{c} \) a sequence from \( J \) and a stationary \( {S}^{\prime \prime } \subseteq {S}^{\prime } \) such that \( {\bar{c}}_{\alpha } = \bar{c} \) for \( \alpha \in {S}^{\prime \prime } \) .
Proof
i) Because \( C \) and \( D \) are closed unbounded and \( S \) is stationary, \( S \cap C \cap D \) is stationary. Because there are only countably many terms, this follows from Lemma 5.3.9.
ii) Suppose that \( {\bar{c}}_{\alpha } = \left( {{c}_{\alpha ,1},\ldots ,{c}_{\alpha, m}}\right) \) . Each
\[
{c}_{\alpha, i} \in {J}_{ < \alpha } \subseteq {B}_{\alpha } = \alpha .
\]
Thus, the function \( \alpha \mapsto {c}_{\alpha, i} \) is regressive on \( {S}^{\prime } \) . By repeated applications of Fodor’s Lemma, we find \( {S}^{\prime } \supseteq {S}_{1}^{\prime } \supseteq \ldots \supseteq {S}_{m}^{\prime } \) and \( {c}_{i} < \kappa \) such that \( {c}_{\alpha, i} = {c}_{i} \) for \( \alpha \in {S}_{i}^{\prime } \) . Let \( {S}^{\prime \prime } = {S}_{m}^{\prime } \) and \( \bar{c} = \left( {{c}_{1},\ldots ,{c}_{m}}\right) \) .
Because there are only finitely many possible permutations of each sequence \( {\bar{d}}_{\alpha } \), by one further application of Corollary 5.3.9 and permuting the variables, we may assume that each \( {\bar{d}}_{\alpha } = \left( {{d}_{\alpha ,1},{d}_{\alpha ,2},\ldots ,{d}_{\alpha, n}}\right) \) where \( {d}_{\alpha ,1} < \ldots < {d}_{\alpha, n} \) . By replacing \( S \) with \( {S}^{\prime \prime } \), we may, without loss of generality, assume that \( S \subseteq C \cap D \) and there is a Skolem term \( t \) and \( \bar{c} \in J \) such that \( {a}_{\alpha } = {t}^{\mathcal{B}}\left( {\bar{c},{\bar{d}}_{\alpha }}\right) \) for all \( \alpha \in S \) .
Although \( S \) is not closed, it must contain a stationary set of limit points.
Lemma 5.3.15 The set \( {S}^{\prime } = \{ \alpha \in S : \alpha = \sup \left( {\alpha \cap S}\right) \} \) is stationary.
Proof The set \( X = \{ \alpha < \kappa : \alpha = \sup \left( {\alpha \cap S}\right) \} \) is closed unbounded and \( {S}^{\prime } = X \cap S \) .
In particular, \( {S}^{\prime } \neq \varnothing \) . For the remainder of the proof, we fix \( \delta \in {S}^{\prime } \) . In particular, \( \delta \in S \) and \( \delta \) is a limit point of elements of \( S \) .
Lemma 5.3.16 If \( \alpha \in S \) and \( \alpha < \delta \), then \( {\bar{d}}_{\alpha } \in {J}_{ < \delta } \) .
Proof Because \( \delta \) is a limit point of \( S \), there is \( \beta \in S \) with \( \alpha < \beta < \delta \) . Because \( \beta \in S,{\bar{d}}_{\alpha } \in {B}_{\beta } \) . By Lemma 5.3.11 i), \( {B}_{\beta } \cap J = {J}_{ < \beta } \) . Thus \( {\bar{d}}_{\alpha } \in {J}_{ < \beta } \subset {J}_{ < \delta } \)
Lemma 5.3.17 Let \( a \in {I}_{\delta } \) . There is \( x \in {J}_{ < \delta } \) and \( y \in {J}_{\delta } \) such that if \( {j}_{1},\ldots ,{j}_{n} \in J \) with \( x < {j}_{1} < \ldots < {j}_{n} < y \), then \( {t}^{\mathcal{B}}\left( {\bar{c},\bar{j}}\right) < a \) .
Proof Because \( \delta \in S,{A}_{\delta } = {B}_{\delta } \) and \( a \notin {B}_{\delta } \) . Let \( a = {s}^{\mathcal{B}}\left( {{x}_{1},\ldots ,{x}_{k},{y}_{1},\ldots ,{y}_{l}}\right) \) where \( s \) is a Skolem term, \( \bar{x} \in {J}_{ < \delta } \), and \( \bar{y} \in J \smallsetminus {J}_{ < \delta } \) . Note that \( l > 0 \) because \( a \notin {B}_{\delta } \) . Choose \( x \in {J}_{ < \delta } \) and \( y \in {J}_{\delta } \) such that \( x > \sup \left\{ {\bar{c},{x}_{1},\ldots ,{x}_{k}}\right\} \) and \( y < {y}_{i} \) for \( i = 1,\ldots, l \) . By indiscernibility, if \( {i}_{1} < \ldots < {i}_{n} \) and \( {j}_{1} < \ldots < {j}_{n} \) are two sequences from \( J \) with \( x < {i}_{1},{j}_{1} \) and \( {i}_{n},{j}_{n} < y \), then \( {t}^{\mathcal{B}}\left( {\bar{c},\bar{i}}\right) < a \) if and only if \( {t}^{\mathcal{B}}\left( {\bar{c},\bar{j}}\right) < a \) .
Because \( \delta \) is a limit point of \( S \), we can find \( \alpha < \delta \) with \( \alpha \in S \) such that \( x < {d}_{\alpha ,1} \) and \( {d}_{\alpha, n} < y \) . But then \( {t}^{\mathcal{B}}\left( {\bar{c},{\bar{d}}_{\alpha }}\right) = {a}_{\alpha } < a \) and hence \( {t}^{\mathcal{B}}\left( {\bar{c},\bar{j}}\right) < a \) for all \( {j}_{1},\ldots ,{j}_{n} \in J \) with \( x < {j}_{1} < \ldots < {j}_{n} < y \) .
Finally, we will exploit the fact that because \( \delta \in S,{I}_{\delta } \cong {\omega }^{ * } \) and \( {J}_{\delta } \cong {\omega }_{1}^{ * } \) .
Lemma 5.3.18 \( i \) ) If \( {j}_{1},\ldots ,{j}_{n} \in {J}_{\delta } \) and \( {j}_{1} < \ldots < {j}_{n} \), then \( {t}^{\mathcal{B}}\left( {\bar{c},\bar{j}}\right) > {a}_{\alpha } \) for \( \alpha \in S \) with \( \alpha < \delta \) .
ii) There are \( {j}_{1} < \ldots < {j}_{n} \) in \( {J}_{\delta } \) such that \( {t}^{\mathcal{B}}\left( {\bar{c},\bar{j}}\right) < a \) for all \( a \in {I}_{\delta } \) .
Proof
i) Because \( \delta \in S \) and \( \alpha < \delta ,{\bar{d}}_{\alpha } \in {B}_{\delta } \) . Because, by Lemma 5.3.11 i), \( {B}_{\delta } \cap J = {J}_{ < \delta },{d}_{\delta ,1} \notin {J}_{ < \delta } \) . Thus \( {d}_{\alpha, n} < {d}_{\delta ,1} \) .
Because
\[
{a}_{\alpha } = {t}^{\mathcal{B}}\left( {\bar{c},{\bar{d}}_{\alpha }}\right) < {t}^{\mathcal{B}}\left( {\bar{c},{\bar{d}}_{\delta }}\right)
\]
by indiscernibility
\[
{a}_{\alpha } = {t}^{\mathcal{B}}\left( {\bar{c},{\bar{d}}_{\alpha }}\right) < {t}^{\mathcal{B}}\left( {\bar{c},\bar{j}}\right)
\]
ii) Let \( {z}_{0} > {z}_{1} > \ldots \) be a cofinal descending sequence in \( {I}_{\delta } \) . For each \( i \) , by Lemma 5.3.17 we find \( {x}_{i} \in {J}_{ < \delta } \) and \( {y}_{i} \in {J}_{\delta } \) such that \( {t}^{\mathcal{B}}\left( {\bar{c},\bar{j}}\right) < {z}_{i} \) for all \( {j}_{1},\ldots ,{j}_{n} \in J \) with \( {x}_{i} < {j}_{1} < \ldots < {j}_{n} < {y}_{i} \) . Because \( {J}_{\delta } \) has order type \( {\omega }_{1}^{ * } \), we can find \( {j}_{1},\ldots ,{j}_{n} \in {J}_{\delta } \) such that \( {x}_{i} < {j}_{1} < \ldots < {j}_{n} < {y}_{i} \) for all \( i \) . Thus, \( {t}^{\mathcal{B}}\left( {\bar{c},\bar{j}}\right) < {z}_{i} \) for \( i = 0,1,2,\ldots \), and hence \( {t}^{\mathcal{B}}\left( {\bar{c},\bar{j}}\right) < a \) for all \( a \in {I}_{\delta } \) .
Thus, there is an element of \( \mathcal{M} \) that is above all of the elements of \( {I}_{ < \delta } \) but below all of the elements of \( {I}_{\delta } \) . Because \( \mathcal{A} \) is the Skolem hull of \( I \) , this violates Lemma 5.3.11 ii). Thus, \( {\mathcal{M}}^{A} \) and \( {\mathcal{M}}^{B} \) are not isomorphic as \( \mathcal{L} \) -structures.
In this proof, we needed \( \kappa > {\aleph }_{1} \) so we could use the ordering \( {\omega }_{1}^{ * } \) and still have \( \left| {A}_{\alpha }\right| < \kappa \) . More care is needed to prove the theorem when \( \kappa = {\aleph }_{1} \) .
## 5.4 An Independence Result in Arithmetic
Gödel’s famous Incompleteness Theorem asserts that there are sentences \( \phi \) in the language of arithmetic such that \( \phi \) is true in the natural numbers but unprovable from the Peano Axioms for arithmetic. Indeed, for any consistent recursive extension \( T \) of Peano arithmetic, we can find a sentence that is independent from \( T \) . The original independent sentences were self-referential sentences that asserted their own unprovability or metamathematical sentences asserting the consistency of the theory. People wondered whether the independent statements could be made more "mathematical." In the late 1970s, Paris and Harrington [73] showed that a slight variant of the finite version of Ramsey's Theorem is true but unprovable in Peano arithmetic. The proof is an interesting application of indiscernibles.
We begin with the combinatorial statement.
Theorem 5.4.1 (Paris-Harrington Principle) For all natural numbers \( n, k, m \), there is a number \( l \) such that if \( f : {\left\lbrack l\right\rbrack }^{n} \rightarrow k \), then there is \( Y \subseteq l \) such that \( Y \) is homogeneous for \( f,\left| Y\right| \geq m \), and if \( {y}_{0} \) is the least element of \( Y \), then \( \left| Y\right| \geq {y}_{0} \) .
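For very small parameters, the principle can be checked exhaustively by machine. The sketch below (the helper names are ours, and the brute force is feasible only for tiny \( l, n, k \) ) verifies that for \( n = 1, k = 2, m = 2 \) over the ground set \( \{ 0,\ldots, l - 1\} \) the least witness is \( l = 3 \) :

```python
from itertools import combinations, product

def has_large_homog(coloring, l, n, m):
    """Is there Y, homogeneous for this coloring of the n-subsets of
    {0,...,l-1}, with |Y| >= m and |Y| >= min(Y)?"""
    for size in range(max(m, n), l + 1):
        for Y in combinations(range(l), size):
            if size < Y[0]:                       # fails |Y| >= min(Y)
                continue
            if len({coloring[s] for s in combinations(Y, n)}) == 1:
                return True
    return False

def ph_holds(l, n, k, m):
    """Brute-force check of the principle over ALL k-colorings of [l]^n."""
    subsets = list(combinations(range(l), n))
    return all(has_large_homog(dict(zip(subsets, c)), l, n, m)
               for c in product(range(k), repeat=len(subsets)))

assert not ph_holds(2, 1, 2, 2)   # l = 2 is too small when n = 1, k = 2, m = 2
assert ph_holds(3, 1, 2, 2)       # but l = 3 suffices
```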
Proof We argue as in the proof of the finite version of Ramsey's Theorem. Suppose that there is no such \( l \) . For \( l < \omega \), let \( {T}_{l} = \{ f : {\left\lbrack \{ 0,\ldots, l - 1\} \right\rbrack }^{n} \rightarrow k \) : there is no \( Y \) homogeneous for \( f \) with \( \left| Y\right| \geq m \) and \( \left| Y\right| \geq \min Y\} \) . Clearly, each \( {T}_{l} \) is finite, and if \( f \in {T}_{l + 1} \), there is a unique \( g \in {T}_{l} \) such that \( g \subset f \) . Thus, if we order \( T = \bigcup {T}_{l} \) by inclusion, we get a finitely branching tree. By our assumption, each \( {T}_{l} \) is nonempty, so \( T \) is an infinite, finitely branching tree, and by König's Lemma there is a branch \( {f}_{0} \subset {f}_{1} \subset {f}_{2} \subset \ldots \) with \( {f}_{i} \in {T}_{i} \) .
Let \( f = \bigcup {f}_{i} \) . Then \( f : {\left\lbrack \mathbb{N}\right\rbrack }^{n} \rightarrow k \) . By Ramsey’s Theorem, there is an infinite \( X \subseteq \mathbb{N} \) homogeneous for \( f \) . Let \( {x}_{1} \) be the least element of \( X \) , and ch
Proposition 19.8 Suppose \( \rho \) is a density matrix on \( \mathbf{H} \) . Then the map \( {\Phi }_{\rho } : \mathcal{B}\left( \mathbf{H}\right) \rightarrow \mathbb{C} \) given by
\[
{\Phi }_{\rho }\left( A\right) = \operatorname{trace}\left( {\rho A}\right) = \operatorname{trace}\left( {A\rho }\right)
\]
is a family of expectation values.
Proof. If we define \( {\Phi }_{\rho }\left( A\right) = \operatorname{trace}\left( {\rho A}\right) \), then \( {\Phi }_{\rho }\left( I\right) = \operatorname{trace}\left( \rho \right) = 1 \) . For any \( A \in \mathcal{B}\left( \mathbf{H}\right) \), we have,
\[
\operatorname{trace}\left( {\rho {A}^{ * }}\right) = \operatorname{trace}\left( {{A}^{ * }\rho }\right) = \operatorname{trace}\left( {\left( \rho A\right) }^{ * }\right) = \overline{\operatorname{trace}\left( {\rho A}\right) }.
\]
It follows that \( \operatorname{trace}\left( {\rho A}\right) \) is real when \( A \) is self-adjoint. Let \( {\rho }^{1/2} \) be the nonnegative self-adjoint square root of \( \rho \) . Then \( {\rho }^{1/2} \) and \( A{\rho }^{1/2} \) are Hilbert-Schmidt (in the latter case, by Point 3 of Proposition 19.3). It follows that \( \operatorname{trace}\left( {A{\rho }^{1/2}{\rho }^{1/2}}\right) = \operatorname{trace}\left( {{\rho }^{1/2}A{\rho }^{1/2}}\right) \), by Proposition 19.5. Thus, if \( A \) is self-adjoint and non-negative,
\[
\operatorname{trace}\left( {\rho A}\right) = \operatorname{trace}\left( {{\rho }^{1/2}{\rho }^{1/2}A}\right) = \operatorname{trace}\left( {{\rho }^{1/2}A{\rho }^{1/2}}\right) \geq 0,
\]
(19.1)
because \( {\rho }^{1/2}A{\rho }^{1/2} \) is self-adjoint and non-negative. We have established that \( {\Phi }_{\rho } \) satisfies Points 1,2, and 3 of Definition 19.6.
Meanwhile, suppose \( {A}_{n}\psi \) converges in norm to \( {A\psi } \), for each \( \psi \) in \( \mathbf{H} \) . Then \( \begin{Vmatrix}{{A}_{n}\psi }\end{Vmatrix} \) is bounded as a function of \( n \) for each fixed \( \psi \) . Thus, by the principle of uniform boundedness (Theorem A.40), there is a constant \( C \) such that \( \begin{Vmatrix}{A}_{n}\end{Vmatrix} \leq C \) . Now, if \( \left\{ {e}_{j}\right\} \) is an orthonormal basis for \( \mathbf{H} \), we have
\[
\left| \left\langle {{e}_{j},{\rho }^{1/2}{A}_{n}{\rho }^{1/2}{e}_{j}}\right\rangle \right| = \left| \left\langle {{\rho }^{1/2}{e}_{j},{A}_{n}{\rho }^{1/2}{e}_{j}}\right\rangle \right| \leq C{\begin{Vmatrix}{\rho }^{1/2}{e}_{j}\end{Vmatrix}}^{2},
\]
and,
\[
\mathop{\sum }\limits_{j}{\begin{Vmatrix}{\rho }^{1/2}{e}_{j}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{j}\left\langle {{\rho }^{1/2}{e}_{j},{\rho }^{1/2}{e}_{j}}\right\rangle = \mathop{\sum }\limits_{j}\left\langle {{e}_{j},\rho {e}_{j}}\right\rangle = \operatorname{trace}\left( \rho \right) < \infty .
\]
Furthermore, since \( {A}_{n}\left( {{\rho }^{1/2}{e}_{j}}\right) \) converges to \( A\left( {{\rho }^{1/2}{e}_{j}}\right) \) for each \( j \), dominated convergence tells us that
\[
\operatorname{trace}\left( {{\rho }^{1/2}A{\rho }^{1/2}}\right) = \mathop{\sum }\limits_{j}\left\langle {{e}_{j},{\rho }^{1/2}A{\rho }^{1/2}{e}_{j}}\right\rangle
\]
\[
= \mathop{\lim }\limits_{{n \rightarrow \infty }}\mathop{\sum }\limits_{j}\left\langle {{e}_{j},{\rho }^{1/2}{A}_{n}{\rho }^{1/2}{e}_{j}}\right\rangle
\]
\[
= \mathop{\lim }\limits_{{n \rightarrow \infty }}\operatorname{trace}\left( {{\rho }^{1/2}{A}_{n}{\rho }^{1/2}}\right) .
\]
As in (19.1), we can shift the second factor of \( {\rho }^{1/2} \) to the front of the trace to obtain Point 4 in Definition 19.6. ∎
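In finite dimensions, the properties verified in this proof are easy to check numerically. The following sketch is our own illustration using numpy (the random \( \rho \) and \( A \) are not from the text); it tests Points 1–3 for \( {\Phi }_{\rho }\left( A\right) = \operatorname{trace}\left( {\rho A}\right) \):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random density matrix: non-negative, self-adjoint, trace one.
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = B @ B.conj().T
rho /= np.trace(rho).real

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Phi_rho(I) = trace(rho) = 1, and trace(rho A) = trace(A rho).
assert np.isclose(np.trace(rho).real, 1.0)
assert np.isclose(np.trace(rho @ A), np.trace(A @ rho))

# trace(rho A) is real when A is self-adjoint ...
H = A + A.conj().T
assert abs(np.trace(rho @ H).imag) < 1e-10

# ... and non-negative when A is also non-negative, as in (19.1).
P = A @ A.conj().T
assert np.trace(rho @ P).real >= -1e-10
```

The continuity property (Point 4) is the only part that genuinely needs the infinite-dimensional argument in the proof; in finite dimensions it is automatic.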
Theorem 19.9 For any family of expectation values \( \Phi : \mathcal{B}\left( \mathbf{H}\right) \rightarrow \mathbb{C} \), there is a unique density matrix \( \rho \) such that \( \Phi \left( A\right) = \operatorname{trace}\left( {\rho A}\right) \) for all \( A \in \mathcal{B}\left( \mathbf{H}\right) \) .
Proof. Recall from Sect. 3.12 the Dirac notation, in which the expression \( \left| {\phi \rangle \langle \psi }\right| \) denotes the linear operator taking any vector \( \chi \in \mathbf{H} \) to the vector \( \left| {\phi \rangle \langle \psi }\right| \chi \rangle \) (in physics notation), that is, the vector \( \langle \psi ,\chi \rangle \phi \) (in math notation). If \( \rho \) is trace class, then by Exercise 2,
\[
\operatorname{trace}\left( {\rho \left| {\phi \rangle \langle \psi }\right| }\right) = \langle \psi ,{\rho \phi }\rangle
\]
Thus, if an operator \( \rho \) with the desired properties is to exist, we must have
\[
\langle \psi ,{\rho \phi }\rangle = \Phi \left( \left| {\phi \rangle \langle \psi }\right| \right) .
\]
Now, by Exercise 3, \( \Phi \) satisfies \( \left| {\Phi \left( A\right) }\right| \leq \parallel A\parallel \) . From this, we can see that the map
\[
{L}_{\Phi }\left( {\phi ,\psi }\right) \mathrel{\text{:=}} \Phi \left( \left| {\phi \rangle \langle \psi }\right| \right)
\]
is a bounded sesquilinear form, so that (by Proposition A.63), there is a unique bounded operator \( \rho \) such that \( \Phi \left( \left| {\phi \rangle \langle \psi }\right| \right) = \langle \psi ,{\rho \phi }\rangle \) for all \( \phi \) and \( \psi \) . Since \( \left| {\phi \rangle \langle \phi }\right| \) is self-adjoint and non-negative, \( {L}_{\Phi }\left( {\phi ,\phi }\right) \) is real and non-negative, which means that \( \rho \) is self-adjoint (by Proposition A.63) and non-negative.
Meanwhile, if \( \left\{ {e}_{j}\right\} \) is an orthonormal basis for \( \mathbf{H} \), then by Definition 19.2,
\[
\operatorname{trace}\left( \rho \right) = \mathop{\lim }\limits_{{N \rightarrow \infty }}\mathop{\sum }\limits_{{j = 1}}^{N}\left\langle {{e}_{j},\rho {e}_{j}}\right\rangle
\]
\[
= \mathop{\lim }\limits_{{N \rightarrow \infty }}\Phi \left( {\left| {e}_{1}\right\rangle \left\langle {e}_{1}\right| + \cdots + \left| {e}_{N}\right\rangle \left\langle {e}_{N}\right| }\right)
\]
\[
= \Phi \left( I\right) = 1\text{.}
\]
In passing from the second line to the third, we have used Point 4 of Definition 19.6. Thus, \( \rho \) is a density matrix.
We have now found a density matrix \( \rho \) such that \( \Phi \left( \left| {\phi \rangle \langle \psi }\right| \right) \) agrees with \( \operatorname{trace}\left( {\rho \left| {\phi \rangle \langle \psi }\right| }\right) \) for all \( \phi ,\psi \in \mathbf{H} \) . By linearity, \( \Phi \left( A\right) = \operatorname{trace}\left( {\rho A}\right) \) for all finite-rank operators \( A \) (see Exercise 4). Now, if \( \left\{ {e}_{j}\right\} \) is an orthonormal basis for \( \mathbf{H} \), let \( {P}_{N} \) be the orthogonal projection onto the span of \( {e}_{1},\ldots ,{e}_{N} \) . Then for any \( A \in \mathcal{B}\left( \mathbf{H}\right) \), the operator \( {P}_{N}A \) has finite rank and \( {P}_{N}{A\psi } \rightarrow {A\psi } \) for all \( \psi \in \mathbf{H} \) . Thus, for all \( A \in \mathcal{B}\left( \mathbf{H}\right) \) ,
\[
\Phi \left( A\right) = \mathop{\lim }\limits_{{N \rightarrow \infty }}\Phi \left( {{P}_{N}A}\right) = \mathop{\lim }\limits_{{N \rightarrow \infty }}\operatorname{trace}\left( {\rho {P}_{N}A}\right) = \operatorname{trace}\left( {\rho A}\right) ,
\]
by Proposition 19.8. ∎
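In finite dimensions the uniqueness argument is constructive: since \( \langle \psi ,{\rho \phi }\rangle = \Phi \left( \left| {\phi \rangle \langle \psi }\right| \right) \), the entries of \( \rho \) in an orthonormal basis can be read off from \( \Phi \) applied to rank-one operators. A numpy sketch of our own (the "hidden" density matrix below is an assumption, used only to produce a valid family \( \Phi \)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Hidden density matrix defining the family Phi(A) = trace(rho_true A).
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho_true = B @ B.conj().T
rho_true /= np.trace(rho_true).real

def Phi(A):
    return np.trace(rho_true @ A)

# <e_j, rho e_k> = Phi(|e_k><e_j|); in the standard basis, |e_k><e_j|
# is the matrix with a single 1 in row k, column j.
rho = np.zeros((n, n), dtype=complex)
for j in range(n):
    for k in range(n):
        E = np.zeros((n, n))
        E[k, j] = 1.0
        rho[j, k] = Phi(E)

# The reconstruction recovers rho_true, and trace(rho A) = Phi(A).
assert np.allclose(rho, rho_true)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
assert np.isclose(np.trace(rho @ A), Phi(A))
```

The final approximation step of the proof, with the projections \( {P}_{N} \), is what replaces this finite loop in infinite dimensions.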
Our next result shows that our new notion of the state of a system includes our old notion.
Proposition 19.10 For any unit vector \( \psi \in \mathbf{H} \), let \( \left| {\psi \rangle \langle \psi }\right| \), in accordance with Notation 3.29, denote the orthogonal projection onto the span of \( \psi \) . Then \( \left| {\psi \rangle \langle \psi }\right| \) is a density matrix and for all \( A \in \mathcal{B}\left( \mathbf{H}\right) \), we have
\[
\operatorname{trace}\left( {\left| {\psi \rangle \langle \psi }\right| A}\right) = \langle \psi ,{A\psi }\rangle
\]
Note that if \( {\psi }_{2} = {e}^{i\theta }{\psi }_{1} \), then \( \left| {\psi }_{1}\right\rangle \left\langle {\psi }_{1}\right| = \left| {\psi }_{2}\right\rangle \left\langle {\psi }_{2}\right| \) . Thus, from our new point of view, we may say that the reason \( {\psi }_{1} \) and \( {\psi }_{2} \) represent the same "physical state" is that they determine the same density matrix.
Proof. Since it is an orthogonal projection, \( \left| {\psi \rangle \langle \psi }\right| \) is bounded, self-adjoint, and non-negative. To compute its trace, we choose an orthonormal basis
\( \left\{ {e}_{j}\right\} \) for \( \mathbf{H} \) with \( {e}_{1} = \psi \), which gives \( \operatorname{trace}\left( \left| {\psi \rangle \langle \psi }\right| \right) = 1 \) . Using the same orthonormal basis, we compute that, for any \( A \in \mathcal{B}\left( \mathbf{H}\right) \) ,
\[
\operatorname{trace}\left( {\left| {\psi \rangle \langle \psi }\right| A}\right) = \mathop{\sum }\limits_{j}\left\langle {{e}_{j},\psi }\right\rangle \left\langle {\psi, A{e}_{j}}\right\rangle = \langle \psi ,{A\psi }\rangle
\]
as desired.
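Proposition 19.10 can also be checked directly in finite dimensions. The following numpy sketch (our own illustration, not from the text) verifies both the trace and the expectation-value identity:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

# |psi><psi| in matrix form: chi |-> <psi, chi> psi.
proj = np.outer(psi, psi.conj())

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# trace(|psi><psi|) = 1 and trace(|psi><psi| A) = <psi, A psi>.
# (np.vdot conjugates its first argument, matching the math convention.)
assert np.isclose(np.trace(proj).real, 1.0)
assert np.isclose(np.trace(proj @ A), np.vdot(psi, A @ psi))
```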
Definition 19.11 A density matrix \( \rho \in \mathcal{B}\left( \mathbf{H}\right) \) is a pure state if there exists a unit vector \( \psi \in \mathbf{H} \) such that \( \rho \) is equal to the orthogonal projection onto the span of \( \psi \) . The density matrix \( \rho \) is called a mixed state if no such unit vector \( \psi \) exists.
An isolated system that is in a pure state initially will remain in a pure state for all later times, since the initial state \( {\psi }_{0} \) evolves to the pure state \( {e}^{-i\widehat{H}t/\hslash }{\psi }_{0} \), where \( \widehat{H} \) is the Hamiltonian for the system. But if a system is interacting with its environment, then as discussed in Sect. 19.5, the system may move into a mixed state at a later time.
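The claim that an isolated system in a pure state stays pure can be seen numerically: the purity \( \operatorname{trace}\left( {\rho }^{2}\right) \) is invariant under \( \rho \mapsto U\rho {U}^{ * } \) for any unitary \( U \). In the sketch below (our own illustration), a random unitary stands in for \( {e}^{-i\widehat{H}t/\hslash } \):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())  # a pure state

# Any unitary (here from a QR factorization) models the time evolution.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
rho_t = U @ rho @ U.conj().T

# Purity trace(rho^2) = 1 is preserved under the evolution ...
assert np.isclose(np.trace(rho @ rho).real, 1.0)
assert np.isclose(np.trace(rho_t @ rho_t).real, 1.0)

# ... while a proper mixture has trace(rho^2) < 1.
mixed = np.eye(n) / n
assert np.trace(mixed @ mixed) < 1.0
```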
There are several different ways of characterizing the pure states as a subset of the density matrices. First, it is not hard to see (Exercise 6) that a density matrix \( \rho \) is a pure state if and only if \( \operatorname{trace}\left( {\rho }^{2}\right) = 1 \) . Second, the set of density matrices is a convex set, since if \( {\rho }_{1
Exercise 2.29. Given a Sudoku game and a solution \( \bar{x} \), formulate as an integer linear program the problem of certifying that \( \bar{x} \) is the unique solution.
Exercise 2.30 (Crucipixel Game). Given a \( m \times n \) grid, the purpose of the game is to darken some of the cells so that in every row (resp. column) the darkened cells form distinct strings of the lengths and in the order prescribed by the numbers on the left of the row (resp. on top of the column).
Two strings are distinct if they are separated by at least one white cell. For instance, in the figure below the tenth column must contain a string of length 6 followed by some white cells and then a string of length 2. The game consists in darkening the cells to satisfy the requirements.

- Formulate the game as an integer linear program.
- Formulate the problem of certifying that a given solution is unique as an integer linear program.
- Play the game in the figure.
Exercise 2.31. Let \( P = \left\{ {{A}_{1}x \leq {b}_{1}}\right\} \) be a polytope and \( S = \left\{ {{A}_{2}x < {b}_{2}}\right\} \) . Formulate the problem of maximizing a linear function over \( P \smallsetminus S \) as a mixed 0,1 program.
Exercise 2.32. Consider continuous variables \( {y}_{j} \) that can take any value between 0 and \( {u}_{j} \), for \( j = 1,\ldots, k \) . Write a set of mixed integer linear constraints to impose that at most \( \ell \) of the \( k \) variables \( {y}_{j} \) can take a nonzero value. [Hint: use \( k \) binary variables \( {x}_{j} \in \{ 0,1\} \) .] Either prove that your formulation is perfect, in the spirit of Proposition 2.6, or give an example showing that it is not.
Exercise 2.33. Assume \( c \in {\mathbb{Z}}^{n}, A \in {\mathbb{Z}}^{m \times n}, b \in {\mathbb{Z}}^{m} \) . Give a polynomial transformation of the 0,1 linear program
\( \max \;{cx} \)
\[
{Ax} \leq b
\]
\[
x \in \{ 0,1{\} }^{n}
\]
into a quadratic program
\[
\max \;{cx} - M{x}^{T}\left( {1 - x}\right)
\]
\[
{Ax} \leq b
\]
\[
0 \leq x \leq 1,
\]
i.e., show how to choose the scalar \( M \) as a function of \( A, b \) and \( c \) so that an optimal solution of the quadratic program is always an optimal solution of the 0,1 linear program (if any).

The authors working on Chap. 2

Giacomo Zambelli at the US border. Immigration Officer: What is the purpose of your trip? Giacomo: Visiting a colleague; I am a mathematician. Immigration Officer: What do mathematicians do? Giacomo: Sit in a chair and think.
## Chapter 3
## Linear Inequalities and Polyhedra
The focus of this chapter is on the study of systems of linear inequalities \( {Ax} \leq b \) . We look at this subject from two different angles. The first, more algebraic, addresses the issue of solvability of \( {Ax} \leq b \) . The second studies the geometric properties of the set of solutions \( \left\{ {x \in {\mathbb{R}}^{n} : {Ax} \leq b}\right\} \) of such systems. In particular, this chapter covers Fourier's elimination procedure, Farkas' lemma, linear programming, the theorem of Minkowski-Weyl, polarity, Carathéodory's theorem, projections and minimal representations of the set \( \left\{ {x \in {\mathbb{R}}^{n} : {Ax} \leq b}\right\} \) .
## 3.1 Fourier Elimination
The most basic question concerning a system of linear inequalities is whether or not it has a solution. Fourier [145] devised a simple method to address this problem. Fourier's method is similar to Gaussian elimination, in that it performs row operations to eliminate one variable at a time.
Let \( A \in {\mathbb{R}}^{m \times n} \) and \( b \in {\mathbb{R}}^{m} \), and suppose we want to determine if the system \( {Ax} \leq b \) has a solution. We first reduce this question to one about a system with \( n - 1 \) variables. Namely, we determine necessary and sufficient conditions for which, given a vector \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1}}\right) \in {\mathbb{R}}^{n - 1} \), there exists \( {\bar{x}}_{n} \in \mathbb{R} \) such that \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n}}\right) \) satisfies \( {Ax} \leq b \) . Let \( I \mathrel{\text{:=}} \{ 1,\ldots, m\} \) and define
\[
{I}^{ + } \mathrel{\text{:=}} \left\{ {i \in I : {a}_{in} > 0}\right\} ,\;{I}^{ - } \mathrel{\text{:=}} \left\{ {i \in I : {a}_{in} < 0}\right\} ,\;{I}^{0} \mathrel{\text{:=}} \left\{ {i \in I : {a}_{in} = 0}\right\} .
\]
(C) Springer International Publishing Switzerland 2014
M. Conforti et al., Integer Programming, Graduate Texts
in Mathematics 271, DOI 10.1007/978-3-319-11008-0_3
Dividing the \( i \) th row by \( \left| {a}_{in}\right| \) for each \( i \in {I}^{ + } \cup {I}^{ - } \), we obtain the following system, which is equivalent to \( {Ax} \leq b \) :
\[
\mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{ij}^{\prime }{x}_{j}\; + {x}_{n}\; \leq {b}_{i}^{\prime },\;i \in {I}^{ + }
\]
\[
\mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{ij}^{\prime }{x}_{j}\; - {x}_{n}\; \leq {b}_{i}^{\prime },\;i \in {I}^{ - }
\]
(3.1)
\[
\mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{ij}{x}_{j}\; \leq {b}_{i},\;i \in {I}^{0}
\]
where \( {a}_{ij}^{\prime } = {a}_{ij}/\left| {a}_{in}\right| \) and \( {b}_{i}^{\prime } = {b}_{i}/\left| {a}_{in}\right| \) for \( i \in {I}^{ + } \cup {I}^{ - } \) .
For each pair \( i \in {I}^{ + } \) and \( k \in {I}^{ - } \), we sum the two inequalities indexed by \( i \) and \( k \), and we add the resulting inequality to the system (3.1). Furthermore, we remove the inequalities indexed by \( {I}^{ + } \) and \( {I}^{ - } \) . This way, we obtain the following system:
\[
\mathop{\sum }\limits_{{j = 1}}^{{n - 1}}\left( {{a}_{ij}^{\prime } + {a}_{kj}^{\prime }}\right) {x}_{j} \leq {b}_{i}^{\prime } + {b}_{k}^{\prime },\;i \in {I}^{ + }, k \in {I}^{ - },
\]
(3.2)
\[
\mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{ij}{x}_{j} \leq {b}_{i},\;i \in {I}^{0}.
\]
If \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1},{\bar{x}}_{n}}\right) \) satisfies \( {Ax} \leq b \), then \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1}}\right) \) satisfies (3.2). The next theorem states that the converse also holds.
Theorem 3.1. A vector \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1}}\right) \) satisfies the system (3.2) if and only if there exists \( {\bar{x}}_{n} \) such that \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1},{\bar{x}}_{n}}\right) \) satisfies \( {Ax} \leq b \) .
Proof. We already remarked the "if" statement. For the converse, assume there is a vector \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1}}\right) \) satisfying (3.2). Note that the first set of inequalities in (3.2) can be rewritten as
\[
\mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{kj}^{\prime }{x}_{j} - {b}_{k}^{\prime } \leq {b}_{i}^{\prime } - \mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{ij}^{\prime }{x}_{j},\;i \in {I}^{ + }, k \in {I}^{ - }.
\]
(3.3)
Let \( l : = \mathop{\max }\limits_{{k \in {I}^{ - }}}\{ \mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{kj}^{\prime }{\bar{x}}_{j} - {b}_{k}^{\prime }\} \) and \( u : = \mathop{\min }\limits_{{i \in {I}^{ + }}}\{ {b}_{i}^{\prime } - \mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{ij}^{\prime }{\bar{x}}_{j}\} , \) where we define \( l \mathrel{\text{:=}} - \infty \) if \( {I}^{ - } = \varnothing \) and \( u \mathrel{\text{:=}} + \infty \) if \( {I}^{ + } = \varnothing \) . Since \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1}}\right) \) satisfies (3.3), we have that \( l \leq u \) . Therefore, for any \( {\bar{x}}_{n} \) such that \( l \leq {\bar{x}}_{n} \leq u \), the vector \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n}}\right) \) satisfies the system (3.1), which is equivalent to \( {Ax} \leq b \) .
Therefore, the problem of finding a solution to \( {Ax} \leq b \) is reduced to finding a solution to (3.2), which is a system of linear inequalities in \( n - 1 \) variables. Fourier's elimination method is:
Given a system of linear inequalities \( {Ax} \leq b \), let \( {A}^{n} \mathrel{\text{:=}} A,{b}^{n} \mathrel{\text{:=}} b \) ;
For \( i = n,\ldots ,1 \), eliminate variable \( {x}_{i} \) from \( {A}^{i}x \leq {b}^{i} \) with the above procedure to obtain system \( {A}^{i - 1}x \leq {b}^{i - 1} \) .
System \( {A}^{1}x \leq {b}^{1} \), which involves variable \( {x}_{1} \) only, consists of inequalities of the type \( {x}_{1} \leq {b}_{p}^{1} \), \( p \in P \) ; \( - {x}_{1} \leq {b}_{q}^{1} \), \( q \in N \) ; and \( 0 \leq {b}_{i}^{1} \), \( i \in Z \) .
System \( {A}^{0}x \leq {b}^{0} \) has the following inequalities: \( 0 \leq {b}_{pq}^{0} \mathrel{\text{:=}} {b}_{p}^{1} + {b}_{q}^{1} \) , \( p \in P, q \in N,0 \leq {b}_{i}^{0} \mathrel{\text{:=}} {b}_{i}^{1}, i \in Z. \)
Applying Theorem 3.1, we obtain that \( {Ax} \leq b \) is feasible if and only if \( {A}^{0}x \leq {b}^{0} \) is feasible, and this happens when the \( {b}_{pq}^{0} \) and \( {b}_{i}^{0} \) are all nonnegative.
## Remark 3.2.
(i) At each iteration, Fourier’s method removes \( \left| {I}^{ + }\right| + \left| {I}^{ - }\right| \) inequalities and adds \( \left| {I}^{ + }\right| \times \left| {I}^{ - }\right| \) inequalities, hence the number of inequalities may roughly be squared at each iteration. Thus, after eliminating \( p \) variables, the number of inequalities may be exponential in p.
(ii) If matrix \( A \) and vector \( b \) have only rational entries, then all coefficients in (3.2) are rational.
(iii) Every inequality of \( {A}^{i}x \leq {b}^{i} \) is a nonnegative combination of inequalities of \( {Ax} \leq b \) .
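The procedure above (normalize so the coefficient of \( {x}_{n} \) is \( \pm 1 \), form the pairwise sums (3.2), repeat until all variables are eliminated, then check the signs of the final right-hand sides) can be sketched in a few lines of Python. This is our own illustration of the method, not code from the text:

```python
import numpy as np

def eliminate_last(A, b):
    """One step of Fourier elimination: project Ax <= b onto the first
    n-1 variables by eliminating x_n, following (3.1)-(3.2)."""
    m, n = A.shape
    Ipos = [i for i in range(m) if A[i, n - 1] > 0]
    Ineg = [i for i in range(m) if A[i, n - 1] < 0]
    Izero = [i for i in range(m) if A[i, n - 1] == 0]
    rows, rhs = [], []
    for i in Ipos:
        ai, bi = A[i] / abs(A[i, n - 1]), b[i] / abs(A[i, n - 1])
        for k in Ineg:
            ak, bk = A[k] / abs(A[k, n - 1]), b[k] / abs(A[k, n - 1])
            rows.append((ai + ak)[: n - 1])  # coefficient of x_n cancels
            rhs.append(bi + bk)
    for i in Izero:
        rows.append(A[i, : n - 1])
        rhs.append(b[i])
    if not rows:
        return np.zeros((0, n - 1)), np.zeros(0)
    return np.array(rows), np.array(rhs)

def feasible(A, b):
    """Fourier's method: Ax <= b is feasible iff, after eliminating every
    variable, each resulting inequality 0 <= b_i has b_i nonnegative."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    while A.shape[1] > 0:
        A, b = eliminate_last(A, b)
    return bool(np.all(b >= -1e-9))

# x1 + x2 <= 1, x1 >= 0, x2 >= 0 is feasible; it becomes
# infeasible once we also require x1 + x2 >= 2.
assert feasible([[1, 1], [-1, 0], [0, -1]], [1, 0, 0])
assert not feasible([[1, 1], [-1, 0], [0, -1], [-1, -1]], [1, 0, 0, -2])
```

Note that, as Remark 3.2(i) warns, the number of rows produced by `eliminate_last` can roughly square at each step, so this is a certificate-producing method rather than a practical solver.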
Example 3.3. Consider the system \( {A}^{3}x \leq {b}^{3} \) of linear inequalities in three variables
\[
- {x}_{2} \leq - 1
\]
\[
- {x}_{1} - {x}_{2}
\]
Proposition 3.7.1. The period 2 elliptic points of \( {\Gamma }_{0}\left( N\right) \) are in bijective correspondence with the ideals \( J \) of \( \mathbb{Z}\left\lbrack i\right\rbrack \) such that \( \mathbb{Z}\left\lbrack i\right\rbrack /J \cong \mathbb{Z}/N\mathbb{Z} \) . The period 3 elliptic points of \( {\Gamma }_{0}\left( N\right) \) are in bijective correspondence with the ideals \( J \) of \( \mathbb{Z}\left\lbrack {\mu }_{6}\right\rbrack \) (where \( {\mu }_{6} = {e}^{{2\pi i}/6} \) ) such that \( \mathbb{Z}\left\lbrack {\mu }_{6}\right\rbrack /J \cong \mathbb{Z}/N\mathbb{Z} \) .
Counting the ideals gives
Corollary 3.7.2. The number of elliptic points for \( {\Gamma }_{0}\left( N\right) \) is
\[
{\varepsilon }_{2}\left( {{\Gamma }_{0}\left( N\right) }\right) = \left\{ \begin{array}{ll} \mathop{\prod }\limits_{{p \mid N}}\left( {1 + \left( \frac{-1}{p}\right) }\right) & \text{ if }4 \nmid N, \\ 0 & \text{ if }4 \mid N, \end{array}\right.
\]
where \( \left( {-1/p}\right) \) is \( \pm 1 \) if \( p \equiv \pm 1\left( {\;\operatorname{mod}\;4}\right) \) and is 0 if \( p = 2 \), and
\[
{\varepsilon }_{3}\left( {{\Gamma }_{0}\left( N\right) }\right) = \left\{ \begin{array}{ll} \mathop{\prod }\limits_{{p \mid N}}\left( {1 + \left( \frac{-3}{p}\right) }\right) & \text{ if }9 \nmid N, \\ 0 & \text{ if }9 \mid N, \end{array}\right.
\]
where \( \left( {-3/p}\right) \) is \( \pm 1 \) if \( p \equiv \pm 1\left( {\;\operatorname{mod}\;3}\right) \) and is 0 if \( p = 3 \) .
These formulas extend Exercise 2.3.7(c) and Exercise 3.1.4(b, c).
Proof. This is an application of beginning algebraic number theory; see for example Chapter 9 of [IR92] for the results to quote. For period 3, the ring \( A = \mathbb{Z}\left\lbrack {\mu }_{6}\right\rbrack \) is a principal ideal domain and its maximal ideals are
- for each prime \( p \equiv 1\left( {\;\operatorname{mod}\;3}\right) \), two ideals \( {J}_{p} = \left\langle {a + b{\mu }_{6}}\right\rangle \) and \( {\bar{J}}_{p} = \langle a + \) \( \left. {b{\bar{\mu }}_{6}}\right\rangle \) such that \( \langle p\rangle = {J}_{p}{\bar{J}}_{p} \) and the quotients \( A/{J}_{p}^{e} \) and \( A/{\bar{J}}_{p}^{e} \) are group-isomorphic to \( \mathbb{Z}/{p}^{e}\mathbb{Z} \) for all \( e \in \mathbb{N} \) ,
- for each prime \( p \equiv - 1\left( {\;\operatorname{mod}\;3}\right) \), the ideal \( {J}_{p} = \langle p\rangle \) such that the quotient \( A/{J}_{p}^{e} \) is group-isomorphic to \( {\left( \mathbb{Z}/{p}^{e}\mathbb{Z}\right) }^{2} \) for all \( e \in \mathbb{N} \) ,
- for \( p = 3 \), the ideal \( {J}_{3} = \left\langle {1 + {\mu }_{6}}\right\rangle \) such that \( \langle 3\rangle = {J}_{3}^{2} \) and the quotient \( A/{J}_{3}^{e} \) is group-isomorphic to \( {\left( \mathbb{Z}/{3}^{e/2}\mathbb{Z}\right) }^{2} \) for even \( e \in \mathbb{N} \) and is group-isomorphic to \( \mathbb{Z}/{3}^{\left( {e + 1}\right) /2}\mathbb{Z} \oplus \mathbb{Z}/{3}^{\left( {e - 1}\right) /2}\mathbb{Z} \) for odd \( e \in \mathbb{N} \) .
The formula for \( {\varepsilon }_{3}\left( {{\Gamma }_{0}\left( N\right) }\right) \) now follows from Proposition 3.7.1 and the Chinese Remainder Theorem. Counting the period 2 elliptic points is left as Exercise 3.7.5(b), similarly citing the theory of the ring \( A = \mathbb{Z}\left\lbrack i\right\rbrack \) .
The elliptic points of \( {\Gamma }_{0}\left( N\right) \) can be written down easily now that they are counted. Consider the set of translates in \( \mathcal{H} \)
\[
\left\{ {\left\lbrack \begin{array}{ll} 1 & 0 \\ n & 1 \end{array}\right\rbrack \left( {\mu }_{3}\right) : 0 \leq n < N}\right\} .
\]
The corresponding isotropy subgroup generators \( \left\lbrack \begin{array}{ll} 1 & 0 \\ n & 1 \end{array}\right\rbrack \left\lbrack \begin{array}{rr} 0 & - 1 \\ 1 & 1 \end{array}\right\rbrack \left\lbrack \begin{array}{rr} 1 & 0 \\ - n & 1 \end{array}\right\rbrack \) are
\[
\left\{ {\left\lbrack \begin{matrix} n & - 1 \\ {n}^{2} - n + 1 & 1 - n \end{matrix}\right\rbrack : 0 \leq n < N}\right\} .
\]
The number of these that are elements of \( {\Gamma }_{0}\left( N\right) \) is the number of solutions to the congruence \( {x}^{2} - x + 1 \equiv 0\left( {\;\operatorname{mod}\;N}\right) \), and this number is given by the formula for \( {\varepsilon }_{3}\left( {{\Gamma }_{0}\left( N\right) }\right) \) in Corollary 3.7.2 (Exercise 3.7.6(a)). The cosets \( \left\{ {{\Gamma }_{0}\left( N\right) \left\lbrack \begin{array}{ll} 1 & 0 \\ n & 1 \end{array}\right\rbrack : 0 \leq n < N}\right\} \) are distinct in the quotient space \( {\Gamma }_{0}\left( N\right) \smallsetminus {\mathrm{{SL}}}_{2}\left( \mathbb{Z}\right) \) , though they do not constitute the entire quotient space, and the corresponding
orbits \( {\Gamma }_{0}\left( N\right) \left\lbrack \begin{array}{ll} 1 & 0 \\ n & 1 \end{array}\right\rbrack \left( {\mu }_{3}\right) \) for those \( n \) such that \( {n}^{2} - n + 1 \equiv 0\left( {\;\operatorname{mod}\;N}\right) \) are distinct in \( {X}_{0}\left( N\right) \) (Exercise 3.7.6(b)). Thus we have found all the period 3 elliptic points of \( {\Gamma }_{0}\left( N\right) \) (Exercise 3.7.6(c)),
\[
{\Gamma }_{0}\left( N\right) \frac{n + {\mu }_{3}}{{n}^{2} - n + 1},\;{n}^{2} - n + 1 \equiv 0\left( {\;\operatorname{mod}\;N}\right) .
\]
(3.14)
Similarly, the period 2 elliptic points are (Exercise 3.7.6(d))
\[
{\Gamma }_{0}\left( N\right) \frac{n + i}{{n}^{2} + 1},\;{n}^{2} + 1 \equiv 0\left( {\;\operatorname{mod}\;N}\right) .
\]
(3.15)
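The product formulas of Corollary 3.7.2 can be compared numerically with the solution counts of the congruences \( {x}^{2} + 1 \equiv 0 \) and \( {x}^{2} - x + 1 \equiv 0\left( {\;\operatorname{mod}\;N}\right) \) appearing in (3.15) and (3.14); this is the content of Exercise 3.7.6(a) and its period 2 analogue. The following script is our own illustration, not part of the text:

```python
from math import prod

def prime_factors(n):
    """Distinct prime divisors of n, by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def eps2(N):
    """Number of period 2 elliptic points of Gamma_0(N), Corollary 3.7.2."""
    if N % 4 == 0:
        return 0
    return prod(1 + (0 if p == 2 else (1 if p % 4 == 1 else -1))
                for p in prime_factors(N))

def eps3(N):
    """Number of period 3 elliptic points of Gamma_0(N), Corollary 3.7.2."""
    if N % 9 == 0:
        return 0
    return prod(1 + (0 if p == 3 else (1 if p % 3 == 1 else -1))
                for p in prime_factors(N))

# Compare with the congruence solution counts from (3.14) and (3.15).
for N in range(1, 200):
    assert eps2(N) == sum((x * x + 1) % N == 0 for x in range(N))
    assert eps3(N) == sum((x * x - x + 1) % N == 0 for x in range(N))
```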
## Exercises
3.7.1. (a) Show that if \( \gamma \) generates a nontrivial isotropy subgroup in \( {\mathrm{{SL}}}_{2}\left( \mathbb{Z}\right) \) then \( \gamma \) and \( {\gamma }^{-1} \) are not conjugate in \( {\mathrm{{GL}}}_{2}^{ + }\left( \mathbb{Q}\right) \) . (Hints for this exercise are at the end of the book.)
(b) Show that any two conjugacy classes in a group are either equal or disjoint.
(c) Show that the \( {\Gamma }_{0}^{ \pm }\left( N\right) \) -conjugacy class of \( \gamma \in {\Gamma }_{0}\left( N\right) \) is the union of the \( {\Gamma }_{0}\left( N\right) \) -conjugacy classes of \( \gamma \) and \( \left\lbrack \begin{matrix} 1 & 0 \\ 0 & - 1 \end{matrix}\right\rbrack \gamma \left\lbrack \begin{matrix} 1 & 0 \\ 0 & - 1 \end{matrix}\right\rbrack \) . Show that if \( \gamma \) has order 4 or 6 then this union is disjoint.
(d) Let \( \gamma = \left\lbrack \begin{array}{ll} 1 & 1 \\ 1 & 2 \end{array}\right\rbrack \left\lbrack \begin{array}{rr} 0 & - 1 \\ 1 & 0 \end{array}\right\rbrack {\left\lbrack \begin{array}{ll} 1 & 1 \\ 1 & 2 \end{array}\right\rbrack }^{-1} = \left\lbrack \begin{array}{ll} 3 & - 2 \\ 5 & - 3 \end{array}\right\rbrack \), an order-4 element of \( {\Gamma }_{0}\left( 5\right) \) . Show that \( \gamma \) is not conjugate to its inverse in \( {\Gamma }_{0}^{ \pm }\left( 5\right) \) .
3.7.2. In the context of mapping a matrix conjugacy class to an ideal, show that \( {L}_{0}\left( N\right) \) is an \( A \) -submodule of \( L \) .
3.7.3. (a) In the context of mapping an ideal \( J \) of \( A \) such that \( A/J \cong \mathbb{Z}/N\mathbb{Z} \) back to a matrix conjugacy class, retain the notation \( \left( {u, v}\right) \) for a \( \mathbb{Z} \) -basis of \( A \) such that \( \left( {u,{Nv}}\right) \) is a \( \mathbb{Z} \) -basis of \( J \) . Show that the matrix \( {\gamma }_{J} \in {\mathrm{M}}_{2}\left( \mathbb{Z}\right) \) such that \( {\mu }_{6}\left( {u, v}\right) = \left( {u, v}\right) {\gamma }_{J} \) lies in \( {\mathrm{{SL}}}_{2}\left( \mathbb{Z}\right) \) .
(b) For any \( \alpha \in {\Gamma }_{0}^{ \pm }\left( N\right) \) consider the \( \mathbb{Z} \) -basis \( \left( {{u}^{\prime },{v}^{\prime }}\right) = \left( {u, v}\right) \alpha \) of \( A \) . Show that \( \left( {{u}^{\prime }, N{v}^{\prime }}\right) \) is again a \( \mathbb{Z} \) -basis of \( J \) .
(c) Show that any two such \( \mathbb{Z} \) -bases \( \left( {u, v}\right) \) and \( \left( {{u}^{\prime },{v}^{\prime }}\right) \) of \( A \) satisfy the relation \( \left( {{u}^{\prime },{v}^{\prime }}\right) = \left( {u, v}\right) \alpha \) for some \( \alpha \in {\Gamma }_{0}^{ \pm }\left( N\right) \) .
3.7.4. In the context of checking that the maps between conjugacy classes and ideals invert each other, let \( \gamma \in {\Gamma }_{0}\left( N\right) \) of order 6 be given and define \( \left( {u, v}\right) = \left( {1,{\mu }_{6}}\right) m \) as in the section. Show that \( u{ \odot }_{\gamma }L \subset {L}_{0}\left( N\right) \), so that \( u \) annihilates \( L/{L}_{0}\left( N\right) \) . (A hint for this exercise is at the end of the book.)
3.7.5. (a) Similarly to the methods of the section, check that the first half of Proposition 3.7.1 holds.
(b) Prove the first half of Corollary 3.7.2. (A hint for this exercise is at the end of the book.)
3.7.6. (a) Show that the number of solutions to the congruence \( {x}^{2} - x + 1 \equiv \) 0 (mod \( N \) ) is given by the formula for \( {\varepsilon }_{3}\left( {{\Gamma }_{0}\left( N\right) }\right) \) in Corollary 3.7.2. (A hint for this exercise is at the end of the book.)
(b) Show that the orbits \( {\Gamma }_{0}\left( N\right) \left\lbrack \begin{array}{ll} 1 & 0 \\ n & 1 \end{array}\right\rbrack \left( {\mu }_{3}\right) \) for \( n = 0,\ldots, N - 1 \) such that \( {n}^{2} - n + 1 \equiv 0\left( {\;\operatorname{mod}\;N}\right) \) are distinct in \( {X}_{0}\left( N\right) \) .
(c) Confirm formula (3.14).
(d) Similarly show that the period 2 elliptic points of \( {\Gamma }_{0}\left( N\right) \) are given by formula (3.15).
3.7.7. Let \( {p}^{e} \) and \( M \) be positive integers with \( p \) prime, \( e \geq 1 \), and \( p \nmid M \) . Let \( m = {p}^{-1}\left( {\;\operatorname{mod}\;M}\right) \), i.e., \( {mp} \equiv 1\left( {\;\operatorname{mod}\;M}\right) \) and \( 0 \leq m < M \) . Consider the
matrices
\[
{\alpha }_{j} = \left\lbrack \begin{matrix} 1 & 0 \\ {Mj} & 1 \end{matrix}\right\rbrack ,\;0 \leq j < {p}^{e}
\]
and
\[
{\beta }_{j
|
Proposition 3.7.1. The period 2 elliptic points of \( {\Gamma }_{0}\left( N\right) \) are in bijective correspondence with the ideals \( J \) of \( \mathbb{Z}\left\lbrack i\right\rbrack \) such that \( \mathbb{Z}\left\lbrack i\right\rbrack /J \cong \mathbb{Z}/N\mathbb{Z} \) . The period 3 elliptic points of \( {\Gamma }_{0}\left( N\right) \) are in bijective correspondence with the ideals \( J \) of \( \mathbb{Z}\left\lbrack {\mu }_{6}\right\rbrack \) (where \( {\mu }_{6} = {e}^{{2\pi i}/6} \) ) such that \( \mathbb{Z}\left\lbrack {\mu }_{6}\right\rbrack /J \cong \mathbb{Z}/N\mathbb{Z} \) .
This is an application of beginning algebraic number theory; see for example Chapter 9 of [IR92] for the results to quote. For period 3, the ring \( A = \mathbb{Z}\left\lbrack {\mu }_{6}\right\rbrack \) is a principal ideal domain and its maximal ideals are:
- for each prime \( p \equiv 1\left( {\;\operatorname{mod}\;3}\right) \), two ideals \( {J}_{p} = \left\langle {a + b{\mu }_{6}}\right\rangle \) and \( {\bar{J}}_{p} = \langle a + \) \( \left. {b{\bar{\mu }}_{6}}\right\rangle \) such that \( \langle p\rangle = {J}_{p}{\bar{J}}_{p} \) and the quotients \( A/{J}_{p}^{e} \) and \( A/{\bar{J}}_{p}^{e} \) are group-isomorphic to \( \mathbb{Z}/{p}^{e}\mathbb{Z} \) for all \( e \in \mathbb{N} \),
- for each prime \( p \equiv - 1\left( {\;\operatorname{mod}\;3}\right) \), the ideal \( {J}_{p} = \langle p\rangle \) such that the quotient \( A/{J}_{p}^{e} \) is group-isomorphic to \( {\left( \mathbb{Z}/{p}^{e}\mathbb{Z}\right) }^{2} \) for all \( e \in \mathbb{N} \),
- for \( p = 3 \), the ideal \( {J}_{3} = \left\langle {1 + {\mu }_{6}}\right\rangle \) such that \( \langle 3\rangle = {J}_{3}^{2} \) and the quotient \( A/{J}_{3}^{e} \) is group-isomorphic to \( {\left( \mathbb{Z}/{3}^{e/2}\mathbb{Z}\right) }^{2} \) for even \( e \in \mathbb{N} \) and is group-isomorphic to \( \mathbb{Z}/{3}^{\left( {e + 1}\right) /2}\mathbb{Z} \oplus \mathbb{Z}/{3}^{\left( {e - 1}\right) /2}\mathbb{Z} \) for odd \( e \in \mathbb{N} \).
The formula for \( {\varepsilon }_{3}\left( {{\Gamma }_{0}\left( N\right) }\right) \) now follows from Proposition 3.7.1 and the Chinese Remainder Theorem. Counting the period 2 elliptic points is left as Exercise 3.7.5(b), similarly citing the theory of the ring \( A = \mathbb{Z}\left\lbrack i\right\rbrack \) .
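As a concrete check on Exercise 3.7.6(a), the sketch below counts solutions of \( x^2 - x + 1 \equiv 0 \pmod{N} \) by brute force and compares the count with a product formula; the formula as coded is our transcription of the one in Corollary 3.7.2 (not quoted in full here), so treat it as an assumption of the sketch.

```python
# Brute-force count of solutions of x^2 - x + 1 = 0 (mod N), compared with a
# product formula; the formula below is our transcription of Corollary 3.7.2
# and is an assumption of this sketch.
def count_solutions(N):
    return sum(1 for x in range(N) if (x * x - x + 1) % N == 0)

def eps3(N):
    if N % 9 == 0:
        return 0
    total, n, p = 1, N, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            total *= 1 + (0 if p == 3 else (1 if p % 3 == 1 else -1))
        p += 1
    if n > 1:  # one prime factor left over
        total *= 1 + (0 if n == 3 else (1 if n % 3 == 1 else -1))
    return total
```

For instance, \( N = 7 \) gives the two solutions \( x = 3, 5 \), matching \( 1 + \left( \frac{-3}{7}\right) = 2 \).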
Theorem 13.31. \( {\mathcal{X}}_{\infty } \sim {\Lambda }^{{r}_{2}} \oplus \left( {\Lambda \text{-torsion}}\right) \).
One advantage of using \( {\mathcal{X}}_{\infty } \) rather than \( X \) is that it is easier to describe how \( {M}_{\infty } \) is generated. Since all \( p \) -power roots of unity are in \( {K}_{\infty },{M}_{\infty }/{K}_{\infty } \) is a Kummer extension. There is a subgroup
\[
V \subseteq {K}_{\infty }^{ \times }{ \otimes }_{\mathbb{Z}}{\mathbb{Q}}_{p}/{\mathbb{Z}}_{p}
\]
\[
V = \left\{ {a \otimes {p}^{-n} \mid \text{ various }n \geq 0\text{ and }a \in {K}_{\infty }^{ \times }}\right\}
\]
(it is not hard to see that all elements of \( {K}_{\infty }^{ \times } \otimes {\mathbb{Q}}_{p}/{\mathbb{Z}}_{p} \) are of the form \( a \otimes {p}^{-n} \) ) such that
\[
{M}_{\infty } = {K}_{\infty }\left( \left\{ {a}^{1/{p}^{n}}\right\} \right)
\]
There is a Kummer pairing
\[
{\mathcal{X}}_{\infty } \times V \rightarrow {W}_{{p}^{\infty }} = p\text{-power roots of unity,}
\]
just as in Chapter 10. In particular,
\[
\left( {{\sigma x},{\sigma v}}\right) = {\left( x, v\right) }^{\sigma },\;\sigma \in \operatorname{Gal}\left( {{K}_{\infty }/F}\right) .
\]
Let \( {I}_{m} \) be the group of fractional ideals of \( {K}_{m} \) and let \( {I}_{\infty } = \bigcup {I}_{m} \) . Since \( a \otimes {p}^{-n} \) gives an extension unramified outside \( p \), and since \( a \in {K}_{m} \) for some \( m \), it follows that
\[
\left( a\right) = {B}_{1}^{{p}^{n}} \cdot {B}_{2}\;\text{ in some }{I}_{m},
\]
where \( {B}_{1} \in {I}_{m} \) and \( {B}_{2} \) is a product of primes above \( p \) . Since all primes above \( p \) are infinitely ramified in a cyclotomic \( {\mathbb{Z}}_{p} \) -extension, \( {B}_{2} \) is a \( {p}^{n} \) th power in \( {I}_{\infty } \) . Hence we may assume
\[
\left( a\right) = {B}_{1}^{{p}^{n}}\text{.}
\]
We obtain a map
\[
V \rightarrow {A}_{\infty } = \mathop{\lim }\limits_{ \rightarrow }{A}_{n}
\]
\[
a \otimes {p}^{-n} \mapsto \text{ class of }{B}_{1}.
\]
It is not hard to see that this map is well-defined, i.e., independent of \( m \) and the representation \( a \otimes {p}^{-n} \) . It is also surjective, since \( A \in {A}_{\infty } \Rightarrow {A}^{{p}^{n}} = 1 \) for some \( n \) (see Exercise 9.1). As in Chapter 10, the kernel is contained in
\[
{E}_{\infty }{ \otimes }_{\mathbb{Z}}{\mathbb{Q}}_{p}/{\mathbb{Z}}_{p}
\]
where \( {E}_{\infty } = \bigcup E\left( {K}_{n}\right) \) . Since we are allowing ramification above \( p \) ,
\[
{E}_{\infty }{ \otimes }_{\mathbb{Z}}{\mathbb{Q}}_{p}/{\mathbb{Z}}_{p} \subseteq V
\]
so it follows that this gives the kernel (cf. Theorem 10.13, where the situation is essentially the same). We now have an exact sequence
\[
1 \rightarrow {E}_{\infty }{ \otimes }_{\mathbb{Z}}{\mathbb{Q}}_{p}/{\mathbb{Z}}_{p} \rightarrow V \rightarrow {A}_{\infty } \rightarrow 1.
\]
Let \( \Delta = \operatorname{Gal}\left( {{K}_{0}/F}\right) \), which is a subgroup of \( {\left( \mathbb{Z}/p\mathbb{Z}\right) }^{ \times } = \operatorname{Gal}\left( {\mathbb{Q}\left( {\zeta }_{p}\right) /\mathbb{Q}}\right) \). For \( i \in \mathbb{Z} \), \( {\omega }^{i} \) is a character of \( \Delta \) (note \( i \equiv j \pmod{\left| \Delta \right| } \Leftrightarrow {\omega }^{i} = {\omega }^{j} \) on \( \Delta \)). Let
\[
{\varepsilon }_{i} = \frac{1}{\left| \Delta \right| }\mathop{\sum }\limits_{{\delta \in \Delta }}{\omega }^{-i}\left( \delta \right) \delta
\]
Everything decomposes via these idempotents. \( {W}_{{p}^{\infty }} \) is in the \( {\varepsilon }_{1} \) component. If \( i \) is odd, then \( {\varepsilon }_{i}\left( {{E}_{\infty } \otimes {\mathbb{Q}}_{p}/{\mathbb{Z}}_{p}}\right) = 0 \), since \( \left\lbrack {E : W{E}^{ + }}\right\rbrack = 1 \) or 2 for each \( {K}_{n} \) and \( {W}_{{p}^{\infty }} \otimes {\mathbb{Q}}_{p}/{\mathbb{Z}}_{p} = 0 \) . We obtain
\[
{\varepsilon }_{i}V \simeq {\varepsilon }_{i}{A}_{\infty },\;i\text{ odd. }
\]
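The orthogonality of the idempotents \( {\varepsilon }_{i} \) can be checked by direct computation. The sketch below works in the group ring of \( \Delta = {\left( \mathbb{Z}/5\mathbb{Z}\right) }^{ \times } \) modulo \( {5}^{6} \), a finite-precision stand-in for \( {\mathbb{Z}}_{p} \), with \( \omega \) realized as the Teichmüller lift; the choice \( p = 5 \) is illustrative only.

```python
# Orthogonal idempotents eps_i = (1/|Delta|) sum_d omega^{-i}(d) d in Z_p[Delta],
# computed mod p^6 for p = 5 with Delta = (Z/5Z)^* (an illustrative stand-in).
p, k = 5, 6
M = p**k                       # truncation Z_p -> Z/p^6
Delta = [1, 2, 3, 4]           # |Delta| = p - 1

def teich(a):
    # Teichmueller lift: a^{p^{k-1}} mod p^k is a (p-1)-st root of unity mod p^k
    return pow(a, p**(k - 1), M)

inv = pow(len(Delta), -1, M)   # 1/|Delta| in Z/p^6

def eps(i):
    # group-ring element stored as {group element: coefficient mod p^6}
    return {d: inv * pow(teich(d), (-i) % (p - 1), M) % M for d in Delta}

def mul(x, y):
    # convolution product in the group ring Z_p[Delta]
    out = {d: 0 for d in Delta}
    for d1, c1 in x.items():
        for d2, c2 in y.items():
            out[d1 * d2 % p] = (out[d1 * d2 % p] + c1 * c2) % M
    return out
```

One then checks \( {\varepsilon }_{i}{\varepsilon }_{j} = {\delta }_{ij}{\varepsilon }_{i} \) and \( \sum {\varepsilon }_{i} = 1 \), which is exactly what makes everything decompose componentwise.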
Note that by Proposition 13.26, \( {\varepsilon }_{i}{A}_{\infty } = \bigcup {\varepsilon }_{i}{A}_{n} \) . As in Chapter 10,
\[
{\varepsilon }_{j}{\mathcal{X}}_{\infty } \times {\varepsilon }_{i}V \rightarrow {W}_{{p}^{\infty }}
\]
is nondegenerate, hence
\[
{\varepsilon }_{j}{\mathcal{X}}_{\infty } \times {\varepsilon }_{i}{A}_{\infty } \rightarrow {W}_{{p}^{\infty }},\;i + j \equiv 1{\;\operatorname{mod}\;\left| \Delta \right| }, i\text{ odd,}
\]
is nondegenerate. Therefore
\[
{\varepsilon }_{j}{\mathcal{X}}_{\infty } \simeq {\operatorname{Hom}}_{{\mathbf{Z}}_{p}}\left( {{\varepsilon }_{i}{A}_{\infty },{W}_{{p}^{\infty }}}\right)
\]
where \( \operatorname{Gal}\left( {{K}_{\infty }/F}\right) \) acts via \( \left( {\sigma f}\right) \left( a\right) = \sigma \left( {f\left( {{\sigma }^{-1}a}\right) }\right) \) (cf. Exercise 10.8).
This last equation is often written in another form. Let
\[
T = \mathop{\lim }\limits_{ \leftarrow }{W}_{{p}^{n + 1}}
\]
where the inverse limit is taken with respect to the \( p \) th power map (which is the same as the norm map from \( \mathbb{Q}\left( {\zeta }_{{p}^{n + 1}}\right) \) to \( \mathbb{Q}\left( {\zeta }_{{p}^{n}}\right) \) ). Then
\[
T \simeq {\mathbb{Z}}_{p},\;\text{as abelian groups,}
\]
but the Galois group acts via
\[
{\sigma }_{a}\left( t\right) = {at}\;\text{ for }a \in \Delta \times \left( {1 + p{\mathbb{Z}}_{p}}\right) \subseteq {\mathbb{Z}}_{p}^{ \times },
\]
where we are writing \( T \) additively. Let
\[
{T}^{\left( -1\right) } = {\operatorname{Hom}}_{{\mathbb{Z}}_{p}}\left( {T,{\mathbb{Z}}_{p}}\right)
\]
with the Galois action on Hom as above. Then
\[
{T}^{\left( -1\right) } \simeq {\mathbb{Z}}_{p},\;\text{as abelian groups.}
\]
If \( f \in {T}^{\left( -1\right) } \) and \( t \in T \) then, since \( {\sigma }_{a} \) acts trivially on \( {\mathbb{Z}}_{p} \) ,
\[
\left( {{\sigma }_{a}f}\right) \left( t\right) = {\sigma }_{a}\left( {f\left( {{\sigma }_{a}^{-1}t}\right) }\right) = f\left( {{a}^{-1}t}\right) = {a}^{-1}f\left( t\right) ,
\]
so
\[
{\sigma }_{a}f = {a}^{-1}f
\]
It follows that
\[
T{ \otimes }_{{\mathbb{Z}}_{p}}{T}^{\left( -1\right) } \simeq {\mathbb{Z}}_{p},\;\text{with trivial Galois action.}
\]
Define the "twist" \( {\varepsilon }_{j}{\mathcal{X}}_{\infty }\left( {-1}\right) \) by
\[
{\varepsilon }_{j}{\mathcal{X}}_{\infty }\left( {-1}\right) = {\varepsilon }_{j}{\mathcal{X}}_{\infty }{ \otimes }_{{\mathbb{Z}}_{p}}{T}^{\left( -1\right) }.
\]
This is the same as \( {\varepsilon }_{j}{\mathcal{X}}_{\infty } \) as a \( {\mathbb{Z}}_{p} \) -module but the Galois action has been changed:
\[
{\sigma }_{a}\left( {x \otimes f}\right) = {\sigma }_{a}\left( x\right) \otimes {a}^{-1}f = {a}^{-1}{\sigma }_{a}\left( x\right) \otimes f.
\]
Proposition 13.32. \( {\varepsilon }_{j}{\mathcal{X}}_{\infty }\left( {-1}\right) \simeq {\operatorname{Hom}}_{{\mathbb{Z}}_{p}}\left( {{\varepsilon }_{i}{A}_{\infty },{\mathbb{Q}}_{p}/{\mathbb{Z}}_{p}}\right) \) as \( \Lambda \) -modules, where \( i + j \equiv 1{\;\operatorname{mod}\;\left| \Delta \right| } \) and \( i \) is odd.
Proof. We shall show more generally that
\[
{\operatorname{Hom}}_{{\mathbb{Z}}_{p}}\left( {B,{\mathbb{Q}}_{p}/{\mathbb{Z}}_{p}}\right) \simeq {\operatorname{Hom}}_{{\mathbb{Z}}_{p}}\left( {B,{W}_{{p}^{\infty }}}\right) { \otimes }_{{\mathbb{Z}}_{p}}{T}^{\left( -1\right) }
\]
for any \( \Lambda \) -module \( B \) . There is an isomorphism of abelian groups
\[
{\mathbb{Q}}_{p}/{\mathbb{Z}}_{p}\overset{\phi }{ \rightarrow }{W}_{{p}^{\infty }}
\]
\[
\frac{a}{{p}^{n}} \mapsto {\zeta }_{{p}^{n}}^{a}
\]
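The isomorphism \( \phi \) can be modeled concretely with complex roots of unity. The sketch below (for \( p = 5 \), an arbitrary choice) checks numerically that \( a/{p}^{n} \mapsto {\zeta }_{{p}^{n}}^{a} \) kills \( \mathbb{Z} \) and is multiplicative, so it is well defined on \( {\mathbb{Q}}_{p}/{\mathbb{Z}}_{p} \).

```python
# The map phi : Q_p/Z_p -> W_{p^infty}, a/p^n |-> zeta_{p^n}^a, realized with
# complex roots of unity for p = 5 (an illustrative choice).
import cmath
from fractions import Fraction

p = 5

def phi(q):
    # q: Fraction with p-power denominator, read modulo Z
    return cmath.exp(2j * cmath.pi * (q.numerator % q.denominator) / q.denominator)
```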
Choose a generator \( {t}_{0} \) for \( {T}^{\left( -1\right) } \) as a \( {\mathbb{Z}}_{p} \) -module. If we ignore the Galois action, we obtain an isomorphism by mapping
\[
h \mapsto \left( {\phi h}\right) \otimes {t}_{0}
\]
for \( h \in {\operatorname{Hom}}_{{\mathbb{Z}}_{p}}\left( {B,{\mathbb{Q}}_{p}/{\mathbb{Z}}_{p}}\right) \) . Let \( \sigma = {\sigma }_{a} \in \Gamma \) . Then
\[
\left( {\sigma h}\right) \left( b\right) = \sigma \left( {h\left( {{\sigma }^{-1}b}\right) }\right) = h\left( {{\sigma }^{-1}b}\right) ,
\]
and
\[
\sigma \left( {{\phi h} \otimes {t}_{0}}\right) = {\sigma \phi h}{\sigma }^{-1} \otimes \sigma {t}_{0}
\]
\[
= {a\phi h}{\sigma }^{-1} \otimes {a}^{-1}{t}_{0}
\]
\[
= {\phi h}{\sigma }^{-1} \otimes {t}_{0}
\]
Therefore
\[
{\sigma h} \mapsto \sigma \left( {{\phi h} \otimes {t}_{0}}\right)
\]
under the above isomorphism, so the Galois actions are compatible. This completes the proof.
The proposition says that the discrete group \( {\varepsilon }_{i}{A}_{\infty } \) and the compact group \( {\varepsilon }_{j}{\mathcal{X}}_{\infty }\left( {-1}\right) \) are dual in the sense of Pontryagin.
## §13.6. The Main Conjecture
For simplicity, we assume \( p \neq 2 \) in this section. Consider the \( {\mathbb{Z}}_{p} \) -extension \( \mathbb{Q}\left( {\zeta }_{{p}^{\infty }}\right) /\mathbb{Q}\left( {\zeta }_{p}\right) \) . In Theorem 10.16 we showed that if Vandiver’s Conjecture holds for \( p \) then
\[
{\varepsilon }_{i}X \simeq \Lambda /\left( {f\left( {T,{\omega }^{1 - i}}\right) }\right)
\]
for \( i = 3,5,\ldots, p - 2 \), where
\[
f\left( {{\left( 1 + p\right) }^{s} - 1,{\omega }^{1 - i}}\right) = {L}_{p}\left( {s,{\omega }^{1 - i}}\right) .
\]
Factor \( f\left( {T,{\omega }^{1 - i}}\right) = {p}^{{\mu }_{i}}{g}_{i}\left( T\right) {U}_{i}\left( T\right) \) with \( {g}_{i} \) distinguished and \( {U}_{i} \in {\Lambda }^{ \times } \) . We know that \( {\mu }_{i} = 0 \) by Theorem 7.15. Therefore
\[
{\varepsilon }_{i}X \simeq \Lambda /\left( {{g}_{i}\left( T\right) }\right)
\]
which is in the form of Theorem 13.12. So in this case the distinguished polynomial in the decomposition of \( {\varepsilon }_{i}X \) is essentially the \( p \) -adic \( L \) -function. This is conjectured to happen more generally.
Let \( F \) be totally real and let \( {K}_{0} = F\left( {\zeta }_{p}\right) ,{K}_{\infty } = F\left( {\zeta }_{{p}^{\infty }}\right) \)
Theorem 2.8. For any compact subset \( M \) of \( {\mathbb{R}}^{d} \), the convex hull conv \( M \) is again compact.
Proof. Let \( {\left( {y}_{v}\right) }_{v \in \mathbb{N}} \) be any sequence of points from conv \( M \) . We shall prove that the sequence admits a subsequence which converges to a point in conv \( M \) . Let the dimension of aff \( M \) be denoted by \( n \) . Then Corollary 2.5 shows that each \( {y}_{v} \) in the sequence has a representation
\[
{y}_{v} = \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{vi}{x}_{vi}
\]
where \( {x}_{vi} \in M \) . We now consider the \( n + 1 \) sequences
\[
{\left( {x}_{v1}\right) }_{v \in \mathbb{N}},\ldots ,{\left( {x}_{v\left( {n + 1}\right) }\right) }_{v \in \mathbb{N}} \tag{6}
\]
of points from \( M \), and the \( n + 1 \) sequences
\[
{\left( {\lambda }_{v1}\right) }_{v \in \mathbb{N}},\ldots ,{\left( {\lambda }_{v\left( {n + 1}\right) }\right) }_{v \in \mathbb{N}} \tag{7}
\]
of real numbers from \( \left\lbrack {0,1}\right\rbrack \) . By the compactness of \( M \) there is a subsequence of \( {\left( {x}_{v1}\right) }_{v \in \mathbb{N}} \) which converges to a point in \( M \) . Replace all \( 2\left( {n + 1}\right) \) sequences by the corresponding subsequences. Change notation such that (6) and (7) now denote the subsequences; then \( {\left( {x}_{v1}\right) }_{v \in \mathbb{N}} \) converges in \( M \) . Next, use the compactness of \( M \) again to see that there is a subsequence of the (sub)sequence \( {\left( {x}_{v2}\right) }_{v \in \mathbb{N}} \) which converges to a point in \( M \) . Change notation, etc. Then after \( 2\left( {n + 1}\right) \) steps, where we use the compactness of \( M \) in step \( 1,\ldots, n + 1 \) , and the compactness of \( \left\lbrack {0,1}\right\rbrack \) in step \( n + 2,\ldots ,{2n} + 2 \), we end up with subsequences
\[
{\left( {x}_{{v}_{m}1}\right) }_{m \in \mathbb{N}},\ldots ,{\left( {x}_{{v}_{m}\left( {n + 1}\right) }\right) }_{m \in \mathbb{N}}
\]
of the original sequences (6) which converge in \( M \), say
\[
\mathop{\lim }\limits_{{m \rightarrow \infty }}{x}_{{v}_{m}i} = {x}_{0i},\;i = 1,\ldots, n + 1,
\]
and subsequences
\[
{\left( {\lambda }_{{v}_{m}1}\right) }_{m \in \mathbb{N}},\ldots ,{\left( {\lambda }_{{v}_{m}\left( {n + 1}\right) }\right) }_{m \in \mathbb{N}}
\]
of the original sequences (7) which converge in \( \left\lbrack {0,1}\right\rbrack \), say
\[
\mathop{\lim }\limits_{{m \rightarrow \infty }}{\lambda }_{{v}_{m}i} = {\lambda }_{0i},\;i = 1,\ldots, n + 1.
\]
Since
\[
\mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{{v}_{m}i} = 1,\;m \in \mathbb{N}
\]
we also have
\[
\mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{0i} = 1
\]
Then the linear combination
\[
{y}_{0} \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{0i}{x}_{0i}
\]
is in fact a convex combination. Therefore, \( {y}_{0} \) is in conv \( M \) by Theorem 2.2. It is also clear that
\[
\mathop{\lim }\limits_{{m \rightarrow \infty }}{y}_{{v}_{m}} = {y}_{0}
\]
In conclusion, \( {\left( {y}_{{v}_{m}}\right) }_{m\; \in \;\mathbb{N}} \) is a subsequence of \( {\left( {y}_{v}\right) }_{v\; \in \;\mathbb{N}} \) which converges to a point in conv \( M \) .
Some readers may prefer the following version of the proof above. With \( n = \dim \left( {\operatorname{aff}M}\right) \) as above, let
\[
S \mathrel{\text{:=}} \left\{ {\left( {{\lambda }_{1},\ldots ,{\lambda }_{n + 1}}\right) \in {\mathbb{R}}^{n + 1} \mid {\lambda }_{1},\ldots ,{\lambda }_{n + 1} \geq 0,{\lambda }_{1} + \cdots + {\lambda }_{n + 1} = 1}\right\} ,
\]
and define a mapping \( \varphi : {M}^{n + 1} \times S \rightarrow {\mathbb{R}}^{d} \) by
\[
\varphi \left( {\left( {{x}_{1},\ldots ,{x}_{n + 1}}\right) ,\left( {{\lambda }_{1},\ldots ,{\lambda }_{n + 1}}\right) }\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}{\lambda }_{i}{x}_{i}.
\]
By Corollary 2.5, the set \( \varphi \left( {{M}^{n + 1} \times S}\right) \) is precisely conv \( M \) . Now, \( {M}^{n + 1} \times S \) is compact by the compactness of \( M \) and \( S \), and \( \varphi \) is continuous. Since the continuous image of a compact set is again compact, it follows that conv \( M \) is compact.
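The mapping \( \varphi \) in this second proof is concrete enough to code directly. The following minimal sketch (points given as coordinate tuples, an illustrative setup) evaluates \( \varphi \) on an element of \( {M}^{n + 1} \times S \):

```python
# phi((x_1,...,x_{n+1}), (lambda_1,...,lambda_{n+1})) = sum_i lambda_i x_i
# from the second proof; points of R^d are plain tuples.
def phi(points, lambdas):
    assert len(points) == len(lambdas)
    # the weights must lie in the simplex S: nonnegative, summing to 1
    assert all(l >= 0 for l in lambdas) and abs(sum(lambdas) - 1.0) < 1e-12
    d = len(points[0])
    return tuple(sum(l * x[j] for l, x in zip(lambdas, points)) for j in range(d))
```

Any output of \( \varphi \) is by construction a convex combination of points of \( M \), hence lies in conv \( M \); for the corners of the unit square with equal weights, \( \varphi \) returns the barycenter \( \left( {0.5,0.5}\right) \).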
Since any finite set is compact, Theorem 2.8 immediately implies:
Corollary 2.9. Any convex polytope \( P \) in \( {\mathbb{R}}^{d} \) is a compact set.
One should observe, however, that a direct proof of Corollary 2.9 does not require Carathéodory’s Theorem. In fact, if \( M \) is the finite set \( \left\{ {{x}_{1},\ldots ,{x}_{m}}\right\} \) , then each \( {y}_{v} \) (in the notation of the proof above) has a representation
\[
{y}_{v} = \mathop{\sum }\limits_{{i = 1}}^{m}{\lambda }_{vi}{x}_{i}
\]
Then we have a similar situation as in the proof above (with \( m \) corresponding to \( n + 1 \) ), except that now the sequences corresponding to the sequences (6) are constant, \( {x}_{vi} = {x}_{i} \) for all \( v \) . Therefore, we need only show here that the sequences (7) admit converging subsequences (which is proved as above).
## EXERCISES
2.1. Show that when \( {C}_{1} \) and \( {C}_{2} \) are convex sets in \( {\mathbb{R}}^{d} \), then the set
\[
{C}_{1} + {C}_{2} \mathrel{\text{:=}} \left\{ {{x}_{1} + {x}_{2} \mid {x}_{1} \in {C}_{1},{x}_{2} \in {C}_{2}}\right\}
\]
is also convex.
2.2. Show that when \( C \) is a convex set in \( {\mathbb{R}}^{d} \), and \( \lambda \) is a real, then the set
\[
{\lambda C} \mathrel{\text{:=}} \{ {\lambda x} \mid x \in C\}
\]
is also convex.
2.3. Show that when \( C \) is a convex set in \( {\mathbb{R}}^{d} \), and \( \varphi : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{e} \) is an affine mapping, then \( \varphi \left( C\right) \) is also convex.
2.4. Show that \( \operatorname{conv}\left( {{M}_{1} + {M}_{2}}\right) = \operatorname{conv}{M}_{1} + \operatorname{conv}{M}_{2} \) for any subsets \( {M}_{1} \) and \( {M}_{2} \) of \( {\mathbb{R}}^{d} \) .
2.5. Show that when \( M \) is any subset of \( {\mathbb{R}}^{d} \), and \( \varphi : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{e} \) is an affine mapping, then \( \varphi \left( {\operatorname{conv}M}\right) = \operatorname{conv}\varphi \left( M\right) \) . Deduce in particular that the affine image of a polytope is again a polytope.
2.6. Show that when \( M \) is an open subset of \( {\mathbb{R}}^{d} \), then conv \( M \) is also open. Use this fact to show that the interior of a convex set is again convex. (Cf. Theorem 3.4(b).)
2.7. Show by an example in \( {\mathbb{R}}^{2} \) that the convex hull of a closed set need not be closed. (Cf. Theorem 2.8.)
2.8. An \( n \) -family \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) of points from \( {\mathbb{R}}^{d} \) is said to be convexly independent if no \( {x}_{i} \) in the family is a convex combination of the remaining \( {x}_{j} \) ’s. For \( n \geq d + 2 \), show that if every \( \left( {d + 2}\right) \) -subfamily of \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) is convexly independent, then the entire \( n \) -family is convexly independent.
2.9. Let \( {\left( {C}_{i}\right) }_{i \in I} \) be a family of convex sets in \( {\mathbb{R}}^{d} \) with \( d + 1 \leq \operatorname{card}I \) . Consider the following two statements:
(a) Any \( d + 1 \) of the sets \( {C}_{i} \) have a non-empty intersection.
(b) All the sets \( {C}_{i} \) have a non-empty intersection.
Prove Helly’s Theorem: If card \( I < \infty \), then (a) \( \Rightarrow \) (b). (Hint: Use induction on \( n \mathrel{\text{:=}} \) card \( I \) . Apply Corollary 2.7.)
Show by an example that we need not have (a) \( \Rightarrow \) (b) when card \( I = \infty \) .
Prove that if each \( {C}_{i} \) is closed, and at least one is compact, then we have (a) \( \Rightarrow \) (b) without restriction on card \( I \) .
2.10. Let a point \( x \) in \( {\mathbb{R}}^{d} \) be a convex combination of points \( {x}_{1},\ldots ,{x}_{n} \), and let each \( {x}_{i} \) be a convex combination of points \( {y}_{i1},\ldots ,{y}_{i{n}_{i}} \) . Show that \( x \) is a convex combination of the points \( {y}_{i{v}_{i}}, i = 1,\ldots, n,{v}_{i} = 1,\ldots ,{n}_{i} \) .
2.11. Let \( {\left( {C}_{i}\right) }_{i \in I} \) be a family of distinct convex sets in \( {\mathbb{R}}^{d} \) . Show that
\[
\operatorname{conv}\mathop{\bigcup }\limits_{{i \in I}}{C}_{i}
\]
is the set of all convex combinations
\[
\mathop{\sum }\limits_{{v = 1}}^{n}{\lambda }_{{i}_{v}}{x}_{{i}_{v}}
\]
where \( {x}_{{i}_{v}} \in {C}_{{i}_{v}} \) .
Deduce in particular that when \( {C}_{1} \) and \( {C}_{2} \) are convex, then \( \operatorname{conv}\left( {{C}_{1} \cup {C}_{2}}\right) \) is the union of all segments \( \left\lbrack {{x}_{1},{x}_{2}}\right\rbrack \) with \( {x}_{1} \in {C}_{1} \) and \( {x}_{2} \in {C}_{2} \) .
## §3. The Relative Interior of a Convex Set
It is clear that the interior of a convex set may be empty. A triangle in \( {\mathbb{R}}^{3} \) , for example, has no interior points. However, it does have interior points in the 2-dimensional affine space that it spans. This observation illustrates the definition below of the relative interior of a convex set, and the main result of this section, Theorem 3.1. We shall also discuss the behaviour of a convex set under the operations of forming (relative) interior, closure, and boundary.
By the relative interior of a convex set \( C \) in \( {\mathbb{R}}^{d} \) we mean the interior of \( C \) in the affine hull aff \( C \) of \( C \) . The relative interior of \( C \) is denoted by \( \operatorname{ri}C \) . Points in \( \mathrm{{ri}}C \) are called relative interior points of \( C \) . The set \( \mathrm{{cl}}C \smallsetminus \mathrm{{ri}}C \) is called the relative boundary of \( C \), and is denoted by \( \mathrm{{rb}}C \) . Points in \( \mathrm{{rb}}C \) are called relative boundary points of \( C \) . (Since aff \( C \) is a closed subset of \( {\mathbb{R}}^{d} \), the "relative closure" of \( C \)
Theorem 4 (Lindenstrauss-Pelczynski). Let \( \left( {x}_{n}\right) \) be a normalized unconditional basis of \( {c}_{0} \) . Then \( \left( {x}_{n}\right) \) is equivalent to the unit vector basis.
What spaces other than \( {c}_{0} \) and \( {l}_{1} \) have unique unconditional bases? Here is one: \( {l}_{2} \) . In fact, if \( {x}_{1},\ldots ,{x}_{n} \in {l}_{2} \), then it is an easy consequence of the parallelogram law to show that given \( {y}_{1},\ldots ,{y}_{n} \in {l}_{2} \) ,
\[
\mathop{\sum }\limits_{{{\left( {\theta }_{i}\right) }_{i = 1}^{n} \in \{ \pm 1{\} }^{n}}}{\begin{Vmatrix}\mathop{\sum }\limits_{{i = 1}}^{n}{\theta }_{i}{y}_{i}\end{Vmatrix}}^{2} = {2}^{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\begin{Vmatrix}{y}_{i}\end{Vmatrix}}^{2}.
\]
From this it follows easily that if \( \left( {x}_{n}\right) \) is a normalized unconditional basis for \( {l}_{2} \), then \( \mathop{\sum }\limits_{i}{a}_{i}{x}_{i} \in {l}_{2} \) if and only if \( \mathop{\sum }\limits_{i}{\left| {a}_{i}\right| }^{2} < \infty \) .
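The sign-averaging identity above is easy to verify numerically. The sketch below does so for a few random vectors in \( {\mathbb{R}}^{k} \), a finite-dimensional stand-in for \( {l}_{2} \) (the dimensions chosen are arbitrary):

```python
# Numerical check of the identity: sum over all theta in {+-1}^n of
# ||sum_i theta_i y_i||^2 equals 2^n sum_i ||y_i||^2, here in R^k.
import itertools
import random

def norm_sq(v):
    return sum(c * c for c in v)

def sign_average(ys):
    # left-hand side: sum of squared norms over all 2^n sign patterns
    n, k = len(ys), len(ys[0])
    total = 0.0
    for thetas in itertools.product([1, -1], repeat=n):
        combo = [sum(t * y[j] for t, y in zip(thetas, ys)) for j in range(k)]
        total += norm_sq(combo)
    return total

random.seed(0)
ys = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
lhs = sign_average(ys)
rhs = 2**len(ys) * sum(norm_sq(y) for y in ys)
```

Expanding \( {\begin{Vmatrix}\sum {\theta }_{i}{y}_{i}\end{Vmatrix}}^{2} \) and summing over signs kills every cross term, which is what the check confirms.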
\( {c}_{0},{l}_{1} \), and \( {l}_{2} \) all have unique normalized unconditional bases. Any others? The startling answer is No! This result, due to Lindenstrauss and Zippin, is one of the real treasures in the theory of Banach spaces. It is only with the greatest reluctance that we do not pursue the proof of this result here.
## Exercises
1. \( {L}_{p}\left\lbrack {0,1}\right\rbrack \) is a \( {\mathcal{L}}_{p} \) -space. \( {L}_{p}\left\lbrack {0,1}\right\rbrack \) is a \( {\mathcal{L}}_{p,1 + \varepsilon } \) -space for every \( \varepsilon > 0 \) .
2. \( C\left( K\right) \) is a \( {\mathcal{L}}_{\infty } \) -space. If \( K \) is a compact Hausdorff space, then \( C\left( K\right) \) is a \( {\mathcal{L}}_{\infty ,1 + \varepsilon } \) -space for each \( \varepsilon > 0 \) . (Hint: You might find that partitions of unity serve as a substitute for measurable partitions of \( \left\lbrack {0,1}\right\rbrack \) .)
3. Lattice bounded operators into \( {L}_{2}\left\lbrack {0,1}\right\rbrack \) . Let \( T : X \rightarrow {L}_{2}\left\lbrack {0,1}\right\rbrack \) be a bounded linear operator. Suppose there is a \( g \in {L}_{2}\left\lbrack {0,1}\right\rbrack \) such that
\[
\left| {Tx}\right| \leq g\text{ almost everywhere }
\]
for each \( x \in {B}_{X} \) . Show that \( T \) is absolutely 2 -summing.
4. Hilbert-Schmidt operators on \( {L}_{2}\left\lbrack {0,1}\right\rbrack \) . Let \( T : {L}_{2}\left\lbrack {0,1}\right\rbrack \rightarrow {L}_{2}\left\lbrack {0,1}\right\rbrack \) be a bounded linear operator for which \( T\left( {{L}_{2}\left\lbrack {0,1}\right\rbrack }\right) \subseteq {L}_{\infty }\left\lbrack {0,1}\right\rbrack \) setwise. Then \( T \) is a Hilbert-Schmidt operator.
## Notes and Remarks
The importance of the Lindenstrauss-Pelczynski paper to the revival of Banach space theory cannot be exaggerated. On the one hand, the challenge of Grothendieck's visionary program was reissued and a call to arms among abstract analysts made; on the other hand, Lindenstrauss and Pelczynski provided leadership by crystallizing many notions, some perhaps only implicitly present in Grothendieck's writings, central to the development of a real structure theory. They solved long-standing problems. They added converts to the Banach space faith with enticing problems. Their work led to meaningful relationships with other important areas of mathematical endeavor.
No doubt the leading role in the Lindenstrauss-Pelczynski presentation was played by Grothendieck's inequality. They followed Grothendieck's original scheme of proof, an averaging argument pursued on the \( n \) -sphere of Euclidean space with rotation invariant Haar measure gauging size, though they did provide, as one might expect, a few more details than Grothendieck did.
Interestingly enough, many of the other proofs of Grothendieck's inequality have come about in applications of Banach space ideas to other areas of analysis.
B. Maurey (1973) proved a form of Grothendieck's inequality while looking for the general character of his now-famous factorization scheme. He borrowed some ideas from H. P. Rosenthal's work (1973) on subspaces of \( {L}_{p} \), improved on them and, with G. Pisier, molded them into the notions of type and cotype.
G. Pisier settled a problem of J. Ringrose in operator theory by proving the following stunning \( {C}^{ * } \) version of Grothendieck’s inequality.
Theorem. Let \( \mathcal{A} \) be a \( {C}^{ * } \) -algebra and \( E \) be a Banach space of cotype 2; suppose either \( \mathcal{A} \) or \( E \) satisfies the bounded approximation property. Then every operator from \( \mathcal{A} \) to \( E \) factors through a Hilbert space.
The result itself generalizes Grothendieck's inequality but more to the point, Pisier's proof suggested (to him) a different approach to the original inequality through the use of interpolation theory.
J. L. Krivine (1973) in studying Banach lattices proved the following lattice form of Grothendieck's inequality.
Theorem. Let \( X \) and \( Y \) be Banach lattices and \( T : X \rightarrow Y \) be a bounded linear operator. Then for any \( {x}_{1},\ldots ,{x}_{n} \in X \) we have
\[
\begin{Vmatrix}{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| T{x}_{i}\right| }^{2}\right) }^{1/2}\end{Vmatrix} \leq {K}_{G}\parallel T\parallel \begin{Vmatrix}{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {x}_{i}\right| }^{2}\right) }^{1/2}\end{Vmatrix},
\]
where \( {K}_{G} \) is the universal Grothendieck constant.
Of course, sense must be made of the square of a member of a Banach lattice, but this causes no difficulty for a Krivine; he made sense of it and derived the above inequality, thereby clearing the way for some remarkably sharp theorems in the finer structure theory of Banach lattices.
To cite but one such advance, we need to introduce the Orlicz property: a Banach space \( X \) has the Orlicz property if given an unconditionally convergent series \( {\sum }_{n}{x}_{n} \) in \( X \), then \( {\sum }_{n}{\begin{Vmatrix}{x}_{n}\end{Vmatrix}}^{2} < \infty \) . Orlicz showed that \( {L}_{p}\left\lbrack {0,1}\right\rbrack \) has the Orlicz property whenever \( 1 \leq p \leq 2 \) . As we mentioned, Orlicz’s proof can be easily adapted to show the somewhat stronger feature of the spaces \( {L}_{p}\left\lbrack {0,1}\right\rbrack \) for \( 1 \leq p \leq 2 \), namely, they have cotype 2 . With Krivine’s version of Grothendieck's inequality in hand, B. Maurey was able to establish the following improvement of a result of Dubinsky, Pelczynski, and Rosenthal (1972).
Theorem. If \( X \) is a Banach lattice, then \( X \) has cotype 2 if and only if \( X \) has the Orlicz property.
Generally, it is so that spaces having cotype 2 have the Orlicz property; however, it is not known if every Banach space with the Orlicz property has cotype 2.
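The implication from cotype 2 to the Orlicz property, valid in any Banach space, is a short averaging argument; a sketch, writing \( {r}_{i} \) for the Rademacher functions:

```latex
% Cotype 2: there is a constant C so that for all finite x_1, \dots, x_n in X,
\[
  \Bigl(\sum_{i=1}^{n}\|x_i\|^{2}\Bigr)^{1/2}
  \;\le\; C\,\Bigl(\int_{0}^{1}\Bigl\|\sum_{i=1}^{n} r_i(t)\,x_i\Bigr\|^{2}\,dt\Bigr)^{1/2}
  \;\le\; C\,\sup_{\varepsilon_i = \pm 1}\Bigl\|\sum_{i=1}^{n}\varepsilon_i\,x_i\Bigr\| .
\]
% If \sum_n x_n converges unconditionally, the supremum on the right stays
% bounded over all finite subsets of indices, so \sum_n \|x_n\|^2 < \infty:
% the space has the Orlicz property.
```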
A. Pelczynski and P. Wojtaszczyk were studying absolutely summing operators from the disk algebra to \( {l}_{2} \) when they discovered their proof of what is essentially Grothendieck's inequality. They observed that an old chestnut of R. E. A. C. Paley (1933) could, with some work, be reinterpreted as saying that there is an absolutely summing operator from the disk algebra onto \( {l}_{2} \) . Using this and the lifting property of \( {l}_{1} \), they were able to deduce that every operator from \( {l}_{1} \) to \( {l}_{2} \) is absolutely summing. Incidentally, they also noted that the existence of an absolutely summing operator from the disk algebra onto \( {l}_{2} \) serves as a point of distinction between the disk algebra and any space of continuous functions. Any absolutely summing operator from a \( {\mathcal{L}}_{\infty } \) -space to \( {l}_{2} \) is compact; so the existence of a quotient map from the disk algebra onto \( {l}_{2} \) implies that the disk algebra is not isomorphic as a Banach space to any \( C\left( K\right) \) -space.
It is of more than passing interest that the Pelczynski-Wojtaszczyk proof that the disk algebra is not isomorphic to any \( {\mathcal{L}}_{\infty } \) -space had already been employed by S. V. Kisliakov, at least in spirit. Kisliakov (1976) showed that for \( n \geq 2 \) the spaces \( {C}^{k}\left( {I}^{n}\right) \) of \( k \) -times continuously differentiable functions on the \( n \) -cube are not \( {\mathcal{L}}_{\infty } \) -spaces by exhibiting operators from their duals to Hilbert space that fail to be absolutely 1-summing.
As is usual in such matters, the precise determination of the best constant that works in Grothendieck's inequality has aroused considerable curiosity. Despite the optimistic hopes of a number of mathematicians, this constant appears on the surface to be unrelated to any of the old-time favorite constants; J. L. Krivine has provided a scheme that hints at the best value of the Grothendieck constant and probably sheds considerable light (for those who will see) on the exact nature of Grothendieck's inequality.
We have presented the results of the section entitled The Grothendieck-Lindenstrauss-Pelczynski Cycle much as Lindenstrauss and Pelczynski did, without confronting some small technical difficulties which arise when one pursues the full strength of Theorem 4, namely, if \( 1 \leq p \leq 2 \), then every operator from an \( {\mathcal{L}}_{\infty } \) -space to an \( {\mathcal{L}}_{p} \) -space is absolutely 2-summing.
The development of the structure of \( {\mathcal{L}}_{p} \) -spaces has been one of the crowning successes of Banach space theory following the Lindenstrauss-Pelczynski breakthrough. This is not the place for one to read of the many nuts cracked in the subject's development; rather, we preach patience while awaiting volume III of the Lindenstrauss-Tzafriri books wherein the complete story of the \( {\mathcal{L}}_{p} \) -spaces is to be told.
A Banach space \( Y \) is said to have the Grothendieck property if every operator from \( Y \) to \( {l}_{2} \) is absolutely 1-summing.
Theorem 4 (Lindenstrauss-Pelczynski). Let \( \left( {x}_{n}\right) \) be a normalized unconditional basis of \( {c}_{0} \) . Then \( \left( {x}_{n}\right) \) is equivalent to the unit vector basis.
Lemma 9.3.2. Let \( \left( {M, F}\right) \) be a Finsler manifold.
- Suppose that at some \( p \in M \), the exponential map \( {\exp }_{p} : {T}_{p}M \rightarrow M \) is a covering projection.
- Let \( {\sigma }_{0}\left( t\right) \mathrel{\text{:=}} {\exp }_{p}\left( {t{T}_{0}}\right) \) and \( {\sigma }_{1}\left( t\right) \mathrel{\text{:=}} {\exp }_{p}\left( {t{T}_{1}}\right) ,0 \leq t \leq L \) be any two (smooth) geodesics emanating from \( p \) and terminating at some common \( q \in M \) .
The following conclusions hold:
* If \( {\sigma }_{0} \) is homotopic to \( {\sigma }_{1} \) through a homotopy with fixed endpoints \( p \) and \( q \), then \( {T}_{0} = {T}_{1} \) (equivalently, \( {\sigma }_{0} = {\sigma }_{1} \) ).
* In particular, if \( {\sigma }_{0} \) and \( {\sigma }_{1} \) are not reparametrizations of each other, then they cannot be deformed to each other through a homotopy with fixed endpoints \( p \) and \( q \) .
Proof. The contrapositive of the first conclusion encompasses the second conclusion. So it suffices to establish the first one.
Suppose \( {\sigma }_{0} \) is homotopic to \( {\sigma }_{1} \), through a homotopy \( h\left( {t, u}\right) ,0 \leq t \leq L \) , \( 0 \leq u \leq 1 \) with fixed endpoints \( p \) and \( q \) . Using Theorem 9.3.1, we lift this \( h \) to a homotopy \( \widetilde{h} : \left\lbrack {0, L}\right\rbrack \times \left\lbrack {0,1}\right\rbrack \rightarrow {T}_{p}M \) with \( \widetilde{h}\left( {t,0}\right) = t{T}_{0} \) .
By hypothesis, every \( t \) -curve of the homotopy \( h \) begins at \( p \) and ends at \( q \) . Theorem 9.3.1 assures us that correspondingly, every \( t \) -curve of the lifted homotopy \( \widetilde{h} \) begins at the origin of \( {T}_{p}M \) and ends at the tip of \( L{T}_{0} \in {T}_{p}M \) .
Note that both \( \widetilde{h}\left( {t,1}\right) \) and \( t{T}_{1} \) are lifts of \( {\sigma }_{1} \) which emanate from the origin of \( {T}_{p}M \) . So, by a corollary of Theorem 9.3.1, they must be the same. Consequently, \( \widetilde{h} \) is a homotopy between the rays \( t{T}_{0} \) and \( t{T}_{1} \), and all the intermediate \( t \) -curves share the same endpoints. However, the only way for the two rays \( t{T}_{0} \) and \( t{T}_{1},0 \leq t \leq L \), to have the same endpoints would be \( {T}_{0} = {T}_{1} \) . This is equivalent to saying that \( {\sigma }_{0} \) is actually identical to \( {\sigma }_{1} \) .
Let us now fulfill the goal stated at the beginning of this section.
Theorem 9.3.3. Let \( \left( {M, F}\right) \) be any forward geodesically complete, connected Finsler manifold of nonpositive flag curvature.
- Fix \( p, q \in M \) . Then, within each homotopy class of paths from \( p \) to \( q \), there exists a unique shortest smooth geodesic within that class.
- In particular, fix \( p \in M \) . Then, within every homotopy class of loops based at \( p \), there exists a unique shortest smooth closed geodesic within that class.
Proof. The existence has been ascertained in Theorem 8.7.1; it only requires forward geodesic completeness and connectedness.
We establish uniqueness here. Since \( \left( {M, F}\right) \) has, by hypothesis, nonpositive flag curvature, the exponential map \( {\exp }_{p} : {T}_{p}M \rightarrow M \) is a covering projection. This was what we found in \( §{9.2} \) .
Let \( \alpha \) be any homotopy class of paths from \( p \) to \( q \) . Suppose \( {\sigma }_{0}\left( t\right) \mathrel{\text{:=}} \) \( {\exp }_{p}\left( {t{T}_{0}}\right) \) and \( {\sigma }_{1}\left( t\right) \mathrel{\text{:=}} {\exp }_{p}\left( {t{T}_{1}}\right) ,0 \leq t \leq L \) are any two shortest geodesics in the class \( \alpha \) . Being members of the same homotopy class, they are homotopic to each other through intermediate paths with endpoints \( p \) and \( q \) . By Lemma 9.3.2, we must have \( {\sigma }_{0} = {\sigma }_{1} \) .
## Exercises
## Exercise 9.3.1:
(a) Give a proof of the Covering Homotopy theorem (Theorem 9.3.1). This is a standard result covered in every algebraic topology text. It is also treated in some geometry texts; see, for example, [ST]. However, consult an external reference only if absolutely necessary.
(b) What is the intuitive message behind that theorem?
Exercise 9.3.2:
(a) In case the flag curvature of our compact connected Finsler manifold is nonpositive, do you suppose that every free homotopy class of loops in \( M \) contains a unique shortest geodesic loop?
(b) Also, are there any compact, simply connected Finsler manifolds with nonpositive flag curvature?
## 9.4 The Cartan-Hadamard Theorem
Let us give another application of the Covering Homotopy theorem discussed in \( §{9.3} \) .
Theorem 9.4.1 (Cartan-Hadamard). Let \( \left( {M, F}\right) \) be any forward geo-desically complete, connected Finsler manifold of nonpositive flag curvature. Then:
(1) Geodesics in \( \left( {M, F}\right) \) do not contain conjugate points.
(2) For any fixed \( p \in M \), the exponential map \( {\exp }_{p} : {T}_{p}M \rightarrow M \) is a globally defined \( {C}^{1} \) local diffeomorphism from \( {T}_{p}M \) onto \( M \) . Furthermore, this surjection is in fact a covering projection.
(3) In case \( M \) happens to be simply connected, that exponential map \( {\exp }_{p} \) is actually a \( {C}^{1} \) diffeomorphism from the tangent space \( {T}_{p}M \) onto the manifold \( M \) .
Remark: Recall from \( §{5.3} \) that, in the general Finsler setting, the exponential map is only \( {C}^{1} \) at the origin of \( {T}_{p}M \), although it is smooth away from the origin. Exercise 5.3.5 tells us that the exponential map is smooth on the entire \( {T}_{p}M \) if and only if the Finsler structure is of Berwald type.
Proof. The first two conclusions have already been established in Propositions 9.1.2 and 9.2.2.
It remains to check that the covering projection \( {\exp }_{p} \) is injective whenever the manifold \( M \) is simply connected. We give two separate and independent arguments. The first one depends on the Covering Homotopy theorem discussed in \( §{9.3} \) . The second argument completely avoids the material in that section, but uses the concept of deck transformations instead.
* Suppose \( {\exp }_{p}\left( {v}_{0}\right) = q = {\exp }_{p}\left( {v}_{1}\right) \) . Then \( {\sigma }_{0}\left( t\right) \mathrel{\text{:=}} {\exp }_{p}\left( {t{v}_{0}}\right) \) and \( {\sigma }_{1}\left( t\right) \mathrel{\text{:=}} {\exp }_{p}\left( {t{v}_{1}}\right) ,0 \leq t \leq 1 \) are two geodesics in \( M \) from \( p \) to \( q \) . Since \( M \) is simply connected, it can be shown that \( {\sigma }_{0} \) is homotopic to \( {\sigma }_{1} \) through a homotopy with fixed endpoints. See Exercise 9.4.1. By Lemma 9.3.2, which follows from the Covering Homotopy theorem, we must have \( {v}_{0} = {v}_{1} \) . Thus \( {\exp }_{p} \) is injective.
* Alternatively, we can apply a standard result about covering projections to \( {\exp }_{p} \) . Note that its domain \( {T}_{p}M \) is simply connected and, its range \( M \), being a manifold, is always locally simply connected. Therefore the group of deck transformations at any \( x \in M \) is isomorphic to the fundamental group \( \pi \left( {M, x}\right) \) . However, \( M \) is by hypothesis simply connected. Thus \( \pi \left( {M, x}\right) \) contains only one element and consequently there is only one deck. That means every \( x \) has exactly one preimage under \( {\exp }_{p} \) . This again proves the injectivity of \( {\exp }_{p} \) .
Compare the treatment here with those in [Au] and [Daz].
## Exercise
Exercise 9.4.1: Let \( \sigma \) and \( \tau \) be any two curves from \( p \) to \( q \), in a simply connected manifold \( M \) . Prove that they are homotopic, through a homotopy with fixed endpoints \( p \) and \( q \) . Hints: visualize the following intuition.

* Go from \( p \) to \( q \) along \( \sigma \), then back to \( p \) along the reverse of \( \tau \) . This defines a loop based at \( p \) . Call it \( c \) .
* Since \( M \) is simply connected, \( c \) can be shrunk down to the point \( p \), using a 1-parameter family (indexed by \( u \in \left\lbrack {0,1}\right\rbrack \) ) of loops \( {c}_{u} \) in \( M \) based at \( p \) . Here, \( {c}_{0} = c \) and \( {c}_{1} \) is the constant curve at \( p \) .
* During this shrinking, the point \( q \) gives rise to a curve \( \xi = \xi \left( u\right) \) , where \( \xi \left( 0\right) = q \) and \( \xi \left( 1\right) = p \) .
* Each intermediate loop \( {c}_{u} \) can be described in a convenient way as follows. Go from \( p \) to \( \xi \left( u\right) \) along some \( {\sigma }_{u} \), then back to \( p \) along the reverse of some \( {\tau }_{u} \) .
* For each \( u \), we travel along \( {\sigma }_{u} \) from \( p \) to \( \xi \left( u\right) \), then to \( q \) along the reverse of a portion of \( \xi \) . This defines a curve from \( p \) to \( q \) . The resulting \( u \) -indexed family of curves from \( p \) to \( q \) represents a homotopy, with fixed endpoints, between \( \sigma \) and the reverse of \( \xi \) . Likewise, the use of \( {\tau }_{u} \) gives a homotopy, again with fixed endpoints, between \( \tau \) and the reverse of \( \xi \) .
* Combine the two homotopies described above.
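The final combination step can be written down explicitly. Assuming \( {H}_{1} \) is the fixed-endpoint homotopy from the first curve to the reverse of \( \xi \), and \( {H}_{2} \) the one from the second curve, a standard concatenation does the job:

```latex
% Run H_1 forward in the parameter u, then H_2 backward:
\[
  H(t,u) \;=\;
  \begin{cases}
    H_1(t,\,2u),   & 0 \le u \le \tfrac12,\\[2pt]
    H_2(t,\,2-2u), & \tfrac12 \le u \le 1 .
  \end{cases}
\]
% At u = 1/2 both formulas give the reverse of \xi, so H is continuous;
% H(t,0) is the first curve, H(t,1) is the second, and every intermediate
% curve runs from p to q, as required.
```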
Exercise 9.4.2: In the proof of the Cartan-Hadamard theorem, we gave two arguments for the injectivity of \( {\exp }_{p} \) when \( M \) is simply connected. The second one invokes the fact that the group of deck transformations is isomorphic to \( \pi \left( {M, x}\right) \) . Prove this fact without consulting [ST].
## 9.5 Prelude to Rauch's Theorem
We now prepare to extend the comparison arguments in \( §{9.1} \) and Exercise 9.1.3 to a broader setting. The preparation involves two technical ingredients that are interesting and important in their own right.
## 9.5 A. Transplanting Vector Fields
Begin with
Corollary 6.4.11. Let \( M \) be a compact connected oriented manifold of dimension \( {2n}, n \) odd. Then for \( G = \mathbb{Z} \) or any field \( \mathbb{F} \) of characteristic not equal to 2, \( \operatorname{rank}\left( {{K}^{n}\left( {M;G}\right) }\right) \) is even. Also, the Euler characteristic \( \chi \left( M\right) \) is even.
Proof. By Theorem 6.4.8, \( \langle \;,\; \rangle \) is a nonsingular skew-symmetric bilinear form on \( {K}^{n}\left( {M;G}\right) \), so by Theorem B.2.1, \( {K}^{n}\left( {M;G}\right) \) must have even rank.
We may use any field to compute Euler characteristic. Choosing \( \mathbb{F} = \mathbb{Q} \), say, and using Poincaré duality, a short calculation shows
\[
\chi \left( M\right) = \mathop{\sum }\limits_{{k = 0}}^{{2n}}{\left( -1\right) }^{k}\dim {H}_{k}\left( {M;\mathbb{Q}}\right)
\]
\[
= 2\left( {\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{\left( -1\right) }^{k}\dim {H}_{k}\left( {M;\mathbb{Q}}\right) }\right) - \dim {H}_{n}\left( {M;\mathbb{Q}}\right)
\]
which is even.
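As a sanity check, the case \( n = 1 \) (so \( \dim M = 2 \) ) is the familiar story of closed orientable surfaces, whose Betti numbers are standard:

```latex
% For the closed orientable surface \Sigma_g of genus g:
%   \dim H_0 = 1, \quad \dim H_1 = 2g, \quad \dim H_2 = 1.
\[
  \operatorname{rank} H^{1}(\Sigma_g;\mathbb{Z}) = 2g \quad (\text{even}),
  \qquad
  \chi(\Sigma_g) = 1 - 2g + 1 = 2 - 2g \quad (\text{even}),
\]
% in agreement with Corollary 6.4.11; the skew-symmetric form on H^1 is the
% usual symplectic intersection pairing of 1-cycles on the surface.
```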
Definition 6.4.12. (1) Two compact connected oriented \( n \) -manifolds \( M \) and \( N \) with fundamental classes \( \left\lbrack M\right\rbrack \) and \( \left\lbrack N\right\rbrack \) are of the same oriented homotopy type (resp. same oriented homeomorphism type) if there is a homotopy equivalence (resp. a homeomorphism) \( f : M \rightarrow N \) with \( {f}_{ * }\left( \left\lbrack M\right\rbrack \right) = \left\lbrack N\right\rbrack \) .
(2) If \( M \) is a compact connected orientable manifold then a homotopy equivalence (or a homeomorphism) \( f : M \rightarrow M \) is orientation preserving (resp. orientation reversing) if \( {f}_{ * } : {H}_{n}\left( {M;\mathbb{Z}}\right) \rightarrow {H}_{n}\left( {M;\mathbb{Z}}\right) \) is multiplication by 1 (resp. by -1 ). \( \diamond \)
Corollary 6.4.13. Let \( M \) be a compact connected oriented \( n \) -manifold. The isomorphism class of the intersection form on \( M \) is an invariant of the oriented homotopy type of \( M \) .
The finest information comes from considering the intersection form over \( \mathbb{Z} \), but that may lead to difficult algebraic questions. However, the information provided by the intersection form over \( \mathbb{R} \) is still enough to obtain interesting results.
Corollary 6.4.14. Let \( M \) be a compact connected oriented manifold of dimension 2n with \( n \) even. If the signature \( \sigma \left( M\right) \neq 0 \), then there is no orientation-reversing homotopy equivalence (and hence no orientation-reversing homeomorphism) \( f \) : \( M \rightarrow M \) .
Theorem 6.4.15. Let \( M \) be a compact connected oriented manifold of dimension \( {2n} \) with \( n \) even. If the signature \( \sigma \left( M\right) \neq 0 \), then \( M \) is not the boundary of an oriented \( \left( {{2n} + 1}\right) \) -manifold.
Proof. Suppose that \( M \) is the boundary of \( {X}^{{2n} + 1} \) . Let \( V = {H}^{n}\left( {{M}^{2n};\mathbb{R}}\right) \), and let \( V \) have dimension \( t \) . We will use Lefschetz duality to find a subspace \( {V}_{0} \) of \( V \) of dimension \( t/2 \) with the restriction of the intersection form on \( M \) to \( {V}_{0} \) identically 0 . By Lemma B.2.8, this shows that \( \sigma \left( M\right) = 0 \) . (This also shows that if \( t \) is odd, \( M \) is not the boundary of an oriented \( \left( {{2n} + 1}\right) \) -manifold, but this is implied by our hypothesis, as if \( t \) is odd, \( \sigma \left( M\right) \) must be nonzero.)
Consider the diagram (we omit the coefficients \( \mathbb{R} \) )

which is commutative up to sign and where the vertical maps are isomorphisms. Then
\[
\dim {H}^{n}\left( M\right) = \dim {H}_{n}\left( M\right) = \dim \operatorname{Ker}\left( {i}_{ * }\right) + \dim \operatorname{Im}\left( {i}_{ * }\right)
\]
\[
= \dim \operatorname{Ker}\left( \delta \right) + \dim \operatorname{Im}\left( {i}_{ * }\right)
\]
\[
= \dim \operatorname{Im}\left( {i}^{ * }\right) + \dim \operatorname{Im}\left( {i}_{ * }\right)
\]
But we have the commutative diagram of Theorem 5.5.12

with the horizontal maps isomorphisms, and this implies that
\[
\dim \operatorname{Im}\left( {i}^{ * }\right) = \dim \operatorname{Im}\left( {i}_{ * }\right)
\]
Thus we conclude that \( {V}_{0} = \operatorname{Im}\left( {i}^{ * }\right) \) is a subspace of \( V = {H}^{n}\left( M\right) \) with \( \dim {V}_{0} = \)
\( \left( {1/2}\right) \dim V \) . Now we must investigate the cup product on \( {V}_{0} \) .
Recall that \( {i}_{ * }\left( \left\lbrack M\right\rbrack \right) = 0 \in {H}_{2n}\left( X\right) \) by Corollary 6.2.36.
Now let \( \alpha ,\beta \in {V}_{0} \) so that \( \alpha = {i}^{ * }\left( \gamma \right) \) and \( \beta = {i}^{ * }\left( \delta \right) \) for \( \gamma ,\delta \in {H}^{n}\left( X\right) \) . Then
\[
\langle \alpha ,\beta \rangle = \langle \alpha \cup \beta ,\left\lbrack M\right\rbrack \rangle = \left\langle {{i}^{ * }\left( \gamma \right) \cup {i}^{ * }\left( \delta \right) ,\left\lbrack M\right\rbrack }\right\rangle
\]
\[
= \left\langle {{i}^{ * }\left( {\gamma \cup \delta }\right) ,\left\lbrack M\right\rbrack }\right\rangle = \left\langle {\gamma \cup \delta ,{i}_{ * }\left( \left\lbrack M\right\rbrack \right) }\right\rangle
\]
\[
= \langle \gamma \cup \delta ,0\rangle = 0
\]
completing the proof.
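A standard example for Theorem 6.4.15, assuming the usual ring structure \( {H}^{ * }\left( {{\mathbb{{CP}}}^{2};\mathbb{Z}}\right) \cong \mathbb{Z}\left\lbrack \alpha \right\rbrack /\left( {\alpha }^{3}\right) \) :

```latex
% M = \mathbb{CP}^2, \dim M = 4, so n = 2 is even. Here H^2(\mathbb{CP}^2;\mathbb{R})
% is one-dimensional, spanned by \alpha, and with the standard orientation
\[
  \langle \alpha, \alpha \rangle
  \;=\; \langle \alpha \cup \alpha, [\mathbb{CP}^2] \rangle \;=\; 1,
  \qquad\text{so}\qquad \sigma(\mathbb{CP}^2) = 1 \neq 0 .
\]
% Hence \mathbb{CP}^2 is not the boundary of any compact oriented 5-manifold,
% and by Corollary 6.4.14 it admits no orientation-reversing self-homotopy-equivalence.
```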
In order to give examples for these two theorems we consider the connected sum
construction of oriented manifolds, a construction that is important in its own right. The basic idea is very simple, but we will have to exercise some care to ensure that the orientation comes out right.
Definition 6.4.16. Let \( M \) and \( N \) both be compact connected oriented \( n \) -manifolds, \( n > 0 \), with fundamental classes \( \left\lbrack M\right\rbrack \) and \( \left\lbrack N\right\rbrack \) respectively. Let \( {\varphi }_{\alpha } : {\mathbb{R}}^{n} \rightarrow {U}_{\alpha } \) be a coordinate patch on \( M \) and \( {\psi }_{\beta } : {\mathbb{R}}^{n} \rightarrow {V}_{\beta } \) be a coordinate patch on \( N \) . Let \( {D}^{n} \) be the closed unit ball in \( {\mathbb{R}}^{n} \) and let \( {S}^{n - 1} \) be the unit sphere in \( {\mathbb{R}}^{n} \) . Let \( {M}^{\prime } = M - {\varphi }_{\alpha }\left( {\mathring{D}}^{n}\right) \) and observe that \( {M}^{\prime } \) is a manifold with boundary \( \partial {M}^{\prime } = {\varphi }_{\alpha }\left( {S}^{n - 1}\right) \) homeomorphic to \( {S}^{n - 1} \) .
We have isomorphisms on homology
\[
{H}_{n}\left( M\right) \rightarrow {H}_{n}\left( {M,{\varphi }_{\alpha }\left( {\overset{ \circ }{D}}^{n}\right) }\right) \rightarrow {H}_{n}\left( {{M}^{\prime },\partial {M}^{\prime }}\right)
\]
where the first isomorphism comes from the inclusion of pairs \( \left( {M,\varnothing }\right) \rightarrow \) \( \left( {M,{\varphi }_{\alpha }\left( {D}^{n}\right) }\right) \) and the second is the inverse of excision. (Note we can apply excision here by Theorem 3.2.7.) Let \( \left\lbrack {{M}^{\prime },\partial {M}^{\prime }}\right\rbrack \) be the fundamental class of the manifold with boundary \( {M}^{\prime } \) which is the image of the fundamental class \( \left\lbrack M\right\rbrack \) of \( M \) under this isomorphism, and let \( \left\lbrack {\partial {M}^{\prime }}\right\rbrack = \partial \left( \left\lbrack {{M}^{\prime },\partial {M}^{\prime }}\right\rbrack \right) \in {H}_{n - 1}\left( {\partial {M}^{\prime }}\right) \) . Define \( {N}^{\prime },\left\lbrack {{N}^{\prime },\partial {N}^{\prime }}\right\rbrack \), and \( \left\lbrack {\partial {N}^{\prime }}\right\rbrack \) similarly.
Let \( C = {S}^{n - 1} \times \left\lbrack {-1,1}\right\rbrack \) . Let \( {i}_{j} : {S}^{n - 1} \rightarrow {S}^{n - 1} \times \{ j\} \subset C \) for \( j = - 1,0 \) or 1 . Choose a fundamental class \( \left\lbrack {S}^{n - 1}\right\rbrack \in {H}_{n - 1}\left( {S}^{n - 1}\right) \) and let \( \left\lbrack {S}_{0}^{n - 1}\right\rbrack = {\left( {i}_{0}\right) }_{ * }\left( \left\lbrack {S}^{n - 1}\right\rbrack \right) \in {H}_{n - 1}\left( C\right) \) and \( \left\lbrack {S}_{j}^{n - 1}\right\rbrack = {\left( {i}_{j}\right) }_{ * }\left( \left\lbrack {S}^{n - 1}\right\rbrack \right) \in {H}_{n - 1}\left( {{S}^{n - 1}\times \{ j\} }\right) \) for \( j = - 1 \) or 1 . Observe that under the inclusions \( {S}^{n - 1} \times \{ j\} \rightarrow C \), the image of \( \left\lbrack {S}_{j}^{n - 1}\right\rbrack \) is \( \left\lbrack {S}_{0}^{n - 1}\right\rbrack, j = - 1,1 \) . Choose the fundamental class \( \left\lbrack {C,\partial C}\right\rbrack \in {H}_{n}\left( {C,\partial C}\right) \) so that \( \left\lbrack {\partial C}\right\rbrack = \partial \left( \left\lbrack {C,\partial C}\right\rbrack \right) = \left\lbrack {S}_{1}^{n - 1}\right\rbrack - \left\lbrack {S}_{-1}^{n - 1}\right\rbrack \in \) \( {H}_{n - 1}\left( {\partial C}\right) \) .
Now let \( {f}_{-1} : {S}^{n - 1} \times \{ - 1\} \rightarrow \partial {M}^{\prime } \) be a homeomorphism with \( {\left( {f}_{-1}\right) }_{ * }\left( \left\lbrack {S}_{-1}^{n - 1}\right\rbrack \right) = \) \( \left\lbrack {\partial {M}^{\prime }}\right\rbrack \in {H}_{n - 1}\left( {\partial {M}^{\prime }}\right) \) and let \( {f}_{1} : {S}^{n - 1} \times \{ 1\} \rightarrow \partial {N}^{\prime } \) be a homeomorphism with \( {\left( {f}_{1}\right) }_{ * }\left( \left\lbrack {S}_{1}^{n - 1}\right\rbrack \right) = - \left\lbrack {\partial {N}^{\prime }}\right\rbrack \in {H}_{n - 1}\left( {\partial {N}^{\prime }}\right) \) . The connected sum \( M\# N \) is the identification space
\[
M\# N = {M}^{\prime } \cup {N}^{\prime } \cup C/ \sim
\]
under the identification \( \left( {s, - 1}\right) \sim {f}_{-1}\left( s\right) \) and \( \left( {s,1}\right) \sim {f}_{1}\left( s\right) \) for \( s \in {S}^{n - 1} \) .
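One elementary consequence of the decomposition \( M\# N = {M}^{\prime } \cup C \cup {N}^{\prime } \) is worth recording; it is a routine inclusion-exclusion count (not part of the definition above), using the fact that the pieces overlap in copies of \( {S}^{n - 1} \) :

```latex
% Removing an open ball changes \chi by \chi(S^{n-1}) - 1, and the cylinder C
% deformation retracts to S^{n-1}; collecting terms gives
\[
  \chi(M \# N) \;=\; \chi(M) + \chi(N) - \chi(S^{n})
  \;=\;
  \begin{cases}
    \chi(M) + \chi(N) - 2, & n \text{ even},\\
    \chi(M) + \chi(N),     & n \text{ odd}.
  \end{cases}
\]
% Check for surfaces (n = 2):
%   \chi(\Sigma_g \# \Sigma_h) = (2-2g) + (2-2h) - 2 = 2 - 2(g+h),
% so genus is additive under connected sum.
```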
Proposition 12.22. Every continuous action of a compact topological group on a Hausdorff space is proper.
Proof. Suppose \( G \) is a compact group acting continuously on a Hausdorff space \( E \), and let \( \Theta : G \times E \rightarrow E \times E \) be the map defined by (12.5). Given a compact set \( L \subseteq E \times E \), let \( K = {\pi }_{2}\left( L\right) \), where \( {\pi }_{2} : E \times E \rightarrow E \) is the projection on the second factor. Because \( E \times E \) is Hausdorff, \( L \) is closed in \( E \times E \), so \( {\Theta }^{-1}\left( L\right) \) is closed in \( G \times E \) by continuity. Since \( \Theta \left( {g, e}\right) = \left( {g \cdot e, e}\right) \), every \( \left( {g, e}\right) \in {\Theta }^{-1}\left( L\right) \) has \( e \in K \) . Thus \( {\Theta }^{-1}\left( L\right) \) is a closed subset of the compact set \( G \times K \), hence compact.
In Chapter 4, we gave several alternative characterizations of properness for continuous maps. Similarly, there are other useful characterizations of properness of group actions. One of the most important is the following; others are described in Problems 12-19 and 12-20.
Proposition 12.23. Suppose we are given a continuous action of a topological group \( G \) on a Hausdorff space \( E \) . The action is proper if and only if for every compact subset \( K \subseteq E \), the set \( {G}_{K} = \{ g \in G : \left( {g \cdot K}\right) \cap K \neq \varnothing \} \) is compact.
Proof. Let \( \Theta : G \times E \rightarrow E \times E \) be the map defined by (12.5). Suppose first that \( \Theta \) is proper. Then for any compact set \( K \subseteq E \), we have
\[
{G}_{K} = \{ g \in G : \text{ there exists }e \in K\text{ such that }g \cdot e \in K\}
\]
\[
= \{ g \in G\text{ : there exists }e \in E\text{ such that }\Theta \left( {g, e}\right) \in K \times K\}
\]
(12.6)
\[
= {\pi }_{G}\left( {{\Theta }^{-1}\left( {K \times K}\right) }\right)
\]
where \( {\pi }_{G} : G \times E \rightarrow G \) is the projection (Fig. 12.1). Thus \( {G}_{K} \) is compact.
Conversely, suppose \( {G}_{K} \) is compact for every compact set \( K \subseteq E \) . Given a compact subset \( L \subseteq E \times E \), let \( K = {\pi }_{1}\left( L\right) \cup {\pi }_{2}\left( L\right) \subseteq E \), where \( {\pi }_{1},{\pi }_{2} : E \times E \rightarrow E \) are the projections onto the first and second factors, respectively. Then
\[
{\Theta }^{-1}\left( L\right) \subseteq {\Theta }^{-1}\left( {K \times K}\right) = \{ \left( {g, e}\right) : g \cdot e \in K\text{ and }e \in K\} \subseteq {G}_{K} \times K.
\]
Since \( E \times E \) is Hausdorff, \( L \) is closed in \( E \times E \), and so \( {\Theta }^{-1}\left( L\right) \) is closed in \( G \times E \) by continuity. Thus \( {\Theta }^{-1}\left( L\right) \) is a closed subset of the compact set \( {G}_{K} \times K \) and is therefore compact.

Fig. 12.1: Characterizing proper actions.
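Proposition 12.23 makes properness easy to test in concrete cases; two quick checks (standard examples, not drawn from the text above):

```latex
% (1) \mathbb{Z} acting on \mathbb{R} by n \cdot x = x + n. For K = [a,b],
\[
  \mathbb{Z}_K = \{\, n \in \mathbb{Z} : (K + n) \cap K \neq \varnothing \,\}
             = \{\, n \in \mathbb{Z} : |n| \le b - a \,\},
\]
% a finite (hence compact) set, so the action is proper -- consistent with
% the Hausdorff quotient \mathbb{R}/\mathbb{Z} \cong S^1.
%
% (2) \mathbb{Z} acting on S^1 by iterates of an irrational rotation. Taking
% K = S^1 (compact) gives \mathbb{Z}_K = \mathbb{Z}, which is not compact, so
% the action is not proper; indeed every orbit is dense and the quotient
% S^1/\mathbb{Z} is not Hausdorff.
```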
The most significant fact about proper actions is that for sufficiently nice spaces, they always yield Hausdorff quotients, as the next proposition shows.
Proposition 12.24. If a topological group \( G \) acts continuously and properly on a locally compact Hausdorff space \( E \), then the orbit space \( E/G \) is Hausdorff.
Proof. Let \( \mathcal{O} \subseteq E \times E \) be the orbit relation defined in Problem 3-22. By the result of that problem, the orbit space is Hausdorff if and only if \( \mathcal{O} \) is closed in \( E \times E \) . But \( \mathcal{O} \) is just the image of the map \( \Theta : G \times E \rightarrow E \times E \) defined by (12.5). Since \( E \) is a locally compact Hausdorff space, the same is true of \( E \times E \), so it follows from Theorem 4.95 that \( \Theta \) is a closed map. Thus the orbit relation is closed and \( E/G \) is Hausdorff.
The converse of this proposition is not true: for example, if \( E \) is any locally compact Hausdorff space and \( G \) is any noncompact group acting trivially on \( E \) (meaning that \( g \cdot e = e \) for all \( g \) and all \( e \) ), then \( E/G = E \) is Hausdorff but it is easy to see that the action is not proper. Even requiring the action to be free is not enough: Problem 12-18 gives an example of a free continuous action on \( {\mathbb{R}}^{2} \) with Hausdorff quotient that is still not proper. However, we do have the following partial converse, which shows that properness is exactly the right condition for covering space actions.
Proposition 12.25. Suppose we are given a covering space action of a group \( \Gamma \) on a topological space \( E \), and \( E/\Gamma \) is Hausdorff. Then with the discrete topology, \( \Gamma \) acts properly on \( E \) .
Proof. For convenience, write \( X = E/\Gamma \), and let \( q : E \rightarrow X \) be the quotient map, which is a normal covering map by the covering space quotient theorem. It follows from Proposition 3.57 and Problem 3-22 that the orbit relation \( \mathcal{O} \) defined by (3.6) is closed in \( E \times E \) . Also, Problem 11-1(a) shows that \( E \) is Hausdorff. We use Proposition 12.23 to show that the action is proper.
Suppose \( K \subseteq E \) is compact, and assume for the sake of contradiction that \( {\Gamma }_{K} \) is not compact; this means in particular that \( {\Gamma }_{K} \) is infinite. For each \( g \in {\Gamma }_{K} \), there is a point \( e \in K \) such that \( g \cdot e \in K \) . Define a map \( F : {\Gamma }_{K} \rightarrow K \times K \) by choosing one such point \( {e}_{g} \) for each \( g \), and letting \( F\left( g\right) = \left( {g \cdot {e}_{g},{e}_{g}}\right) \) . The fact that \( \Gamma \) acts freely implies that \( F \) is injective, so \( F\left( {\Gamma }_{K}\right) \) is an infinite subset of \( K \times K \) . It follows that \( F\left( {\Gamma }_{K}\right) \) has a limit point \( \left( {{x}_{0},{y}_{0}}\right) \in K \times K \) . Moreover, since \( F\left( {\Gamma }_{K}\right) \subseteq \mathcal{O} \), which is closed in \( E \times E \), we have \( \left( {{x}_{0},{y}_{0}}\right) \in \mathcal{O} \) as well, which means that there exists \( {g}_{0} \in \Gamma \) such that \( {x}_{0} = {g}_{0} \cdot {y}_{0} \) .
Now let \( U \) be a neighborhood of \( {y}_{0} \) satisfying (12.1), and set \( V = {g}_{0} \cdot U \), which is a neighborhood of \( {x}_{0} \) . The fact that \( \left( {{x}_{0},{y}_{0}}\right) \) is a limit point in the Hausdorff space \( E \times E \) means that \( V \times U \) must contain infinitely many points of \( F\left( {\Gamma }_{K}\right) \) . But for each \( g \in {\Gamma }_{K} \) such that \( F\left( g\right) = \left( {g \cdot {e}_{g},{e}_{g}}\right) \in V \times U \), we have \( g \cdot {e}_{g} \in V \cap \left( {g \cdot U}\right) = \) \( \left( {{g}_{0} \cdot U}\right) \cap \left( {g \cdot U}\right) \), which implies that \( g = {g}_{0} \) . This contradicts the fact that there are infinitely many such \( g \) .
For sufficiently nice spaces, including all connected manifolds, the next theorem shows that once we know an action is continuous, proper, and free, it is not necessary to check that it is a covering space action.
Theorem 12.26. Suppose \( E \) is a connected, locally path-connected, and locally compact Hausdorff space, and a discrete group \( \Gamma \) acts continuously, freely, and properly on \( E \) . Then the action is a covering space action, \( E/\Gamma \) is Hausdorff, and the quotient map \( q : E \rightarrow E/\Gamma \) is a normal covering map.
Proof. We need only show that the action is a covering space action, for then Proposition 12.24 shows that \( E/\Gamma \) is Hausdorff, and the covering space quotient theorem shows that \( q \) is a normal covering map.
Suppose \( {e}_{0} \in E \) is arbitrary. Because \( E \) is locally compact, \( {e}_{0} \) has a neighborhood \( V \) contained in a compact set \( K \) . By Proposition 12.23, the set \( {\Gamma }_{K} = \{ g \in \Gamma \) : \( K \cap \left( {g \cdot K}\right) \neq \varnothing \} \) is compact. Because \( \Gamma \) has the discrete topology, this means \( {\Gamma }_{K} \) is finite; let us write \( {\Gamma }_{K} = \left\{ {1,{g}_{1},\ldots ,{g}_{m}}\right\} \) . Since the action is free and \( E \) is Hausdorff, for each \( {g}_{i} \) there are disjoint neighborhoods \( {W}_{i} \) of \( {e}_{0} \) and \( {W}_{i}^{\prime } \) of \( {g}_{i} \cdot {e}_{0} \) . Let
\[
U = V \cap {W}_{1} \cap \left( {{g}_{1}^{-1} \cdot {W}_{1}^{\prime }}\right) \cap \cdots \cap {W}_{m} \cap \left( {{g}_{m}^{-1} \cdot {W}_{m}^{\prime }}\right) .
\]
We will show that \( U \) satisfies (12.1).
First consider \( g = {g}_{i} \) for some \( i \) . If \( e \in U \subseteq {g}_{i}^{-1} \cdot {W}_{i}^{\prime } \), then \( {g}_{i} \cdot e \in {W}_{i}^{\prime } \), which is disjoint from \( {W}_{i} \) and therefore from \( U \) . Thus \( U \cap \left( {{g}_{i} \cdot U}\right) = \varnothing \) . On the other hand, if \( g \in \Gamma \) is not the identity and not one of the \( {g}_{i} \) ’s, then for any \( e \in U \subseteq V \subseteq K \), we have \( g \cdot e \in g \cdot K \), which is disjoint from \( K \) and therefore also from \( U \) . Thus once again we have \( U \cap \left( {g \cdot U}\right) = \varnothing \) .
Corollary 12.27. Let \( M \) be a connected \( n \) -manifold on which a discrete group \( \Gamma \) acts continuously, freely, and properly. Then \( M/\Gamma \) is an \( n \) -manifold.
Example 12.28 (Lens Spaces). By identifying \( {\mathbb{R}}^{4} \) with \( {\mathbb{C}}^{2} \) in the usual way, we can consider \( {\mathbb{S}}^{3} \) as the following subset of \( {\mathbb{C}}^{2} \) :
\[
{\mathbb{S}}^{3} = \left\{ {\left( {{z}_{1},{z}_{2}}\right) \in {\mathbb{C}}^{2} : {\left| {z}_{1}\right| }^{2} + {\left| {z}_{2}\right| }^{2} = 1}\right\}
\]
Fix a pair of relatively prime integers \( 1 \leq m < n \), and define an action of \( \mathbb{Z}/n \) on \( {\mathbb{S}}^{3} \)
by
\[
\left\lbrack k\right\rbrack \cdot \left( {{z}_{1},{z}_{2}}\right) = \left( {{e}^{{2\pi ik}/n}{z}_{1},{e}^{{2\pi ikm}/n}{z}_{2}}\right) .
\]
It can easily be checked that this
|
Proposition 12.22. Every continuous action of a compact topological group on a Hausdorff space is proper.
|
Suppose \( G \) is a compact group acting continuously on a Hausdorff space \( E \), and let \( \Theta : G \times E \rightarrow E \times E \) be the map defined by (12.5). Given a compact set \( L \subseteq E \times E \), let \( K = {\pi }_{2}\left( L\right) \), where \( {\pi }_{2} : E \times E \rightarrow E \) is the projection on the second factor. Since \( \Theta \left( {g, e}\right) = \left( {g \cdot e, e}\right) \), every point of \( {\Theta }^{-1}\left( L\right) \) lies in \( G \times K \) . Because \( E \times E \) is Hausdorff, \( L \) is closed in \( E \times E \), so by continuity \( {\Theta }^{-1}\left( L\right) \) is a closed subset of the compact set \( G \times K \), hence compact.
|
Exercise 1.1.8 Let \( {a}_{1},\ldots ,{a}_{n} \) for \( n \geq 2 \) be nonzero integers. Suppose there is a prime \( p \) and positive integer \( h \) such that \( {p}^{h} \mid {a}_{i} \) for some \( i \) and \( {p}^{h} \) does not divide \( {a}_{j} \) for all \( j \neq i \) .
Then show that
\[
S = \frac{1}{{a}_{1}} + \cdots + \frac{1}{{a}_{n}}
\]
is not an integer.
Exercise 1.1.9 Prove that if \( n \) is a composite integer, then \( n \) has a prime factor not exceeding \( \sqrt{n} \) .
Exercise 1.1.10 Show that if the smallest prime factor \( p \) of the positive integer \( n \) exceeds \( \sqrt[3]{n} \), then \( n/p \) must be prime or 1 .
Exercise 1.1.11 Let \( p \) be prime. Show that each of the binomial coefficients \( \left( \begin{array}{l} p \\ k \end{array}\right) ,1 \leq k \leq p - 1 \), is divisible by \( p \) .
Exercise 1.1.12 Prove that if \( p \) is an odd prime, then \( {2}^{p - 1} \equiv 1\left( {\;\operatorname{mod}\;p}\right) \) .
Exercise 1.1.13 Prove Fermat’s little Theorem: if \( a, p \in \mathbb{Z} \) with \( p \) a prime and \( p \nmid a \), then \( {a}^{p - 1} \equiv 1\left( {\;\operatorname{mod}\;p}\right) \) .
For any natural number \( n \) we define \( \phi \left( n\right) \) to be the number of positive integers less than \( n \) which are coprime to \( n \) . This is known as the Euler \( \phi \) -function.
Theorem 1.1.14 Given \( a, n \in \mathbb{Z},{a}^{\phi \left( n\right) } \equiv 1\left( {\;\operatorname{mod}\;n}\right) \) when \( \gcd \left( {a, n}\right) = 1 \) . This is a theorem due to Euler.
Proof. The case where \( n \) is prime is clearly a special case of Fermat’s little Theorem. The argument is basically the same as that of the alternate solution to Exercise 1.1.13.
Consider the ring \( \mathbb{Z}/n\mathbb{Z} \) . If \( a, n \) are coprime, then \( \bar{a} \) is a unit in this ring. The units form a multiplicative group of order \( \phi \left( n\right) \), and so clearly \( {\bar{a}}^{\phi \left( n\right) } = \overline{1} \) . Thus, \( {a}^{\phi \left( n\right) } \equiv 1\left( {\;\operatorname{mod}\;n}\right) \) .
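Euler's theorem can be checked numerically. The sketch below is not from the text: the function name `phi` and the trial-division implementation of the product formula \( \phi \left( n\right) = n\mathop{\prod }\limits_{{p \mid n}}\left( {1 - \frac{1}{p}}\right) \) are our own choices.

```python
from math import gcd

def phi(n):
    """Euler phi via the product formula phi(n) = n * prod_{p | n} (1 - 1/p)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p   # multiply by (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                       # leftover prime factor
        result -= result // m
    return result

# Euler's theorem: a^phi(n) == 1 (mod n) whenever gcd(a, n) == 1.
for n in range(2, 200):
    for a in range(2, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1
```

The three-argument `pow` performs the modular exponentiation by repeated squaring, mirroring the group-theoretic argument: the residue of \( a \) lies in a group of order \( \phi \left( n\right) \) .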
Exercise 1.1.15 Show that \( n \mid \phi \left( {{a}^{n} - 1}\right) \) for any \( a > n \) .
Exercise 1.1.16 Show that \( n \nmid {2}^{n} - 1 \) for any natural number \( n > 1 \) .
Exercise 1.1.17 Show that
\[
\frac{\phi \left( n\right) }{n} = \mathop{\prod }\limits_{{p \mid n}}\left( {1 - \frac{1}{p}}\right)
\]
by interpreting the left-hand side as the probability that a random number chosen from \( 1 \leq a \leq n \) is coprime to \( n \) .
Exercise 1.1.18 Show that \( \phi \) is multiplicative (i.e., \( \phi \left( {mn}\right) = \phi \left( m\right) \phi \left( n\right) \) when \( \gcd \left( {m, n}\right) = 1) \) and \( \phi \left( {p}^{\alpha }\right) = {p}^{\alpha - 1}\left( {p - 1}\right) \) for \( p \) prime.
Exercise 1.1.19 Find the last two digits of \( {3}^{1000} \) .
Exercise 1.1.20 Find the last two digits of \( {2}^{1000} \) .
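Both exercises reduce to a modular exponentiation: the last two digits of \( N \) are \( N \bmod {100} \) . By hand one uses Euler's theorem, since \( \phi \left( {100}\right) = {40} \mid {1000} \) gives \( {3}^{1000} \equiv 1\left( {\;\operatorname{mod}\;{100}}\right) \), while for \( {2}^{1000} \) one works modulo 4 and 25 separately and recombines. A one-line check (our illustration, not part of the text):

```python
# Last two digits of a^b are a^b mod 100; three-argument pow uses
# repeated squaring, so the huge power is never written out.
print(f"{pow(3, 1000, 100):02d}")  # 01
print(f"{pow(2, 1000, 100):02d}")  # 76
```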
Let \( \pi \left( x\right) \) be the number of primes less than or equal to \( x \) . The prime number theorem asserts that
\[
\pi \left( x\right) \sim \frac{x}{\log x}
\]
as \( x \rightarrow \infty \) . This was first proved in 1896, independently by J. Hadamard and Ch. de la Vallée Poussin.
We will not prove the prime number theorem here, but derive various estimates for \( \pi \left( x\right) \) by elementary methods.
Exercise 1.1.21 Let \( {p}_{k} \) denote the \( k \) th prime. Prove that
\[
{p}_{k + 1} \leq {p}_{1}{p}_{2}\cdots {p}_{k} + 1
\]
Exercise 1.1.22 Show that
\[
{p}_{k} < {2}^{{2}^{k}}
\]
where \( {p}_{k} \) denotes the \( k \) th prime.
Exercise 1.1.23 Prove that \( \pi \left( x\right) \geq \log \left( {\log x}\right) \) .
Exercise 1.1.24 By observing that any natural number can be written as \( s{r}^{2} \) with \( s \) squarefree, show that
\[
\sqrt{x} \leq {2}^{\pi \left( x\right) }
\]
Deduce that
\[
\pi \left( x\right) \geq \frac{\log x}{2\log 2}.
\]
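The elementary lower bounds of Exercises 1.1.24 and 1.1.25(iii) can be compared against an actual prime count. The sieve below and its function name are our own; the bounds checked are exactly the two displayed inequalities.

```python
from math import log

def prime_pi_table(N):
    """Return a list t with t[x] = pi(x) for 0 <= x <= N, via one sieve."""
    sieve = bytearray([1]) * (N + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))
    table, count = [], 0
    for flag in sieve:
        count += flag
        table.append(count)
    return table

pi = prime_pi_table(10000)
for x in range(6, 10001):
    # Exercise 1.1.24: pi(x) >= log x / (2 log 2)
    assert pi[x] >= log(x) / (2 * log(2))
    # Exercise 1.1.25(iii): pi(x) >= x log 2 / (2 log x) for x >= 6
    assert pi[x] >= x * log(2) / (2 * log(x))
```

Both bounds are very weak compared with the prime number theorem: at \( x = {10000} \) they give about 6.6 and 376 respectively, against \( \pi \left( {10000}\right) = {1229} \) .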
Exercise 1.1.25 Let \( \psi \left( x\right) = \mathop{\sum }\limits_{{{p}^{\alpha } \leq x}}\log p \) where the summation is over prime powers \( {p}^{\alpha } \leq x \) .
(i) For \( 0 \leq x \leq 1 \), show that \( x\left( {1 - x}\right) \leq \frac{1}{4} \) . Deduce that
\[
{\int }_{0}^{1}{x}^{n}{\left( 1 - x\right) }^{n}{dx} \leq \frac{1}{{4}^{n}}
\]
for every natural number \( n \) .
(ii) Show that \( {e}^{\psi \left( {{2n} + 1}\right) }{\int }_{0}^{1}{x}^{n}{\left( 1 - x\right) }^{n}{dx} \) is a positive integer. Deduce that
\[
\psi \left( {{2n} + 1}\right) \geq {2n}\log 2
\]
(iii) Prove that \( \psi \left( x\right) \geq \frac{1}{2}x\log 2 \) for \( x \geq 6 \) . Deduce that
\[
\pi \left( x\right) \geq \frac{x\log 2}{2\log x}
\]
for \( x \geq 6 \) .
Exercise 1.1.26 By observing that
\[
\mathop{\prod }\limits_{{n < p \leq {2n}}}p \mid \left( \begin{matrix} {2n} \\ n \end{matrix}\right)
\]
show that
\[
\pi \left( x\right) \leq \frac{{9x}\log 2}{\log x}
\]
for every integer \( x \geq 2 \) .
## 1.2 Applications of Unique Factorization
We begin this section with a discussion of nontrivial solutions to Diophantine equations of the form \( {x}^{l} + {y}^{m} = {z}^{n} \) . Nontrivial solutions are those for which \( {xyz} \neq 0 \) and \( \left( {x, y}\right) = \left( {x, z}\right) = \left( {y, z}\right) = 1 \) .
Exercise 1.2.1 Suppose that \( a, b, c \in \mathbb{Z} \) . If \( {ab} = {c}^{2} \) and \( \left( {a, b}\right) = 1 \), then show that \( a = {d}^{2} \) and \( b = {e}^{2} \) for some \( d, e \in \mathbb{Z} \) . More generally, if \( {ab} = {c}^{g} \) then \( a = {d}^{g} \) and \( b = {e}^{g} \) for some \( d, e \in \mathbb{Z} \) .
Exercise 1.2.2 Solve the equation \( {x}^{2} + {y}^{2} = {z}^{2} \) where \( x, y \), and \( z \) are integers and \( \left( {x, y}\right) = \left( {y, z}\right) = \left( {x, z}\right) = 1 \) .
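The solution of Exercise 1.2.2 is the classical parametrization of primitive Pythagorean triples; a generator sketch (the function and bound parameter are ours) follows.

```python
from math import gcd

def primitive_triples(bound):
    """Yield primitive triples (x, y, z) with x^2 + y^2 = z^2 and z <= bound,
    from x = m^2 - n^2, y = 2mn, z = m^2 + n^2 with m > n > 0,
    gcd(m, n) = 1 and m, n of opposite parity."""
    for m in range(2, int(bound ** 0.5) + 1):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1 and m * m + n * n <= bound:
                yield m * m - n * n, 2 * m * n, m * m + n * n

for x, y, z in primitive_triples(100):
    assert x * x + y * y == z * z and gcd(x, y) == 1
```

The parity and coprimality conditions on \( m, n \) are exactly what makes \( \left( {x, y}\right) = \left( {y, z}\right) = \left( {x, z}\right) = 1 \) .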
Exercise 1.2.3 Show that \( {x}^{4} + {y}^{4} = {z}^{2} \) has no nontrivial solution. Hence deduce, with Fermat, that \( {x}^{4} + {y}^{4} = {z}^{4} \) has no nontrivial solution.
Exercise 1.2.4 Show that \( {x}^{4} - {y}^{4} = {z}^{2} \) has no nontrivial solution.
Exercise 1.2.5 Prove that if \( f\left( x\right) \in \mathbb{Z}\left\lbrack x\right\rbrack \), then \( f\left( x\right) \equiv 0\left( {\;\operatorname{mod}\;p}\right) \) is solvable for infinitely many primes \( p \) .
Exercise 1.2.6 Let \( q \) be prime. Show that there are infinitely many primes \( p \) so that \( p \equiv 1\left( {\;\operatorname{mod}\;q}\right) \) .
We will next discuss integers of the form \( {F}_{n} = {2}^{{2}^{n}} + 1 \), which are called the Fermat numbers. Fermat made the conjecture that these integers are all primes. Indeed, \( {F}_{0} = 3,{F}_{1} = 5,{F}_{2} = {17},{F}_{3} = {257} \), and \( {F}_{4} = {65537} \) are primes but unfortunately, \( {F}_{5} = {2}^{{2}^{5}} + 1 \) is divisible by 641, and so \( {F}_{5} \) is composite. It is unknown if \( {F}_{n} \) represents infinitely many primes. It is also unknown if \( {F}_{n} \) is infinitely often composite.
Exercise 1.2.7 Show that \( {F}_{n} \) divides \( {F}_{m} - 2 \) if \( n \) is less than \( m \), and from this deduce that \( {F}_{n} \) and \( {F}_{m} \) are relatively prime if \( m \neq n \) .
Exercise 1.2.8 Consider the \( n \) th Fermat number \( {F}_{n} = {2}^{{2}^{n}} + 1 \) . Prove that every prime divisor of \( {F}_{n} \) is of the form \( {2}^{n + 1}k + 1 \) .
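The claims of Exercises 1.2.7 and 1.2.8, and Euler's factor 641 of \( {F}_{5} \) mentioned above, are small enough to verify directly (the script is our illustration):

```python
from math import gcd

F = [2 ** (2 ** n) + 1 for n in range(7)]  # F_0, ..., F_6

# Exercise 1.2.7: F_n | F_m - 2 for n < m, hence the F_n are pairwise coprime.
for n in range(7):
    for m in range(n + 1, 7):
        assert (F[m] - 2) % F[n] == 0
        assert gcd(F[n], F[m]) == 1

# F_5 is composite, as Euler found:
assert F[5] % 641 == 0

# Exercise 1.2.8: every prime divisor of F_n has the form 2^(n+1) k + 1;
# for the divisor 641 of F_5 this means 2^6 | 640.
assert (641 - 1) % 2 ** 6 == 0
```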
Exercise 1.2.9 Given a natural number \( n \), let \( n = {p}_{1}^{{\alpha }_{1}}\cdots {p}_{k}^{{\alpha }_{k}} \) be its unique factorization as a product of prime powers. We define the squarefree part of \( n \) , denoted \( S\left( n\right) \), to be the product of the primes \( {p}_{i} \) for which \( {\alpha }_{i} = 1 \) . Let \( f\left( x\right) \in \mathbb{Z}\left\lbrack x\right\rbrack \) be nonconstant and monic. Show that \( \liminf S\left( {f\left( n\right) }\right) \) is unbounded as \( n \) ranges over the integers.
## 1.3 The \( {ABC} \) Conjecture
Given a natural number \( n \), let \( n = {p}_{1}^{{\alpha }_{1}}\cdots {p}_{k}^{{\alpha }_{k}} \) be its unique factorization as a product of prime powers. Define the radical of \( n \), denoted \( \operatorname{rad}\left( n\right) \), to be the product \( {p}_{1}\cdots {p}_{k} \) .
In 1980, Masser and Oesterlé formulated the following conjecture. Suppose we have three mutually coprime integers \( A, B, C \) satisfying \( A + B = C \) . Given any \( \varepsilon > 0 \), it is conjectured that there is a constant \( \kappa \left( \varepsilon \right) \) such that
\[
\max \left( {\left| A\right| ,\left| B\right| ,\left| C\right| }\right) \leq \kappa \left( \varepsilon \right) {\left( \operatorname{rad}\left( ABC\right) \right) }^{1 + \varepsilon }.
\]
This is called the \( {ABC} \) Conjecture.
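The radical is easy to compute, and the strength of the conjecture is usually measured by the "quality" \( \log \max \left( {\left| A\right| ,\left| B\right| ,\left| C\right| }\right) /\log \operatorname{rad}\left( {ABC}\right) \) of a triple. The function below is ours; the triple \( 2 + {3}^{10} \cdot {109} = {23}^{5} \) is the well-known record example of quality about 1.63.

```python
from math import log

def rad(n):
    """Radical of n: the product of the distinct primes dividing n."""
    r, m, p = 1, abs(n), 2
    while p * p <= m:
        if m % p == 0:
            r *= p
            while m % p == 0:
                m //= p
        p += 1
    return r * m if m > 1 else r

A, B = 2, 3 ** 10 * 109
C = A + B
assert C == 23 ** 5                      # the famous high-quality triple
quality = log(C) / log(rad(A * B * C))   # rad(ABC) = 2 * 3 * 109 * 23
print(quality)
```

The conjecture asserts that for each \( \varepsilon > 0 \) only finitely many coprime triples exceed quality \( 1 + \varepsilon \) .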
Exercise 1.3.1 Assuming the \( {ABC} \) Conjecture, show that if \( {xyz} \neq 0 \) and \( {x}^{n} + \) \( {y}^{n} = {z}^{n} \) for three mutually coprime integers \( x, y \), and \( z \), then \( n \) is bounded.
[The assertion \( {x}^{n} + {y}^{n} = {z}^{n} \) for \( n \geq 3 \) implies \( {xyz} = 0 \) is the celebrated Fermat's Last Theorem conjectured in 1637 by the French mathematician Pierre de Fermat (1601-1665). After a succession of attacks beginning with Euler, Dirichlet, Legendre, Lamé, and Kummer, and culminating in the work of Frey, Serre, Ribet, and Wiles, the situation is now resolved, as of 1995. The \( {ABC} \) Conjecture is however still open.]
Exercise 1.3.2 Let \( p \) be an odd prime. Suppose that \( {2}^{n} \equiv 1\left( {\;\operatorname{mod}\;p}\right) \) and \( {2}^{n} ≢ 1\left( {\;\operatorname{mod}\;{p}^{2}}\right) \) . Show that \( {2}^{d} ≢ 1\left( {\;\operatorname{mod}\;{p}^{2}}\right) \) where \( d \) is the order of 2 \( \left( {\;\operatorname{mod}\;p}\right) \) .
Exercise 1.3.3 Assuming the \( {ABC} \) Conjecture, show that there are infinitely many primes \( p \) such that \( {2}^{p - 1} ≢ 1\left( {\;\operatorname{mod}\;{p}^{2}}\right) \) .
Exercise 1.3.4 Show that th
|
Exercise 1.1.8 Let \( {a}_{1},\ldots ,{a}_{n} \) for \( n \geq 2 \) be nonzero integers. Suppose there is a prime \( p \) and positive integer \( h \) such that \( {p}^{h} \mid {a}_{i} \) for some \( i \) and \( {p}^{h} \) does not divide \( {a}_{j} \) for all \( j \neq i \) .
Then show that
\[
S = \frac{1}{{a}_{1}} + \cdots + \frac{1}{{a}_{n}}
\]
is not an integer.
|
To show that \( S = \frac{1}{{a}_{1}} + \cdots + \frac{1}{{a}_{n}} \) is not an integer, we will use the given conditions about the prime \( p \) and the positive integer \( h \).
1. **Identify the condition**: There exists a prime \( p \) and a positive integer \( h \) such that \( {p}^{h} \mid {a}_{i} \) for some \( i \) and \( {p}^{h} \) does not divide \( {a}_{j} \) for all \( j \neq i \).
2. **Compare \( p \) -adic valuations**: For a nonzero integer \( m \), write \( {v}_{p}\left( m\right) \) for the exponent of the highest power of \( p \) dividing \( m \) . The hypothesis says exactly that \( {v}_{p}\left( {a}_{i}\right) \geq h \), while \( {v}_{p}\left( {a}_{j}\right) \leq h - 1 \) for every \( j \neq i \) .
3. **Clear denominators**: Let \( D = \operatorname{lcm}\left( {\left| {a}_{1}\right| ,\ldots ,\left| {a}_{n}\right| }\right) \) . Then
\[
DS = \frac{D}{{a}_{1}} + \cdots + \frac{D}{{a}_{n}}
\]
is a sum of integers. Moreover, \( {v}_{p}\left( D\right) = \mathop{\max }\limits_{j}{v}_{p}\left( {a}_{j}\right) = {v}_{p}\left( {a}_{i}\right) \geq h \), since every other \( {a}_{j} \) contributes a strictly smaller power of \( p \) .
4. **Locate the exceptional term**: For \( j \neq i \) we have \( {v}_{p}\left( {D/{a}_{j}}\right) = {v}_{p}\left( D\right) - {v}_{p}\left( {a}_{j}\right) \geq h - \left( {h - 1}\right) = 1 \), so \( p \) divides every term \( D/{a}_{j} \) with \( j \neq i \) . In contrast, \( {v}_{p}\left( {D/{a}_{i}}\right) = {v}_{p}\left( D\right) - {v}_{p}\left( {a}_{i}\right) = 0 \), so \( p \nmid D/{a}_{i} \) .
5. **Conclusion**: Hence \( {DS} \equiv D/{a}_{i}\not\equiv 0\left( {\;\operatorname{mod}\;p}\right) \), so \( p \nmid {DS} \) . But if \( S \) were an integer, then \( p \mid D \) would force \( p \mid {DS} \) . Therefore \( S \) is not an integer.
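A concrete instance can be checked with exact rational arithmetic. In the example below (our choice of data) take \( p = 2, h = 3 \) : then \( {2}^{3} \) divides 8 but none of 2, 3, 5.

```python
from fractions import Fraction

# Exercise 1.1.8 instance: a = (2, 3, 5, 8), p = 2, h = 3.
S = sum(Fraction(1, a) for a in (2, 3, 5, 8))
print(S)                   # 139/120, visibly not an integer
assert S.denominator > 1   # a Fraction in lowest terms is an integer
                           # iff its denominator is 1
```

The same mechanism shows, for instance, that the harmonic sums \( 1 + \frac{1}{2} + \cdots + \frac{1}{n} \) are never integers for \( n \geq 2 \) : apply the exercise with \( p = 2 \) and the largest power of 2 not exceeding \( n \) .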
|
Lemma 10.6.1 Let \( X \) be strongly regular with eigenvalues \( k > \theta > \tau \) . Suppose that \( x \) is an eigenvector of \( {A}_{1} \) with eigenvalue \( {\sigma }_{1} \) such that \( {\mathbf{1}}^{T}x = \) 0 . If \( {Bx} = 0 \), then \( {\sigma }_{1} \in \{ \theta ,\tau \} \), and if \( {Bx} \neq 0 \), then \( \tau < {\sigma }_{1} < \theta \) .
Proof. Since \( {\mathbf{1}}^{T}x = 0 \), we have
\[
\left( {{A}_{1}^{2} - \left( {a - c}\right) {A}_{1} - \left( {k - c}\right) I}\right) x = - {B}^{T}{Bx}
\]
and since \( X \) is strongly regular with eigenvalues \( k,\theta \), and \( \tau \), we have
\[
\left( {{A}_{1}^{2} - \left( {a - c}\right) {A}_{1} - \left( {k - c}\right) I}\right) x = \left( {{A}_{1} - {\theta I}}\right) \left( {{A}_{1} - {\tau I}}\right) x.
\]
Therefore, if \( x \) is an eigenvector of \( {A}_{1} \) with eigenvalue \( {\sigma }_{1} \) ,
\[
\left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) x = - {B}^{T}{Bx}.
\]
If \( {Bx} = 0 \), then \( \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) = 0 \) and \( {\sigma }_{1} \in \{ \theta ,\tau \} \) . If \( {Bx} \neq 0 \), then \( {B}^{T}{Bx} \neq 0 \), and so \( x \) is an eigenvector for the positive semidefinite matrix \( {B}^{T}B \) with eigenvalue \( - \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) \) . It follows that \( \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) < 0 \) , whence \( \tau < {\sigma }_{1} < \theta \) .
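The strong regularity identity \( {A}^{2} - \left( {a - c}\right) A - \left( {k - c}\right) I = {cJ} \) that drives this computation (equivalently, \( \left( {A - {\theta I}}\right) \left( {A - {\tau I}}\right) = {cJ} \) on the whole space) can be verified for a concrete example. The sketch below uses a standard labeling of the Petersen graph, a \( \left( {{10},3,0,1}\right) \) strongly regular graph with \( \theta = 1 \) and \( \tau = - 2 \) ; the construction is ours, not from the text.

```python
# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
n, k, a, c = 10, 3, 0, 1
edges = ([(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)])
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
# Check A^2 - (a - c) A - (k - c) I = c J entrywise; here a - c = -1 and
# k - c = 2, so this is A^2 + A - 2I = J.
for i in range(n):
    for j in range(n):
        assert A2[i][j] - (a - c) * A[i][j] - (k - c) * (i == j) == c
```

On vectors \( x \) with \( {\mathbf{1}}^{T}x = 0 \) the right-hand side \( {cJx} \) vanishes, which is precisely the quadratic relation the proof applies to the subconstituent.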
Either using similar arguments to those above or taking complements we obtain the following result.
Lemma 10.6.2 Let \( X \) be a strongly regular graph with eigenvalues \( k > \) \( \theta > \tau \) . Suppose that \( y \) is an eigenvector of \( {A}_{2} \) with eigenvalue \( {\sigma }_{2} \) such that \( {\mathbf{1}}^{T}y = 0 \) . If \( {B}^{T}y = 0 \), then \( {\sigma }_{2} \in \{ \theta ,\tau \} \), and if \( {B}^{T}y \neq 0 \), then \( \tau < {\sigma }_{2} < \theta \) . \( ▱ \)
Theorem 10.6.3 Let \( X \) be an \( \left( {n, k, a, c}\right) \) strongly regular graph. Then \( \sigma \) is a local eigenvalue of one subconstituent of \( X \) if and only if \( a - c - \sigma \) is a local eigenvalue of the other, with equal multiplicities.
Proof. Suppose that \( {\sigma }_{1} \) is a local eigenvalue of \( {A}_{1} \) with eigenvector \( x \) . Then, since \( {\mathbf{1}}^{T}x = 0 \) ,
\[
B{A}_{1} + {A}_{2}B = \left( {a - c}\right) B + {cJ}
\]
implies that
\[
{A}_{2}{Bx} = \left( {a - c}\right) {Bx} - {\sigma }_{1}{Bx} = \left( {a - c - {\sigma }_{1}}\right) {Bx}.
\]
Therefore, since \( {Bx} \neq 0 \), it is an eigenvector of \( {A}_{2} \) with eigenvalue \( a - c - {\sigma }_{1} \) . Since \( {\mathbf{1}}^{T}B = \left( {k - 1 - a}\right) {\mathbf{1}}^{T} \), we also have \( {\mathbf{1}}^{T}{Bx} = 0 \), and so \( a - c - {\sigma }_{1} \) is a local eigenvalue for \( {A}_{2} \) .
A similar argument shows that if \( {\sigma }_{2} \) is a local eigenvalue of \( {A}_{2} \) with eigenvector \( y \), then \( a - c - {\sigma }_{2} \) is a local eigenvalue of \( {A}_{1} \) with eigenvector \( {B}^{T}y \) .
Finally, note that the mapping \( B \) from the \( {\sigma }_{1} \) -eigenspace of \( {A}_{1} \) into the \( \left( {a - c - {\sigma }_{1}}\right) \) -eigenspace of \( {A}_{2} \) is injective and the mapping \( {B}^{T} \) from the \( \left( {a - c - {\sigma }_{1}}\right) \) -eigenspace of \( {A}_{2} \) into the \( {\sigma }_{1} \) -eigenspace of \( {A}_{1} \) is also injective. Therefore, the dimension of these two subspaces is equal.
These results also give us some information about the eigenvectors of \( A \) . Since the distance partition is equitable, the three eigenvectors of the quotient matrix yield three eigenvectors of \( A \) that are constant on \( u, V\left( {X}_{1}\right) \) , and \( V\left( {X}_{2}\right) \) . The remaining eigenvectors may all be taken to sum to zero on \( u, V\left( {X}_{1}\right) \), and \( V\left( {X}_{2}\right) \) . If \( x \) is an eigenvector of \( {A}_{1} \) with eigenvalue \( {\sigma }_{1} \)
that sums to zero on \( V\left( {X}_{1}\right) \), then define a vector \( z \) by
\[
z = \left( \begin{matrix} 0 \\ x \\ {\alpha Bx} \end{matrix}\right)
\]
We will show that for a suitable choice for \( \alpha \), the vector \( z \) is an eigenvector of \( A \) . If \( {Bx} = 0 \), then it is easy to see that \( z \) is an eigenvector of \( A \) with eigenvalue \( {\sigma }_{1} \), which must therefore be equal to either \( \theta \) or \( \tau \) .
If \( {Bx} \neq 0 \), then
\[
{Az} = \left( \begin{matrix} 0 \\ {\sigma }_{1}x + \alpha {B}^{T}{Bx} \\ {Bx} + \alpha {A}_{2}{Bx} \end{matrix}\right) = \left( \begin{matrix} 0 \\ \left( {{\sigma }_{1} - \alpha \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) }\right) x \\ {Bx} + \alpha {A}_{2}{Bx} \end{matrix}\right) .
\]
Now, \( {A}_{2}{Bx} = \left( {a - c - {\sigma }_{1}}\right) {Bx} \), and so by taking \( \alpha = {\left( {\sigma }_{1} - \tau \right) }^{-1} \) and recalling that \( \theta = a - c - \tau \), we deduce that
\[
{Az} = \left( \begin{matrix} 0 \\ {\theta x} \\ {\theta \alpha Bx} \end{matrix}\right)
\]
Therefore, \( z \) is an eigenvector of \( A \) with eigenvalue \( \theta \) . Taking \( \alpha = {\left( {\sigma }_{1} - \theta \right) }^{-1} \) yields an eigenvector of \( A \) with eigenvalue \( \tau \) .
We finish with a result that uses local eigenvalues to show that the Clebsch graph is unique.
Theorem 10.6.4 The Clebsch graph is the unique strongly regular graph with parameters \( \left( {{16},5,0,2}\right) \) .
Proof. Suppose that \( X \) is a \( \left( {{16},5,0,2}\right) \) strongly regular graph, which therefore has eigenvalues \( 5,1 \), and -3 . Let \( {X}_{2} \) denote the second subconstituent of \( X \) . This is a cubic graph on 10 vertices, and so has an eigenvalue 3 with eigenvector 1. All its other eigenvectors are orthogonal to 1. Since 0 is the only eigenvalue of the first subconstituent, the only other eigenvalues that \( {X}_{2} \) can have are \( 1, - 3 \), and the local eigenvalue -2 . Since -1 is not in this set, \( {X}_{2} \) cannot have \( {K}_{4} \) as a component, and so \( {X}_{2} \) is connected. This implies that its diameter is at least two; therefore, \( {X}_{2} \) has at least three eigenvalues. Since any three eigenvalues from \( \{ 3,1, - 3, - 2\} \) include 1 or -2, neither of whose negative is available, the spectrum of \( {X}_{2} \) is not symmetric about zero, and so \( {X}_{2} \) is not bipartite. Consequently, \( - 3 \) is not an eigenvalue of \( {X}_{2} \) . Therefore, \( {X}_{2} \) is a connected cubic graph with exactly the three eigenvalues 3, 1, and -2. By Lemma 10.2.1 it is strongly regular, and hence isomorphic to the Petersen graph. The neighbours in \( {X}_{2} \) of any fixed vertex of the first subconstituent form an independent set of size four in \( {X}_{2} \) . Because the Petersen graph has exactly five independent sets of size four, each vertex of the first subconstituent is adjacent to precisely one of these independent sets. Therefore, we conclude that \( X \) is uniquely determined by its parameters.
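The count of independent 4-sets in the Petersen graph used in this proof can be confirmed by brute force over all \( \binom{10}{4} = {210} \) candidate sets; the labeling of the Petersen graph below is our own.

```python
from itertools import combinations

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
edges = ([(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)])
adj = [[False] * 10 for _ in range(10)]
for u, v in edges:
    adj[u][v] = adj[v][u] = True

# Exactly five independent sets of size four, as asserted in the proof.
indep4 = [s for s in combinations(range(10), 4)
          if not any(adj[u][v] for u, v in combinations(s, 2))]
assert len(indep4) == 5
print(indep4)
```

Viewing the Petersen graph as the Kneser graph \( K\left( {5,2}\right) \) explains the count: maximum independent sets correspond to the five "stars" of 2-subsets through a fixed point of \( \{ 1,\ldots ,5\} \) .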
## 10.7 The Krein Bounds
This section is devoted to proving the following result, which gives inequalities between the parameters of a strongly regular graph. The bounds implied by these inequalities are known as the Krein bounds, as they apply to strongly regular graphs. (There are related inequalities for distance-regular graphs and, more generally, for association schemes.) The usual proof of these inequalities is much less elementary, and does not provide information about the cases where equality holds.
Theorem 10.7.1 Let \( X \) be a primitive \( \left( {n, k, a, c}\right) \) strongly regular graph, with eigenvalues \( k,\theta \), and \( \tau \) . Let \( {m}_{\theta } \) and \( {m}_{\tau } \) denote the multiplicities of \( \theta \) and \( \tau \), respectively. Then
\[
\theta {\tau }^{2} - 2{\theta }^{2}\tau - {\theta }^{2} - {k\theta } + k{\tau }^{2} + {2k\tau } \geq 0
\]
\[
{\theta }^{2}\tau - {2\theta }{\tau }^{2} - {\tau }^{2} - {k\tau } + k{\theta }^{2} + {2k\theta } \geq 0.
\]
If the first inequality is tight, then \( k \geq {m}_{\theta } \), and if the second is tight, then \( k \geq {m}_{\tau } \) . If either of the inequalities is tight, then one of the following is true:
(a) \( X \) is the 5-cycle \( {C}_{5} \) .
(b) Either \( X \) or its complement \( \bar{X} \) has all its first subconstituents empty, and all its second subconstituents strongly regular.
(c) All subconstituents of \( X \) are strongly regular.
Our proof is long, and somewhat indirect, but involves nothing deeper than an application of the Cauchy-Schwarz inequality. We break the argument into a number of lemmas. First, however, we introduce some notation which is used throughout this section. Let \( X \) be a primitive \( \left( {n, k, a, c}\right) \) strongly regular graph with eigenvalues \( k,\theta \), and \( \tau \), where we make no assumption concerning the signs of \( \theta \) and \( \tau \) (that is, either \( \theta \) or \( \tau \) may be the positive eigenvalue). Let \( u \) be an arbitrary vertex of \( X \) and let \( {X}_{1} \) and \( {X}_{2} \) be the first and second subconstituents relative to \( u \) . The adjacency matrix of \( {X}_{1} \) is denoted by \( {A}_{1} \) . We use \( m \) for \( {m}_{\theta } \) where needed in the proofs, but not the statements, of the series of lemmas.
Lemma 10.7.2 If \( k \geq {m}_{\theta } \), then \( \tau \) is an eigenvalue of the first subconstituent of \( X \) with multiplicity at least \( k - {m}_{\theta } \) .
Proof. Let \( U \) denote the space of functions on \( V\left( X\right) \) that sum to zero on each subconstituent of \( X \) relative to \( u \) . This space has dimension \( n - 3 \) . Let \( T \) be the space spanned by the eigenvectors of \( X \) with eigenvalue \( \tau \) that sum to zero on \( V\left( {X}_{1}\right) \) ; this has dimension \( n - m
|
Lemma 10.6.1 Let \( X \) be strongly regular with eigenvalues \( k > \theta > \tau \) . Suppose that \( x \) is an eigenvector of \( {A}_{1} \) with eigenvalue \( {\sigma }_{1} \) such that \( {\mathbf{1}}^{T}x = 0 \) . If \( {Bx} = 0 \), then \( {\sigma }_{1} \in \{ \theta ,\tau \} \), and if \( {Bx} \neq 0 \), then \( \tau < {\sigma }_{1} < \theta \) .
|
Since \( {\mathbf{1}}^{T}x = 0 \), we have
\[
\left( {{A}_{1}^{2} - \left( {a - c}\right) {A}_{1} - \left( {k - c}\right) I}\right) x = - {B}^{T}{Bx}
\]
and since \( X \) is strongly regular with eigenvalues \( k,\theta \), and \( \tau \), we have
\[
\left( {{A}_{1}^{2} - \left( {a - c}\right) {A}_{1} - \left( {k - c}\right) I}\right) x = \left( {{A}_{1} - {\theta I}}\right) \left( {{A}_{1} - {\tau I}}\right) x.
\]
Therefore, if \( x \) is an eigenvector of \( {A}_{1} \) with eigenvalue \( {\sigma }_{1} \) ,
\[
\left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) x = - {B}^{T}{Bx}.
\]
If \( {Bx} = 0 \), then \( \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) = 0 \) and \( {\sigma }_{1} \in \{ \theta ,\tau \} \) . If \( {Bx} \neq 0 \), then \( {B}^{T}{Bx} \neq 0 \), and so \( x \) is an eigenvector for the positive semidefinite matrix \( {B}^{T}B \) with eigenvalue \( - \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) \) . It follows that \( \left( {{\sigma }_{1} - \theta }\right) \left( {{\sigma }_{1} - \tau }\right) < 0 \), whence \( \tau < {\sigma }_{1} < \theta \) .
|
Theorem 10.32. Let \( A \in B\left( {{L}_{2}\left( {\mathbb{R}}^{m}\right) }\right) \) be a real \( {}^{3} \) positivity improving selfadjoint operator. Assume that \( \parallel A\parallel \) is an eigenvalue of \( A \) . Then the multiplicity of the eigenvalue \( \parallel A\parallel \) equals 1 and there is an \( f > 0 \) that spans the eigenspace \( N\left( {\parallel A\parallel - A}\right) \) .
Proof. Assume that \( f \neq 0 \) and \( {Af} = \parallel A\parallel f \) . Since \( A \) is real, we may assume that \( f \) is real (otherwise we could replace \( f \) by \( \operatorname{Re}f \) or \( \operatorname{Im}f \), because \( A\left( {\operatorname{Re}f}\right) = A\left( {f + {Kf}}\right) /2 = \left( {{Af} + {KAf}}\right) /2 = \left( {\parallel A\parallel f + K\parallel A\parallel f}\right) /2 = \) \( \parallel A\parallel \left( {\operatorname{Re}f}\right) \) and \( A\left( {\operatorname{Im}f}\right) = \parallel A\parallel \left( {\operatorname{Im}f}\right) ) \) . From the inequality \( \pm f \leq \left| f\right| \) it follows that \( \pm {Af} \leq A\left| f\right| \) . Therefore, \( \left| {Af}\right| \leq A\left| f\right| \), and thus
\[
\langle f,{Af}\rangle \leq \langle \left| f\right| ,\left| {Af}\right| \rangle \leq \langle \left| f\right|, A\left| f\right| \rangle .
\]
This implies that
\[
\parallel A\parallel \parallel f{\parallel }^{2} = \langle f,{Af}\rangle \leq \langle \left| f\right|, A\left| f\right| \rangle \leq \parallel A\parallel \parallel \left| f\right| {\parallel }^{2} = \parallel A\parallel \parallel f{\parallel }^{2},
\]
i.e., that
\[
\langle f,{Af}\rangle = \langle \left| f\right|, A\left| f\right| \rangle
\]
Let us define \( {f}_{ + } \) and \( {f}_{ - } \) by the equalities
\[
{f}_{ + }\left( x\right) = \max \{ 0, f\left( x\right) \} ,\;{f}_{ - } = {f}_{ + } - f.
\]
Then \( \left| f\right| = {f}_{ + } + {f}_{ - } \) . Consequently,
\[
\left\langle {{f}_{ + }, A{f}_{ - }}\right\rangle = \frac{1}{4}\{ \langle \left| f\right|, A\left| f\right| \rangle - \langle f,{Af}\rangle \} = 0.
\]
Hence we have \( {f}_{ + } = 0 \) or \( {f}_{ - } = 0 \), since \( {f}_{ + } \neq 0 \) and \( {f}_{ - } \neq 0 \) imply that \( A{f}_{ - } > 0 \), and thus that \( \left\langle {{f}_{ + }, A{f}_{ - }}\right\rangle \neq 0 \) . Consequently, we have proved that \( f \geq 0 \) or \( f \leq 0 \) . We can assume, without loss of generality, that \( f \geq 0 \) . Since \( f = \parallel A{\parallel }^{-1}{Af} \) and \( f \neq 0 \), it then follows that we even have \( f > 0 \), because \( A \) is positivity improving.
The theorem will be proved if we show that \( f \) spans the space \( N(\parallel A\parallel - \) \( A) \) . For every element \( g \) of \( N\left( {\parallel A\parallel - A}\right) \) the functions \( \operatorname{Re}g \) and \( \operatorname{Im}g \) do not change sign. Such an element can only be orthogonal to the positive element \( f \) if \( g = 0 \) . Therefore, \( N\left( {\parallel A\parallel - A}\right) = L\left( f\right) \) .
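The first equality in the display above, \( \left\langle {{f}_{ + }, A{f}_{ - }}\right\rangle = \frac{1}{4}\{ \langle \left| f\right|, A\left| f\right| \rangle - \langle f,{Af}\rangle \} \), is purely algebraic: it uses only \( f = {f}_{ + } - {f}_{ - } \), \( \left| f\right| = {f}_{ + } + {f}_{ - } \), and the symmetry of \( A \) . A finite-dimensional sanity check (the matrix and vector are arbitrary random test data, not from the text):

```python
import random

n = 6
random.seed(1)
# Random real symmetric matrix A and real vector f.
M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
A = [[(M[i][j] + M[j][i]) / 2 for j in range(n)] for i in range(n)]
f = [random.uniform(-1, 1) for _ in range(n)]

fp = [max(x, 0.0) for x in f]            # f_+
fm = [p - x for p, x in zip(fp, f)]      # f_- = f_+ - f
absf = [p + m for p, m in zip(fp, fm)]   # |f| = f_+ + f_-

def quad(u, v):
    """The bilinear form <u, A v>."""
    return sum(u[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

lhs = quad(fp, fm)
rhs = (quad(absf, absf) - quad(f, f)) / 4
assert abs(lhs - rhs) < 1e-12
```

Expanding both inner products, the difference equals \( 2\left\langle {{f}_{ + }, A{f}_{ - }}\right\rangle + 2\left\langle {{f}_{ - }, A{f}_{ + }}\right\rangle \), and the two terms coincide by symmetry, giving the factor \( \frac{1}{4} \) ; in the proof the right-hand side then vanishes by the extremal equality \( \langle f,{Af}\rangle = \langle \left| f\right|, A\left| f\right| \rangle \) .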
Theorem 10.33. Let \( S \) be defined as above with \( {b}_{j} = 0\left( {j = 1,2,\ldots, m}\right) \) , \( q \in {M}_{\rho ,\text{ loc }}\left( {\mathbb{R}}^{m}\right) \) and \( {q}_{ - } \in {M}_{\rho }\left( {\mathbb{R}}^{m}\right) \) for some \( \rho < 4 \) . Then \( S \) is bounded from below. If the lowest point of \( \sigma \left( S\right) \) is an eigenvalue, then it is simple.
\( {}^{3} \) Here "real" refers to the natural conjugation \( K \) on \( {L}_{2}\left( {\mathbb{R}}^{m}\right) \), \( \left( {Kf}\right) \left( x\right) = f{\left( x\right) }^{ * } \) (cf. Section 8.1, Example 1).

Proof. By Theorem 10.29(a) the operators \( S \) and \( {S}_{0} - {q}_{ - } \) are bounded from below. The lower bound of \( {S}_{0} - {q}_{ - } \) is, at the same time, a lower bound of the operators \( {S}_{n} \) and \( S - {Q}_{n} \) used in steps 2 and 3 . These operators therefore have a common lower bound, so that Theorem 9.18(b) can be applied.
First step: Let \( {S}_{0} \) be defined as above. Then the operator \( \exp \left( {-t{S}_{0}}\right) \) is positivity improving for all \( t > 0 \) .
Proof of the first step. With \( {\vartheta }_{t}\left( x\right) = \exp \left( {-t{\left| x\right| }^{2}}\right) \) for \( x \in {\mathbb{R}}^{m} \) we have
\[
\exp \left( {-t{S}_{0}}\right) = {F}^{-1}{M}_{{\vartheta }_{t}}F
\]
where \( {M}_{{\vartheta }_{t}} \) denotes the operator of multiplication by \( {\vartheta }_{t} \) . Hence, by the convolution theorem (Theorem 10.7) the operator \( \exp \left( {-t{S}_{0}}\right) \) is equal to the operator of convolution by the function \( {F}^{-1}{\vartheta }_{t} \) . With \( \vartheta \left( x\right) = \) \( \exp \left( {-{\left| x\right| }^{2}/2}\right) \) we obtain from Theorem 10.2 that
\[
\left( {{F}^{-1}{\vartheta }_{t}}\right) \left( x\right) = {\left( 2\pi \right) }^{-m/2}\int {\mathrm{e}}^{\mathrm{i}{xy}}{\vartheta }_{t}\left( y\right) \mathrm{d}y
\]
\[
= {\left( 2\pi \right) }^{-m/2}\int {\mathrm{e}}^{\mathrm{i}{xy}}\vartheta \left( {\sqrt{2t}y}\right) \mathrm{d}y
\]
\[
= {\left( 2t\right) }^{-m/2}{\left( 2\pi \right) }^{-m/2}\int \exp \left( {\mathrm{i}\frac{x}{\sqrt{2t}}z}\right) \vartheta \left( z\right) \mathrm{d}z
\]
\[
= {\left( 2t\right) }^{-m/2}\left( {{F}^{-1}\vartheta }\right) \left( \frac{x}{\sqrt{2t}}\right) = {\left( 2t\right) }^{-m/2}\vartheta \left( \frac{x}{\sqrt{2t}}\right) > 0
\]
for all \( x \in {\mathbb{R}}^{m} \) and \( t > 0 \) . Since the operator of convolution by a positive function is obviously positivity improving, the assertion follows.
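The computation above can be spot-checked numerically for \( m = 1 \). The following illustration (NumPy assumed; not part of the original text) approximates \( {F}^{-1}{\vartheta }_{t} \) by a Riemann sum and compares it with the closed form \( {\left( 2t\right) }^{-1/2}\exp \left( {-{x}^{2}/{4t}}\right) \), which is strictly positive.

```python
import numpy as np

t = 0.7
y = np.linspace(-30, 30, 200001)      # fine grid; the integrand decays fast
dy = y[1] - y[0]

for x in (0.0, 0.5, -1.3, 2.0):
    # Riemann sum for (2*pi)^(-1/2) * integral of exp(i*x*y) * exp(-t*y^2) dy
    lhs = (2 * np.pi) ** -0.5 * np.sum(np.exp(1j * x * y - t * y**2)) * dy
    rhs = (2 * t) ** -0.5 * np.exp(-x**2 / (4 * t))
    assert abs(lhs - rhs) < 1e-8
    assert rhs > 0                    # the convolution kernel is positive
```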
Second step: \( \exp \left( {-{tS}}\right) \) is positivity preserving for all \( t \geq 0 \) .
Proof of the second step. For every \( n \in \mathbb{N} \) let \( {q}_{n} \) be defined by the equality
\[
{q}_{n}\left( x\right) = \left\{ \begin{array}{lll} q\left( x\right) , & \text{ if } & \left| {q\left( x\right) }\right| \leq n \\ 0, & \text{ if } & \left| {q\left( x\right) }\right| > n \end{array}\right.
\]
Let \( {S}_{n} \) be the operator defined by \( {q}_{n} \) instead of \( q \) . By Theorem 7.41
\[
\exp \left( {-t{S}_{n}}\right) = s - \mathop{\lim }\limits_{{k \rightarrow \infty }}{\left\lbrack \exp \left( -\frac{t}{k}{S}_{0}\right) \exp \left( -\frac{t}{k}{Q}_{n}\right) \right\rbrack }^{k}
\]
for all \( t \geq 0 \), where \( {Q}_{n} \) denotes the operator of multiplication by \( {q}_{n} \) . Since every term on the right side is positivity preserving, it follows from this that \( \exp \left( {-t{S}_{n}}\right) \) is also positivity preserving. Since by Theorem 9.16(i) (with \( \left. {{D}_{0} = {C}_{0}^{\infty }\left( {\mathbb{R}}^{m}\right) }\right) \) we have \( {\left( \mathrm{i} - {S}_{n}\right) }^{-1} \rightarrow {\left( \mathrm{i} - S\right) }^{-1} \), it follows by Theorem 9.18(b) that
\[
\exp \left( {-t{S}_{n}}\right) \overset{s}{ \rightarrow }\exp \left( {-{tS}}\right) \;\text{ for all }t \geq 0.
\]
Therefore, \( \exp \left( {-{tS}}\right) \) is also positivity preserving for all \( t \geq 0 \) .
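The Trotter-type product formula used above has a transparent finite-dimensional analogue, sketched below (NumPy assumed; names and matrices are mine). Here \( -{S}_{0} \) is a matrix with nonnegative off-diagonal entries, so \( \exp \left( {-t{S}_{0}}\right) \) has nonnegative entries, \( Q \) is a diagonal "potential", and the Trotter products converge to \( \exp \left( {-t\left( {{S}_{0} + Q}\right) }\right) \), which is therefore positivity preserving as well.

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via its Taylor series (adequate for small matrices)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for i in range(1, terms):
        term = term @ M / i
        result = result + term
    return result

S0 = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  2.0]])      # discrete-Laplacian-like matrix
Q = np.diag([0.3, -0.5, 1.1])
t = 1.0

exact = expm(-t * (S0 + Q))
for k in (1, 10, 100, 1000):
    step = expm(-t / k * S0) @ expm(-t / k * Q)
    trotter = np.linalg.matrix_power(step, k)
    assert np.all(trotter >= 0)          # each factor preserves positivity
# The k = 1000 product is already close to the exact semigroup.
assert np.abs(trotter - exact).max() < 1e-2
```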
Third step: If \( f \geq 0, f \neq 0, g \geq 0 \) and \( g \neq 0 \), then there exists a \( t \geq 0 \) such that \( \langle f,\exp \left( {-{tS}}\right) g\rangle > 0 \) .
Proof of the third step. It is sufficient to prove that if \( f \geq 0 \) and \( f \neq 0 \) , then
\[
K\left( f\right) = \left\{ {g \in {L}_{2}\left( {\mathbb{R}}^{m}\right) : g \geq 0,\langle g,\exp \left( {-{tS}}\right) f\rangle = 0\text{ for all }t \geq 0}\right\}
\]
contains only the zero element. The set \( K\left( f\right) \) is closed. It is mapped into itself by \( \exp \left( {-{sS}}\right) \) for \( s \geq 0 \), since \( s, t \geq 0 \) and \( g \in K\left( f\right) \) imply that \( \langle \exp \left( {-{sS}}\right) g,\exp \left( {-{tS}}\right) f\rangle = \langle g,\exp \left\lbrack {-\left( {s + t}\right) S}\right\rbrack f\rangle = 0 \) . This then holds for \( \exp \left( {s{Q}_{n}}\right) \), as well: It follows from \( f \geq 0, g \geq 0,\langle g,\exp \left( {-{tS}}\right) f\rangle = 0 \) and \( \exp \left( {-{tS}}\right) f \geq 0 \) (cf. step 2) that \( g\left( x\right) \left\lbrack {\exp \left( {-{tS}}\right) f}\right\rbrack \left( x\right) = 0 \) almost everywhere; we also have then that \( \left\langle {\exp \left( {s{Q}_{n}}\right) g,\exp \left( {-{tS}}\right) f}\right\rangle = 0 \) . Since
\[
\exp \left( {-t\left( {S - {Q}_{n}}\right) }\right) = s - \mathop{\lim }\limits_{{k \rightarrow \infty }}{\left\lbrack \exp \left( -\frac{t}{k}S\right) \exp \left( \frac{t}{k}{Q}_{n}\right) \right\rbrack }^{k},
\]
the operator \( \exp \left( {-t\left( {S - {Q}_{n}}\right) }\right) \) also maps the closed set \( K\left( f\right) \) into itself. Since, moreover,
\[
\exp \left( {-t\left( {S - {Q}_{n}}\right) }\right) \overset{s}{ \rightarrow }\exp \left( {-t{S}_{0}}\right) \;\text{ for all }\;t \geq 0,
\]
this follows also for \( \exp \left( {-t{S}_{0}}\right) \) . If \( g \in K\left( f\right) \), then we therefore have \( \left\langle {g,\exp \left( {-t{S}_{0}}\right) f}\right\rangle = 0 \) for all \( t \geq 0 \) . Because \( \exp \left( {-t{S}_{0}}\right) \) is positivity improving, it follows from this that \( g = 0 \) .
Fourth step: If \( \lambda \in \mathbb{R} \) is smaller than the lower bound of \( S \), then \( {\left( S - \lambda \right) }^{-1} \) is positivity improving.
Proof of the fourth step. If \( \gamma \) is the lower bound of \( S \), then
\[
\begin{Vmatrix}{\mathrm{e}}^{-t\left( {S - \lambda }\right) }\end{Vmatrix} \leq {\mathrm{e}}^{-t\left( {\gamma - \lambda }\right) }\text{ for }t \geq 0.
\]
Consequently, the following integrals exist. For all \( f, g \in {L}_{2}\left( {\mathbb{R}}^{m}\right) \)
\[
{\int }_{0}^{\infty }{\mathrm{e}}^{\lambda t}\left\langle {g,{\mathrm{e}}^{-{tS}}f}\right\rangle \mathrm{d}t = {\int }_{0}^{\infty }\left\langle {g,{\mathrm{e}}^{-t\left( {S - \lambda }\right) }f}\right\rangle \mathrm{d}t
\]
\[
= {\int }_{0}^{\infty }{\int }_{\lbrack \lambda ,\infty )}{\mathrm{e}}^{-t\left( {s - \lambda }\right) }\mathrm{d}\left\langle {g, E\left( s\right) f}\right\rangle \mathrm{d}t = {\int }_{\lbrack \lambda ,\infty )}{\left( s - \lambda \right) }^{-1}\mathrm{\;d}\left\langle {g, E\left( s\right) f}\right\rangle = \left\langle {g,{\left( S - \lambda \right) }^{-1}f}\right\rangle ,
\]
where \( E \) denotes the spectral family of \( S \) . If now \( f \geq 0, f \neq 0, g \geq 0 \) and \( g \neq 0 \), then by the third step the continuous nonnegative integrand \( \left\langle {g,{\mathrm{e}}^{-{tS}}f}\right\rangle \) is positive for some \( t \geq 0 \), so the integral on the left is positive. Hence \( \left\langle {g,{\left( S - \lambda \right) }^{-1}f}\right\rangle > 0 \), i.e., \( {\left( S - \lambda \right) }^{-1} \) is positivity improving.
|
Theorem 10.32. Let \( A \in B\left( {{L}_{2}\left( {\mathbb{R}}^{m}\right) }\right) \) be a real \( {}^{3} \) positivity improving selfadjoint operator. Assume that \( \parallel A\parallel \) is an eigenvalue of \( A \) . Then the multiplicity of the eigenvalue \( \parallel A\parallel \) equals 1 and there is an \( f > 0 \) that spans the eigenspace \( N\left( {\parallel A\parallel - A}\right) \) .
|
Assume that \( f \neq 0 \) and \( {Af} = \parallel A\parallel f \) . Since \( A \) is real, we may assume that \( f \) is real (otherwise we could replace \( f \) by \( \operatorname{Re}f \) or \( \operatorname{Im}f \), because \( A\left( {\operatorname{Re}f}\right) = A\left( {f + {Kf}}\right) /2 = \left( {{Af} + {KAf}}\right) /2 = \left( {\parallel A\parallel f + K\parallel A\parallel f}\right) /2 = \parallel A\parallel \left( {\operatorname{Re}f}\right) \) and \( A\left( {\operatorname{Im}f}\right) = \parallel A\parallel \left( {\operatorname{Im}f}\right) ) \) . From the inequality \( \pm f \leq \left| f\right| \) it follows that \( \pm {Af} \leq A\left| f\right| \) . Therefore, \( \left| {Af}\right| \leq A\left| f\right| \), and thus
\[
\langle f,{Af}\rangle \leq \langle \left| f\right| ,\left| {Af}\right| \rangle \leq \langle \left| f\right|, A\left| f\right| \rangle .
\]
This implies that
\[
\parallel A\parallel \parallel f{\parallel }^{2} = \langle f,{Af}\rangle \leq \langle \left| f\right|, A\left| f\right| \rangle \leq \parallel A\parallel \parallel \left| f\right| {\parallel }^{2} = \parallel A\parallel \parallel f{\parallel }^{2},
\]
i.e., that
\[
\langle f,{Af}\rangle = \langle \left| f\right|, A\left| f\right| \rangle .
\]
Let us define \( {f}_{ + } \) and \( {f}_{ - } \) by the equalities
\[
{f}_{ + }\left( x\right) = \max \left\{ {0, f\left( x\right) }\right\} \;\text{ and }\;{f}_{ - }\left( x\right) = \max \left\{ {0, - f\left( x\right) }\right\} .
\]
|
Theorem 8.21 METRIC TSP admits a polynomial-time 2-approximation algorithm.
Proof. Applying the Jarník-Prim Algorithm (6.9), we first find a minimum-weight spanning tree \( T \) of \( G \) . Suppose that \( C \) is a minimum-weight Hamilton cycle of \( G \) . By deleting any edge of \( C \), we obtain a Hamilton path \( P \) of \( G \) . Because \( P \) is a spanning tree of \( G \) and \( T \) is a spanning tree of minimum weight,
\[
w\left( T\right) \leq w\left( P\right) \leq w\left( C\right)
\]
We now duplicate each edge of \( T \), thereby obtaining a connected even graph \( H \) with \( V\left( H\right) = V\left( G\right) \) and \( w\left( H\right) = {2w}\left( T\right) \) . Note that this graph \( H \) is not even a subgraph of \( G \), let alone a Hamilton cycle. The idea is to transform \( H \) into a Hamilton cycle of \( G \), and to do so without increasing its weight. More precisely, we construct a sequence \( {H}_{0},{H}_{1},\ldots ,{H}_{n - 2} \) of connected even graphs, each with vertex set \( V\left( G\right) \), such that \( {H}_{0} = H,{H}_{n - 2} \) is a Hamilton cycle of \( G \), and \( w\left( {H}_{i + 1}\right) \leq w\left( {H}_{i}\right) \) , \( 0 \leq i \leq n - 3 \) . We do so by reducing the number of edges, one at a time, as follows.
Let \( {C}_{i} \) be an Euler tour of \( {H}_{i} \), where \( i < n - 2 \) . The graph \( {H}_{i} \) has \( 2\left( {n - 2}\right) - i > n \) edges, and thus has a vertex \( v \) of degree at least four. Let \( x{e}_{1}v{e}_{2}y \) be a segment of the tour \( {C}_{i} \) ; it will follow by induction that \( x \neq y \) . We replace the edges \( {e}_{1} \) and \( {e}_{2} \) of \( {C}_{i} \) by a new edge \( e \) of weight \( w\left( {xy}\right) \) linking \( x \) and \( y \), thereby bypassing \( v \) and modifying \( {C}_{i} \) to an Euler tour \( {C}_{i + 1} \) of \( {H}_{i + 1} \mathrel{\text{:=}} \left( {{H}_{i} \smallsetminus \left\{ {{e}_{1},{e}_{2}}\right\} }\right) + e \) . By the triangle inequality (8.3),
\[
w\left( {H}_{i + 1}\right) = w\left( {H}_{i}\right) - w\left( {e}_{1}\right) - w\left( {e}_{2}\right) + w\left( e\right) \leq w\left( {H}_{i}\right)
\]
The final graph \( {H}_{n - 2} \), being a connected even graph on \( n \) vertices and \( n \) edges, is a Hamilton cycle of \( G \) . Furthermore,
\[
w\left( {H}_{n - 2}\right) \leq w\left( {H}_{0}\right) = {2w}\left( T\right) \leq {2w}\left( C\right)
\]
The relevance of minimum-weight spanning trees to the Travelling Salesman Problem was first observed by Kruskal (1956). A \( \frac{3}{2} \) -approximation algorithm for METRIC TSP was found by Christofides (1976). This algorithm makes use of a polynomial-time algorithm for weighted matchings (discussed in Chapter 16; see Exercise 16.4.24). For other approaches to the Travelling Salesman Problem, see Jünger et al. (1995).
The situation with respect to the general Travelling Salesman Problem, in which the weights are not subject to the triangle inequality, is dramatically different: for any integer \( t \geq 2 \), there cannot exist a polynomial-time \( t \) -approximation algorithm for solving TSP unless \( \mathcal{P} = \mathcal{N}\mathcal{P} \) (Exercise 8.4.4). The book by Vazirani (2001) treats the topic of approximation algorithms in general. For the state of the art regarding computational aspects of TSP, we refer the reader to Applegate et al. (2007).
## Exercises
\( \star \) 8.4.1 Describe a polynomial-time 2-approximation algorithm for MAX CUT (Problem 8.19).
## 8.4.2 Euclidean TSP
The Euclidean Travelling Salesman Problem is the special case of METRIC TSP in which the vertices of the graph are points in the plane, the edges are straight-line segments linking these points, and the weight of an edge is its length. Show that, in any such graph, the minimum-weight Hamilton cycles are crossing-free (that is, no two of their edges cross).
8.4.3 Show that METRIC TSP is \( \mathcal{{NP}} \) -hard.
\( \star {8.4.4} \)
a) Let \( G \) be a simple graph with \( n \geq 3 \), and let \( t \) be a positive integer. Consider the weighted complete graph \( \left( {K, w}\right) \), where \( K \mathrel{\text{:=}} G \cup \bar{G} \), in which \( w\left( e\right) \mathrel{\text{:=}} 1 \) if
\( e \in E\left( G\right) \) and \( w\left( e\right) \mathrel{\text{:=}} \left( {t - 1}\right) n + 2 \) if \( e \in E\left( \bar{G}\right) \) . Show that:
i) \( \left( {K, w}\right) \) has a Hamilton cycle of weight \( n \) if and only if \( G \) has a Hamilton cycle,
ii) any Hamilton cycle of \( \left( {K, w}\right) \) of weight greater than \( n \) has weight at least \( {tn} + 1 \) .
b) Deduce that, unless \( \mathcal{P} = \mathcal{N}\mathcal{P} \), there cannot exist a polynomial-time \( t \) - approximation algorithm for solving TSP. 
## 8.5 Greedy Heuristics
A heuristic is a computational procedure, generally based on some simple rule, which intuition tells one should usually yield a good approximate solution to the problem at hand.
One particularly simple and natural class of heuristics is the class of greedy heuristics. Informally, a greedy heuristic is a procedure which selects the best current option at each stage, without regard to future consequences. As can be imagined, such an approach rarely leads to an optimal solution in each instance. However, there are cases in which the greedy approach does indeed work. In such cases, we call the procedure a greedy algorithm. The following is a prototypical example of such an algorithm.
## The Borüvka-Kruskal Algorithm
The Jarník-Prim algorithm for the Minimum-Weight Spanning Tree Problem, described in Section 6.2, starts with the root and determines a nested sequence of trees, terminating with a minimum-weight spanning tree. Another algorithm for this problem, due to Borüvka (1926a, b) and, independently, Kruskal (1956), starts with the empty spanning subgraph and finds a nested sequence of forests, terminating with an optimal tree. This sequence is constructed by adding edges, one at a time, in such a way that the edge added at each stage is one of minimum weight, subject to the condition that the resulting subgraph is still a forest.

Algorithm 8.22 The Borüvka-Kruskal Algorithm
INPUT: a weighted connected graph \( G = \left( {G, w}\right) \)
Output: an optimal tree \( T = \left( {V, F}\right) \) of \( G \), and its weight \( w\left( F\right) \)
1: set \( F \mathrel{\text{:=}} \varnothing, w\left( F\right) \mathrel{\text{:=}} 0 \) ( \( F \) denotes the edge set of the current forest)
2: while there is an edge \( e \in E \smallsetminus F \) such that \( F \cup \{ e\} \) is the edge set of a forest do
3: choose such an edge \( e \) of minimum weight
4: replace \( F \) by \( F \cup \{ e\} \) and \( w\left( F\right) \) by \( w\left( F\right) + w\left( e\right) \)
5: end while
6: return \( \left( {\left( {V, F}\right), w\left( F\right) }\right) \)
Because the graph \( G \) is assumed to be connected, the forest \( \left( {V, F}\right) \) returned by the Borüvka-Kruskal Algorithm is a spanning tree of \( G \) . We call it a Borüvka-Kruskal tree. The construction of such a tree in the electric grid graph of Section 6.2 is illustrated in Figure 8.3. As before, the edges are numbered according to the order in which they are added. Observe that this tree is identical to the one returned by the Jarník-Prim Algorithm (even though its edges are selected in a different order). This is because all the edge weights in the electric grid graph happen to be distinct (see Exercise 6.2.1).

Fig. 8.3. An optimal tree returned by the Borüvka-Kruskal Algorithm
In order to implement the Borüvka-Kruskal Algorithm efficiently, one needs to be able to check easily whether a candidate edge links vertices in different components of the forest. This can be achieved by colouring vertices in the same component by the same colour and vertices in different components by distinct colours. It then suffices to check that the ends of the edge have different colours. Once the edge has been added to the forest, all the vertices in one of the two merged components are recoloured with the colour of the other component. We leave the details as an exercise (Exercise 8.5.1).
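The colouring scheme just described can be sketched as follows (an illustrative implementation, not from the text; variable names are mine). Recolouring the smaller of the two merged components keeps the total recolouring work low.

```python
def boruvka_kruskal(vertices, edges):
    """Greedy minimum-weight spanning tree; edges is a list of (u, v, w)."""
    colour = {v: v for v in vertices}          # initially all colours distinct
    classes = {v: [v] for v in vertices}       # vertices of each colour class
    forest, weight = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        cu, cv = colour[u], colour[v]
        if cu == cv:
            continue                           # same component: would close a cycle
        if len(classes[cu]) < len(classes[cv]):
            cu, cv = cv, cu                    # recolour the smaller class
        for x in classes[cv]:
            colour[x] = cu
        classes[cu].extend(classes.pop(cv))
        forest.append((u, v))
        weight += w
    return forest, weight

edges = [(0, 1, 4), (0, 2, 1), (1, 2, 2), (1, 3, 5), (2, 3, 8), (3, 4, 3)]
tree, w = boruvka_kruskal(range(5), edges)
assert len(tree) == 4      # a spanning tree on 5 vertices
assert w == 11             # edges of weights 1, 2, 3, 5 are selected
```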
The following theorem shows that the Borüvka-Kruskal Algorithm runs correctly. Its proof resembles that of Theorem 6.10, and we leave it to the reader (Exercise 8.5.2).
Theorem 8.23 Every Borüvka-Kruskal tree is an optimal tree.
The problem of finding a maximum-weight spanning tree of a connected graph can be solved by the same approach; at each stage, instead of picking an edge of minimum weight subject to the condition that the resulting subgraph remains a forest, we pick one of maximum weight subject to the same condition (see Exercise 8.5.3). The origins of the Borüvka-Kruskal Algorithm are recounted in Nešetřil et al. (2001) and Kruskal (1997).
## Independence Systems
One can define a natural family of greedy heuristics which includes the Borüvka-Kruskal Algorithm in the framework of set systems.
A set system \( \left( {V,\mathcal{F}}\right) \) is called an independence system on \( V \) if \( \mathcal{F} \) is nonempty and, for any member \( F \) of \( \mathcal{F} \), all subsets of \( F \) also belong to \( \mathcal{F} \) . The members of \( \mathcal{F} \) are then referred to as independent sets and their maximal elements as bases. (The independent sets of a matroid, defined in Section 4.4, form an independence system.)
Many independence systems can be defined on graphs. For example, if \( G = \) \( \left( {V, E}\right) \) is a connected graph, we may define an independence system on \( V \) by taking as independent sets the cliques of \( G \) (including the empty set). Likewise, we may define an independence system on \( E \)
|
Corollary 11.3.2. There exists a point \( x \in M \) such that \( G \cdot x \) is closed in \( M \) .

Proof. Let \( y \in M \) and let \( Y \) be the closure of \( G \cdot y \) . Then \( G \cdot y \) is open in \( Y \), by the argument in the proof of Theorem 11.3.1, and hence \( Z = Y - G \cdot y \) is closed in \( Y \) . Thus \( Z \) is quasiprojective. Furthermore, \( \dim Z < \dim Y \) by Theorem A.1.19, and \( Z \) is a union of orbits. This implies that an orbit of minimal dimension is closed.
Here is a converse to Theorem 11.3.1.
Theorem 11.3.3. Let \( H \) be a closed subgroup of a linear algebraic group \( G \) .
1. There exist a regular action of \( G \) on \( \mathbb{P}\left( V\right) \) and a point \( {x}_{0} \in \mathbb{P}\left( V\right) \) such that \( H \) is the stabilizer of \( {x}_{0} \) . The map \( g \mapsto g \cdot {x}_{0} \) is a bijection from the coset space \( G/H \) to the orbit \( G \cdot {x}_{0} \) . This map endows the set \( G/H \) with a structure of a quasiprojective variety that (up to regular isomorphism) is independent of the choices made.
2. The quotient map from \( G \) to \( G/H \) is regular.
3. If \( G \) acts algebraically on a quasiprojective algebraic set \( M \) and \( x \) is a point of \( M \) such that \( H \subset {G}_{x} \), then the map \( {gH} \mapsto g \cdot x \) from \( G/H \) to the orbit \( G \cdot x \) is regular.
Proof. The first assertion in (1) follows from Theorem 11.1.13. The independence of choices and the proofs of (2) and (3) follow by arguments similar to the proof of Theorem 11.1.15, taking into account the validity of Theorem A.2.9 for projective algebraic sets (cf. the remarks at the end of Section A.4.3).
## 11.3.2 Flag Manifolds
Let \( V \) be a finite-dimensional complex vector space, and let \( \mathop{\bigwedge }\limits^{p}V \) be the \( p \) th exterior power of \( V \) . We call an element of this space a \( p \) -vector. Given a \( p \) -vector \( u \), we define a linear map
\[
{T}_{u} : V \rightarrow \mathop{\bigwedge }\limits^{{p + 1}}V
\]
by \( {T}_{u}v = u \land v \) for \( v \in V \) . Set
\[
V\left( u\right) = \{ v \in V : u \land v = 0\} = \operatorname{Ker}\left( {T}_{u}\right)
\]
(the annihilator of \( u \) in \( V \) ). The nonzero \( p \) -vectors of the form \( {v}_{1} \land \cdots \land {v}_{p} \), with \( {v}_{i} \in V \), are called decomposable.
Lemma 11.3.4. Let \( \dim V = n \) .
1. Let \( 0 \neq u \in \mathop{\bigwedge }\limits^{p}V \) . Then \( \dim V\left( u\right) \leq p \) and \( \operatorname{Rank}\left( {T}_{u}\right) \geq n - p \) . Furthermore, \( \operatorname{Rank}\left( {T}_{u}\right) = n - p \) if and only if \( u \) is decomposable.
2. Suppose \( u = {v}_{1} \land \cdots \land {v}_{p} \) is decomposable. Then \( V\left( u\right) = \operatorname{Span}\left\{ {{v}_{1},\ldots ,{v}_{p}}\right\} \) . Furthermore, if \( V\left( u\right) = V\left( w\right) \) then \( w = {cu} \) for some \( c \in {\mathbb{C}}^{ \times } \) . Hence the subspace \( V\left( u\right) \subset V \) determines the point \( \left\lbrack u\right\rbrack \in \mathbb{P}\left( {\mathop{\bigwedge }\limits^{p}V}\right) \) .
3. Let \( 0 < p < l < n \) . Suppose \( 0 \neq u \in \mathop{\bigwedge }\limits^{p}V \) and \( 0 \neq w \in \mathop{\bigwedge }\limits^{l}V \) are decomposable. Then \( V\left( u\right) \subset V\left( w\right) \) if and only if \( \operatorname{Rank}\left( {{T}_{u} \oplus {T}_{w}}\right) \) is a minimum, namely \( n - p \) .
Proof. Let \( \left\{ {{v}_{1},\ldots ,{v}_{m}}\right\} \) be a basis for \( V\left( u\right) \) . We complete it to a basis for \( V \), and we write
\[
u = \mathop{\sum }\limits_{J}{c}_{J}{v}_{J}
\]
where \( {v}_{J} = {v}_{{j}_{1}} \land \cdots \land {v}_{{j}_{p}} \) for \( J \) a \( p \) -tuple with \( {j}_{1} < \cdots < {j}_{p} \) and \( {c}_{J} \in \mathbb{C} \) . When \( 1 \leq j \leq m \) we have
\[
0 = u \land {v}_{j} = \mathop{\sum }\limits_{J}{c}_{J}{v}_{J} \land {v}_{j}
\]
But \( {v}_{J} \land {v}_{j} = 0 \) if \( j \) occurs in \( J \), whereas for fixed \( j \) the set
\[
\left\{ {{v}_{J} \land {v}_{j} : j \notin J,\left| J\right| = p}\right\}
\]
is linearly independent. Hence \( {c}_{J} \neq 0 \) implies that \( J \) includes all the indices \( j = \) \( 1,2,\ldots, m \) . In particular, \( m \leq p \) . If \( m = p \) then \( {c}_{J} \neq 0 \) implies that \( J = \left( {1,\ldots, p}\right) \), and hence \( u = c{v}_{1} \land \cdots \land {v}_{p} \) . This proves parts (1) and (2). Part (3) then follows from the fact that \( V\left( u\right) \subset V\left( w\right) \) if and only if \( \operatorname{Ker}\left( {{T}_{u} \oplus {T}_{w}}\right) = \operatorname{Ker}\left( {T}_{u}\right) \) .
Denote the set of all \( p \) -dimensional subspaces of \( V \) by \( {\operatorname{Grass}}_{p}\left( V\right) \) (the \( p \) th Grass-mannian manifold). Using part (2) of Lemma 11.3.4, we identify \( {\operatorname{Grass}}_{p}\left( V\right) \) with the subset of the projective space \( \mathbb{P}\left( {\mathop{\bigwedge }\limits^{p}V}\right) \) corresponding to the decomposable \( p \) - vectors.
Proposition 11.3.5. \( {\operatorname{Grass}}_{p}\left( V\right) \) is an irreducible projective algebraic set.
Proof. We use the notation of Lemma 11.3.4. If \( u \) is a \( p \) -vector, then \( \operatorname{Rank}\left( {T}_{u}\right) = \) \( n - \dim V\left( u\right) \geq n - p \) . Hence the \( p \) -vectors \( u \neq 0 \) with \( \dim V\left( u\right) = p \) are those for which all minors of size \( n - p + 1 \) in \( {T}_{u} \) vanish. These minors are homogeneous polynomials in the components of \( u \) (relative to a fixed basis \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \) for \( V \) ), so we see that the set of decomposable \( p \) -vectors is the zero set of a family of homogeneous polynomials. Hence \( {\operatorname{Grass}}_{p}\left( V\right) \) is a closed subset of \( \mathbb{P}\left( {\mathop{\bigwedge }\limits^{p}V}\right) \) . We map
\[
\mathbf{{GL}}\left( V\right) \rightarrow {\operatorname{Grass}}_{p}\left( V\right) \;\text{ by }\;g \mapsto \left\lbrack {g{e}_{1} \land \cdots \land g{e}_{p}}\right\rbrack .
\]
This is clearly a regular surjective mapping, so the irreducibility of \( \mathbf{{GL}}\left( V\right) \) implies that \( {\operatorname{Grass}}_{p}\left( V\right) \) is also irreducible.
Take \( V = {\mathbb{C}}^{n} \) and let \( X \subset {M}_{n, p} \) be the open subset of \( n \times p \) matrices of maximal rank \( p \) . The \( p \) -dimensional subspaces of \( V \) then correspond to the column spaces of matrices \( x \in X \) . Since \( x, y \in X \) have the same column space if and only if \( x = {yg} \) for some \( g \in \mathbf{{GL}}\left( {p,\mathbb{C}}\right) \), we may view \( {\operatorname{Grass}}_{p}\left( V\right) \) as the space of orbits of \( \mathbf{{GL}}\left( {p,\mathbb{C}}\right) \) on \( X \) . That is, we introduce the equivalence relation \( x \sim y \) if \( x = {yg} \) ; then \( {\operatorname{Grass}}_{p}\left( V\right) \) is the set of equivalence classes.
For \( p = 1 \) this is the usual model of \( {\operatorname{Grass}}_{1}\left( {\mathbb{C}}^{n}\right) = {\mathbb{P}}^{n - 1} \) (see Section A.4.1). For any \( p \) it leads to a covering of \( {\operatorname{Grass}}_{p}\left( V\right) \) by affine coordinate patches, just as in the case of projective space, as follows: For \( J = \left( {{i}_{1},\ldots ,{i}_{p}}\right) \) with \( 1 \leq {i}_{1} < \cdots < {i}_{p} \leq n \) ,
let
\[
{\mathbf{\xi }}_{J}\left( x\right) = \det \left\lbrack \begin{matrix} {x}_{{i}_{1}1} & \cdots & {x}_{{i}_{1}p} \\ \vdots & \ddots & \vdots \\ {x}_{{i}_{p}1} & \cdots & {x}_{{i}_{p}p} \end{matrix}\right\rbrack
\]
be the minor determinant formed from rows \( {i}_{1},\ldots ,{i}_{p} \) of \( x \in {M}_{n, p} \) . Set
\[
{X}_{J} = \left\{ {x \in {M}_{n, p} : {\xi }_{J}\left( x\right) \neq 0}\right\} .
\]
As \( J \) ranges over all increasing \( p \) -tuples the sets \( {X}_{J} \) cover \( X \) . The homogeneous polynomials \( {\xi }_{J} \) are the so-called Plücker coordinates on \( X \) (the restriction to \( X \) of the homogeneous linear coordinates on \( \mathop{\bigwedge }\limits^{p}{\mathbb{C}}^{n} \) relative to the standard basis). Under right multiplication they transform by \( {\xi }_{J}\left( {xg}\right) = {\xi }_{J}\left( x\right) \det g \) for \( g \in \mathbf{{GL}}\left( {p,\mathbb{C}}\right) \) ; thus the ratios of the Plücker coordinates are rational functions on \( {\operatorname{Grass}}_{p}\left( V\right) \) .
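The transformation rule \( {\xi }_{J}\left( {xg}\right) = {\xi }_{J}\left( x\right) \det g \) is easy to verify numerically (NumPy assumed; an illustration only): the Plücker coordinates of an \( n \times p \) matrix are its maximal minors, and right multiplication by \( g \) scales them all by \( \det g \).

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 2
x = rng.standard_normal((n, p))       # a generic rank-p matrix
g = rng.standard_normal((p, p))       # generically invertible

def pluecker(x):
    """All maximal (p x p) minors of x, indexed by increasing row tuples J."""
    return {J: np.linalg.det(x[list(J), :])
            for J in itertools.combinations(range(x.shape[0]), x.shape[1])}

xi_x, xi_xg = pluecker(x), pluecker(x @ g)
det_g = np.linalg.det(g)
for J in xi_x:
    assert abs(xi_xg[J] - xi_x[J] * det_g) < 1e-10
```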
Every matrix in \( {X}_{J} \) is equivalent (under the right \( \mathbf{{GL}}\left( {p,\mathbb{C}}\right) \) action) to a matrix in the affine-linear subspace
\[
{A}_{J} = \left\{ {x \in {M}_{n, p} : {x}_{{i}_{r}s} = {\delta }_{rs}\text{ for }r, s = 1,\ldots, p}\right\} .
\]
Clearly, if \( x, y \in {A}_{J} \) and \( x \sim y \) then \( x = y \) . Furthermore, \( {\xi }_{J} = 1 \) on \( {A}_{J} \) and the \( p\left( {n - p}\right) \) matrix coordinates \( \left\{ {{x}_{rs} : r \notin J}\right\} \) are the restrictions to \( {A}_{J} \) of certain Plücker coordinates. For example, let \( J = \left( {1,2,\ldots, p}\right) \) . Then \( x \in {A}_{J} \) is of the form
\[
x = \left\lbrack \begin{matrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \\ {x}_{p + 1,1} & \cdots & {x}_{p + 1, p} \\ \vdots & \ddots & \vdots \\ {x}_{n1} & \cdots & {x}_{np} \end{matrix}\right\rbrack .
\]
Given \( 1 \leq s \leq p \) and \( p < r \leq n \), we set \( L = \left( {1,\ldots ,\widehat{s},\ldots, p, r}\right) \) (omit \( s \) ). Then \( {\xi }_{L}\left( x\right) = \pm {x}_{rs} \) for \( x \in {A}_{J} \), as we see by column interchanges. In particular,
\[
\dim {\operatorname{Grass}}_{p}\left( {\mathbb{C}}^{n}\right) = \left( {n - p}\right) p.
\]
Suppose that \( \omega \) is a bilinear form on \( V \) (either symmetric or skew-symmetric). Recall that a subspace \( W \subset V \) is isotropic relative to \( \omega \) i
|
Corollary 3.50. If \( K \) is a compact subset of a domain \( D \) and \( f \) is a nonconstant function that has a power series expansion at each point of \( D \), then \( f \) has finitely many zeros in \( K \) .
Definition 3.51. Let \( c \in \mathbb{C} \) . Assume that
\[
f\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{\left( z - c\right) }^{n}\text{ for all }\left| {z - c}\right| < \rho \text{ and for some }\rho > 0.
\]
If \( f \) is not identically zero, it follows from Theorem 3.45 that there exists an \( N \in \) \( {\mathbb{Z}}_{ \geq 0} \) such that
\[
{a}_{N} \neq 0\text{and}{a}_{n} = 0\text{for all}n\text{such that}0 \leq n < N\text{.}
\]
Thus
\[
f\left( z\right) = {\left( z - c\right) }^{N}\mathop{\sum }\limits_{{p = 0}}^{\infty }{a}_{N + p}{\left( z - c\right) }^{p} = {\left( z - c\right) }^{N}g\left( z\right) ,
\]
with \( g \) having a power series expansion at \( c \) and \( g\left( c\right) \neq 0 \) . We define
\[
N = {v}_{c}\left( f\right) = \text{ order }\left( \text{ of the zero }\right) \text{ of }f\text{ at }c.
\]
Note that \( N \geq 0 \), and \( N = 0 \) if and only if \( f\left( c\right) \neq 0 \) . If \( N = 1 \), then we say that \( f \) has a simple zero at \( c \) .
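When \( f \) is given by its Taylor coefficients at \( c \), Definition 3.51 is simply a search for the first nonvanishing coefficient. A small sketch (function and variable names are mine, not from the text):

```python
def order_of_zero(coeffs, tol=1e-12):
    """coeffs = [a_0, a_1, ...] of f(z) = sum a_n (z - c)^n, f not identically 0."""
    for n, a in enumerate(coeffs):
        if abs(a) > tol:
            return n
    raise ValueError("all given coefficients vanish")

# f(z) = z^2 * sin(z) at c = 0: sin z = z - z^3/6 + ..., so v_0(f) = 3.
sin_coeffs = [0, 1, 0, -1/6, 0, 1/120]
f_coeffs = [0, 0] + sin_coeffs        # multiplying by z^2 shifts the indices
assert order_of_zero(f_coeffs) == 3
assert order_of_zero([5.0]) == 0      # N = 0 exactly when f(c) != 0
```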
Definition 3.52. (a) Let \( f \) be defined in a deleted neighborhood of \( c \in \mathbb{C} \) (see the
Standard Terminology summary). We say that
\[
\mathop{\lim }\limits_{{z \rightarrow c}}f\left( z\right) = \infty
\]
if for all \( M > 0 \), there exists a \( \delta > 0 \) such that
\[
0 < \left| {z - c}\right| < \delta \Rightarrow \left| {f\left( z\right) }\right| > M.
\]
(b) Let \( \alpha \in \widehat{\mathbb{C}} \), and let \( f \) be defined in \( \left| z\right| > M \) for some \( M > 0 \) (equivalently, we say that \( f \) is defined in a deleted neighborhood of \( \infty \) in \( \widehat{\mathbb{C}} \) ). We say
\[
\mathop{\lim }\limits_{{z \rightarrow \infty }}f\left( z\right) = \alpha
\]
provided
\[
\mathop{\lim }\limits_{{z \rightarrow 0}}f\left( \frac{1}{z}\right) = \alpha
\]
(c) The above defines the concept of continuous maps between sets in the Riemann sphere \( \widehat{\mathbb{C}} \) .
(d) A function \( f \) defined in a neighborhood of \( \infty \) is holomorphic (has a power series expansion) at \( \infty \) if and only if \( g\left( z\right) = f\left( \frac{1}{z}\right) \) is holomorphic (has a power series expansion) at \( z = 0 \), where we define \( g\left( 0\right) = f\left( \infty \right) \) .
Definition 3.53. Let \( U \subset \mathbb{C} \) be a neighborhood of a point \( c \) . A function \( f \) that is holomorphic in \( {U}^{\prime } = U - \{ c\} \), a deleted neighborhood of the point \( c \), has a removable singularity at \( c \) if there is a holomorphic function in \( U \) that agrees with \( f \) on \( {U}^{\prime } \) . Otherwise \( c \) is called a singularity of \( f \) . Note that all singularities are isolated points.
Let us consider two functions \( f \) and \( g \) having power series expansions at each point of a domain \( D \) in \( \widehat{\mathbb{C}} \) . Assume that neither function vanishes identically on \( D \) and fix \( c \in D \cap \mathbb{C} \) . Let
\[
F\left( z\right) = \frac{f\left( z\right) }{{\left( z - c\right) }^{{v}_{c}\left( f\right) }}\text{ and }G\left( z\right) = \frac{g\left( z\right) }{{\left( z - c\right) }^{{v}_{c}\left( g\right) }}
\]
for \( z \in D \) . Then the functions \( F \) and \( G \) have removable singularities at \( c \), do not vanish there, and have power series expansions at each point of \( D \) . Furthermore, we define a new function \( h \) on \( D \) by
\[
h\left( z\right) = \frac{f}{g}\left( z\right) = \frac{{\left( z - c\right) }^{{v}_{c}\left( f\right) }F\left( z\right) }{{\left( z - c\right) }^{{v}_{c}\left( g\right) }G\left( z\right) }\text{ for all }z \in D
\]
and fixed \( c \in D \cap \mathbb{C} \) .
There are exactly three distinct possibilities for the behavior of the function \( h \) at \( z = c \), which lead to the following definitions.
Definition 3.54. (I) If \( {v}_{c}\left( g\right) > {v}_{c}\left( f\right) \), then \( h\left( c\right) = \infty \) (this defines \( h\left( c\right) \), and the resulting function \( h \) is continuous at \( c \) ). We say that \( h \) has a pole of order \( {v}_{c}\left( g\right) - {v}_{c}\left( f\right) \) at \( c \) . If \( {v}_{c}\left( g\right) - {v}_{c}\left( f\right) = 1 \), we say that the pole is simple.
(II) If \( {v}_{c}\left( g\right) = {v}_{c}\left( f\right) \), then the singularity of \( h \) at \( c \) is removable, and, by definition, \( h\left( c\right) = \frac{F\left( c\right) }{G\left( c\right) } \neq 0 \) .
(III) If \( {v}_{c}\left( g\right) < {v}_{c}\left( f\right) \), then the singularity is again removable and in this case \( h\left( c\right) = 0 \) .
In all cases we set \( {v}_{c}\left( h\right) = {v}_{c}\left( f\right) - {v}_{c}\left( g\right) \) and call it the order or multiplicity of \( h \) at \( c \) .
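The trichotomy of Definition 3.54 depends only on the two orders \( v_c(f) \) and \( v_c(g) \). A minimal Python sketch (the function name and return convention are ours) makes the case analysis explicit:

```python
def classify_quotient(v_f, v_g):
    """Behaviour of h = f/g at c from the orders v_f = v_c(f), v_g = v_c(g).
    Returns (v_c(h), description)."""
    v_h = v_f - v_g
    if v_h < 0:
        return v_h, "pole of order %d" % -v_h          # case (I): v_g > v_f
    if v_h == 0:
        return v_h, "removable, h(c) != 0"             # case (II)
    return v_h, "removable, zero of order %d" % v_h    # case (III): v_g < v_f

# v_c(f) = 1, v_c(g) = 3: pole of order 2 at c
assert classify_quotient(1, 3) == (-2, "pole of order 2")
```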
In cases (II) and (III) of the definition, \( h \) has a power series expansion at \( c \) as a consequence of the following result.
Theorem 3.55. If a function \( f \) has a power series expansion at \( c \) and \( f\left( c\right) \neq 0 \) , then \( \frac{1}{f} \) also has a power series expansion at \( c \) .
Proof. Without loss of generality we assume \( c = 0 \) and \( f\left( 0\right) = 1 \) . Thus
\[
f\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{z}^{n},{a}_{0} = 1,
\]
and the radius of convergence of the series is nonzero. We want to find the reciprocal power series, that is, a series \( g \) with positive radius of convergence, that we write as
\[
g\left( z\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{b}_{n}{z}^{n}
\]
and satisfies
\[
\left( {\sum {a}_{n}{z}^{n}}\right) \left( {\sum {b}_{n}{z}^{n}}\right) = 1
\]
Both sides are power series; on the RHS every coefficient is zero except the constant term, which equals 1. Equating the first two coefficients on both sides, we obtain
\[
{a}_{0}{b}_{0} = 1\text{, from where}{b}_{0} = 1\text{, and}
\]
\[
{a}_{1}{b}_{0} + {a}_{0}{b}_{1} = 0,\;\text{ from where }{b}_{1} = - {a}_{1}{b}_{0} = - {a}_{1}.
\]
Similarly, using the \( n \) -th coefficient of the power series when expanded for the LHS, for \( n \geq 1 \), we obtain
\[
{a}_{n}{b}_{0} + {a}_{n - 1}{b}_{1} + \cdots + {a}_{0}{b}_{n} = 0.
\]
Thus by induction we define
\[
{b}_{n} = - \mathop{\sum }\limits_{{j = 0}}^{{n - 1}}{b}_{j}{a}_{n - j}, n \geq 1.
\]
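The recursion for the \( b_n \) can be checked numerically. The Python sketch below (names are ours) computes the reciprocal coefficients for \( f(z) = 1/(1-z) \), where every \( a_n = 1 \) and the reciprocal is \( 1 - z \), then verifies the convolution identity and the growth bound \( |b_n| \leq 2^{n-1}k^n \) with \( k = 1 \):

```python
def reciprocal_series(a, N):
    """Coefficients b_0..b_N of 1/f, where f = sum a_n z^n with a[0] = 1,
    via the recursion b_n = -sum_{j=0}^{n-1} b_j a_{n-j}."""
    b = [1]
    for n in range(1, N + 1):
        b.append(-sum(b[j] * a[n - j] for j in range(n)))
    return b

a = [1] * 8                       # f(z) = 1/(1 - z)
b = reciprocal_series(a, 7)
assert b == [1, -1, 0, 0, 0, 0, 0, 0]   # reciprocal is 1 - z
# The product series has zero coefficients in every positive degree:
assert all(sum(a[j] * b[n - j] for j in range(n + 1)) == 0 for n in range(1, 8))
# The growth estimate |b_n| <= 2^(n-1) k^n holds with k = 1 here:
assert all(abs(b[n]) <= 2 ** (n - 1) for n in range(1, 8))
```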
Since \( \rho > 0 \), we have \( \frac{1}{\rho } < + \infty \) . Since \( \mathop{\limsup }\limits_{n}{\left| {a}_{n}\right| }^{\frac{1}{n}} = \frac{1}{\rho } \), there exists a positive number \( k \) such that \( \left| {a}_{n}\right| \leq {k}^{n} \) for all \( n \geq 1 \) .
We show by the use of induction, once again, that \( \left| {b}_{n}\right| \leq {2}^{n - 1}{k}^{n} \) for all \( n \geq 1 \) . For \( n = 1 \), we have \( {b}_{1} = - {a}_{1} \) and hence \( \left| {b}_{1}\right| = \left| {a}_{1}\right| \leq k \) . Suppose the inequality holds for \( 1 \leq j \leq n \) for some \( n \geq 1 \) . Then
\[
\left| {b}_{n + 1}\right| \leq \mathop{\sum }\limits_{{j = 0}}^{n}\left| {b}_{j}\right| \left| {a}_{n + 1 - j}\right| = \left| {a}_{n + 1}\right| + \mathop{\sum }\limits_{{j = 1}}^{n}\left| {b}_{j}\right| \left| {a}_{n + 1 - j}\right|
\]
\[
\leq {k}^{n + 1} + \mathop{\sum }\limits_{{j = 1}}^{n}{2}^{j - 1}{k}^{j}{k}^{n + 1 - j}
\]
\[
= {k}^{n + 1}\left( {1 + {2}^{n} - 1}\right) = {2}^{n}{k}^{n + 1},
\]
which completes the induction step.
Thus there is a reciprocal series, with radius of convergence \( \sigma \) satisfying
\[
\frac{1}{\sigma } = \mathop{\limsup }\limits_{n}{\left| {b}_{n}\right| }^{\frac{1}{n}} \leq \mathop{\lim }\limits_{n}\left( {2}^{1 - \frac{1}{n}}\right) k = {2k}
\]
and therefore nonzero.
Corollary 3.56. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \) and \( f \) a function defined on \( D \) . If \( f \) has a power series expansion at each point of \( D \) and \( f\left( z\right) \neq 0 \) for all \( z \in D \), then \( \frac{1}{f} \) has a power series expansion at each point of \( D \) .
Definition 3.57. For each domain \( D \subseteq \widehat{\mathbb{C}} \), we define
\( \mathbf{H}\left( D\right) = \{ f : D \rightarrow \mathbb{C};f \) has a power series expansion at each point of \( D\} . \)
We will see in Chap. 5 that \( \mathbf{H}\left( D\right) \) is the set of holomorphic functions on \( D \) .
Corollary 3.58. Assume that \( D \) is a domain in \( \widehat{\mathbb{C}} \) . The set \( \mathbf{H}\left( D\right) \) is an integral domain and an algebra over \( \mathbb{C} \) . Its units are the functions that never vanish on \( D \) .
Definition 3.59. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \) . A function \( f : D \rightarrow \widehat{\mathbb{C}} \) is meromorphic on \( D \) if it is locally \( {}^{9} \) the ratio of two functions having power series expansions (with the denominator not identically zero). The set of meromorphic functions on \( D \) is denoted by \( \mathbf{M}\left( D\right) \) .
Recall that, by our convention, \( \mathbf{M}{\left( D\right) }_{ \neq 0} \) is the set of meromorphic functions with the constant function 0 omitted, where \( 0\left( z\right) = 0 \) for all \( z \) in \( D \) .
\( {}^{9} \) A property \( P \) is satisfied locally on an open set \( D \) if for each point \( c \in D \), there exists a neighborhood \( U \subset D \) of \( c \) such that \( P \) is satisfied in \( U \) .
Corollary 3.60. Let \( D \) be a domain in \( \widehat{\mathbb{C}} \), let \( c \) be any point in \( D \cap \mathbb{C} \), and let \( f \in \mathbf{M}{\left( D\right) }_{ \neq 0} \) . There exist a connected neighborhood \( U \) of \( c \) in \( D \), an integer \( n = {v}_{c}f \), and a unit \( g \in \mathbf{H}\left( U\right) \) such that
\[
f\left( z\right) = {\left( z - c\right) }^{n}g\left( z\right) \text{ for all }z \in U.
\]
Remark 3.61. If \( \infty \in D
Theorem 8.4. Let \( A \) be a finitely generated torsion-free abelian group. Then \( A \) is free.
Proof. Assume \( A \neq 0 \) . Let \( S \) be a finite set of generators, and let \( {x}_{1},\ldots ,{x}_{n} \) be a maximal subset of \( S \) having the property that whenever \( {v}_{1},\ldots ,{v}_{n} \) are integers such that
\[
{v}_{1}{x}_{1} + \cdots + {v}_{n}{x}_{n} = 0
\]
then \( {v}_{j} = 0 \) for all \( j \) . (Note that \( n \geqq 1 \) since \( A \neq 0 \) ). Let \( B \) be the subgroup generated by \( {x}_{1},\ldots ,{x}_{n} \) . Then \( B \) is free. Given \( y \in S \) there exist integers \( {m}_{1},\ldots ,{m}_{n}, m \) not all zero such that
\[
{my} + {m}_{1}{x}_{1} + \cdots + {m}_{n}{x}_{n} = 0,
\]
by the assumption of maximality on \( {x}_{1},\ldots ,{x}_{n} \) . Furthermore, \( m \neq 0 \) ; otherwise all \( {m}_{j} = 0 \) . Hence \( {my} \) lies in \( B \) . This is true for every one of a finite set of generators \( y \) of \( A \), whence there exists an integer \( m \neq 0 \) such that \( {mA} \subset B \) . The map
\[
x \mapsto {mx}
\]
of \( A \) into itself is a homomorphism, having trivial kernel since \( A \) is torsion free. Hence it is an isomorphism of \( A \) onto a subgroup of \( B \) . By Theorem 7.3 of the preceding section, we conclude that \( {mA} \) is free, whence \( A \) is free.
Theorem 8.5. Let \( A \) be a finitely generated abelian group, and let \( {A}_{\text{tor }} \) be the subgroup consisting of all elements of \( A \) having finite period. Then \( {A}_{\text{tor }} \) is finite, and \( A/{A}_{\text{tor }} \) is free. There exists a free subgroup \( B \) of \( A \) such that \( A \) is the direct sum of \( {A}_{\text{tor }} \) and \( B \) .
Proof. We recall that a finitely generated torsion abelian group is obviously finite. Let \( A \) be finitely generated by \( n \) elements, and let \( F \) be the free abelian group on \( n \) generators. By the universal property, there exists a surjective homomorphism
\[
F\overset{\varphi }{ \rightarrow }A
\]
of \( F \) onto \( A \) . The subgroup \( {\varphi }^{-1}\left( {A}_{\text{tor }}\right) \) of \( F \) is finitely generated by Theorem 7.3. Hence \( {A}_{\text{tor }} \) itself is finitely generated, hence finite.
Next, we prove that \( A/{A}_{\text{tor }} \) has no torsion. Let \( \bar{x} \) be an element of \( A/{A}_{\text{tor }} \) such that \( m\bar{x} = 0 \) for some integer \( m \neq 0 \) . Then for any representative of \( x \) of \( \bar{x} \) in \( A \), we have \( {mx} \in {A}_{\text{tor }} \), whence \( {qmx} = 0 \) for some integer \( q \neq 0 \) . Then \( x \in {A}_{\text{tor }} \), so \( \bar{x} = 0 \), and \( A/{A}_{\text{tor }} \) is torsion free. By Theorem 8.4, \( A/{A}_{\text{tor }} \) is free. We now use the lemma of Theorem 7.3 to conclude the proof.
The rank of \( A/{A}_{\text{tor }} \) is also called the rank of \( A \) .
For other contexts concerning Theorem 8.5, see the structure theorem for modules over principal rings in Chapter III, \( §7 \), and Exercises 5,6, and 7 of Chapter III.
## §9. THE DUAL GROUP
Let \( A \) be an abelian group of exponent \( m \geqq 1 \) . This means that for each element \( x \in A \) we have \( {mx} = 0 \) . Let \( {Z}_{m} \) be a cyclic group of order \( m \) . We denote by \( {A}^{ \land } \), or \( \operatorname{Hom}\left( {A,{Z}_{m}}\right) \) the group of homomorphisms of \( A \) into \( {Z}_{m} \), and call it the dual of \( A \) .
Let \( f : A \rightarrow B \) be a homomorphism of abelian groups, and assume both have exponent \( m \) . Then \( f \) induces a homomorphism
\[
{f}^{ \land } : {B}^{ \land } \rightarrow {A}^{ \land }\text{.}
\]
Namely, for each \( \psi \in {B}^{ \land } \) we define \( {f}^{ \land }\left( \psi \right) = \psi \circ f \) . It is trivially verified that \( {f}^{ \land } \) is a homomorphism. The properties
\[
{\mathrm{{id}}}^{ \land } = \mathrm{{id}}\text{ and }{\left( f \circ g\right) }^{ \land } = {g}^{ \land } \circ {f}^{ \land }
\]
are trivially verified.
Theorem 9.1. If \( A \) is a finite abelian group, expressed as a product \( A = B \times C \), then \( {A}^{ \land } \) is isomorphic to \( {B}^{ \land } \times {C}^{ \land } \) (under the mapping described below). A finite abelian group is isomorphic to its own dual.
Proof. Consider the two projections
\[
{\pi }_{B} : B \times C \rightarrow B\text{ and }{\pi }_{C} : B \times C \rightarrow C
\]
of \( B \times C \) on its two components. We get homomorphisms
\[
{\pi }_{B}^{ \land } : {B}^{ \land } \rightarrow {\left( B \times C\right) }^{ \land }\text{ and }{\pi }_{C}^{ \land } : {C}^{ \land } \rightarrow {\left( B \times C\right) }^{ \land },
\]
and we contend that these homomorphisms induce an isomorphism of \( {B}^{ \land } \times {C}^{ \land } \) onto \( {\left( B \times C\right) }^{ \land } \) .
In fact, let \( {\psi }_{1},{\psi }_{2} \) be in \( \operatorname{Hom}\left( {B,{Z}_{m}}\right) \) and \( \operatorname{Hom}\left( {C,{Z}_{m}}\right) \) respectively. Then \( \left( {{\psi }_{1},{\psi }_{2}}\right) \in {B}^{ \land } \times {C}^{ \land } \), and we have a corresponding element of \( {\left( B \times C\right) }^{ \land } \) by defining
\[
\left( {{\psi }_{1},{\psi }_{2}}\right) \left( {x, y}\right) = {\psi }_{1}\left( x\right) + {\psi }_{2}\left( y\right)
\]
for \( \left( {x, y}\right) \in B \times C \) . In this way we get a homomorphism
\[
{B}^{ \land } \times {C}^{ \land } \rightarrow {\left( B \times C\right) }^{ \land }.
\]
Conversely, let \( \psi \in {\left( B \times C\right) }^{ \land } \) . Then
\[
\psi \left( {x, y}\right) = \psi \left( {x,0}\right) + \psi \left( {0, y}\right) .
\]
The function \( {\psi }_{1} \) on \( B \) such that \( {\psi }_{1}\left( x\right) = \psi \left( {x,0}\right) \) is in \( {B}^{ \land } \), and similarly the function \( {\psi }_{2} \) on \( C \) such that \( {\psi }_{2}\left( y\right) = \psi \left( {0, y}\right) \) is in \( {C}^{ \land } \) . Thus we get a homomorphism
\[
{\left( B \times C\right) }^{ \land } \rightarrow {B}^{ \land } \times {C}^{ \land },
\]
which is obviously inverse to the one we defined previously. Hence we obtain an isomorphism, thereby proving the first assertion in our theorem.
We can write any finite abelian group as a product of cyclic groups. Thus to prove the second assertion, it will suffice to deal with a cyclic group.
Let \( A \) be cyclic, generated by one element \( x \) of period \( n \) . Then \( n \mid m \), and \( {Z}_{m} \) has precisely one subgroup of order \( n \), namely \( {Z}_{n} \), which is cyclic (Proposition 4.3(iv)).
If \( \psi : A \rightarrow {Z}_{m} \) is a homomorphism, and \( x \) is a generator for \( A \), then the period of \( x \) is an exponent for \( \psi \left( x\right) \), so that \( \psi \left( x\right) \), and hence \( \psi \left( A\right) \), is contained in \( {Z}_{n} \) . Let \( y \) be a generator for \( {Z}_{n} \) . We have an isomorphism
\[
{\psi }_{1} : A \rightarrow {Z}_{n}
\]
such that \( {\psi }_{1}\left( x\right) = y \) . For each integer \( k \) with \( 0 \leqq k < n \) we have the homomorphism \( k{\psi }_{1} \) such that
\[
\left( {k{\psi }_{1}}\right) \left( x\right) = k \cdot {\psi }_{1}\left( x\right) = {\psi }_{1}\left( {kx}\right) .
\]
In this way we get a cyclic subgroup of \( {A}^{ \land } \) consisting of the \( n \) elements \( k{\psi }_{1} \) \( \left( {0 \leqq k < n}\right) \) . Conversely, any element \( \psi \) of \( {A}^{ \land } \) is uniquely determined by its effect on the generator \( x \), and must map \( x \) on one of the \( n \) elements \( {ky}\left( {0 \leqq k < n}\right) \) of \( {Z}_{n} \) . Hence \( \psi \) is equal to one of the maps \( k{\psi }_{1} \) . These maps constitute the full group \( {A}^{ \land } \), which is therefore cyclic of order \( n \), generated by \( {\psi }_{1} \) . This proves our theorem.
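The description of \( {A}^{ \land } \) for cyclic \( A \) can be verified by brute force. In the Python sketch below (our own encoding, not the book's), a homomorphism \( \mathbf{Z}/n \rightarrow \mathbf{Z}/m \) is recorded by the image \( t \) of a fixed generator, subject to \( nt \equiv 0 \pmod m \):

```python
def homs_to_Zm(n, m):
    """All homomorphisms Z/n -> Z/m, listed by the image t of a generator;
    the constraint is that n*t vanishes mod m. Assumes n divides m, as in
    the text (n is the period, m the exponent)."""
    assert m % n == 0
    return sorted(t for t in range(m) if (n * t) % m == 0)

# A = Z/4, m = 12: the dual has exactly n = 4 elements, the multiples of
# 12/4 = 3, i.e. the images lie in the unique subgroup of order 4 of Z/12.
assert homs_to_Zm(4, 12) == [0, 3, 6, 9]
```

The dual is cyclic of order \( n \), generated here by the homomorphism sending the generator to 3.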
In considering the dual group, we take various cyclic groups \( {Z}_{m} \) . There are many applications where such groups occur, for instance the group of \( m \) -th roots of unity in the complex numbers, the subgroup of order \( m \) of \( \mathbf{Q}/\mathbf{Z} \), etc.
Let \( A,{A}^{\prime } \) be two abelian groups. A bilinear map of \( A \times {A}^{\prime } \) into an abelian group \( C \) is a map
\[
A \times {A}^{\prime } \rightarrow C
\]
denoted by
\[
\left( {x,{x}^{\prime }}\right) \mapsto \left\langle {x,{x}^{\prime }}\right\rangle
\]
having the following property. For each \( x \in A \) the function \( {x}^{\prime } \mapsto \left\langle {x,{x}^{\prime }}\right\rangle \) is a homomorphism, and similarly for each \( {x}^{\prime } \in {A}^{\prime } \) the function \( x \mapsto \left\langle {x,{x}^{\prime }}\right\rangle \) is a homomorphism.
As a special case of a bilinear map, we have the one given by
\[
A \times \operatorname{Hom}\left( {A, C}\right) \rightarrow C
\]
which to each pair \( \left( {x, f}\right) \) with \( x \in A \) and \( f \in \operatorname{Hom}\left( {A, C}\right) \) associates the element \( f\left( x\right) \) in \( C \) .
A bilinear map is also called a pairing.
An element \( x \in A \) is said to be orthogonal (or perpendicular) to a subset \( {S}^{\prime } \) of \( {A}^{\prime } \) if \( \left\langle {x,{x}^{\prime }}\right\rangle = 0 \) for all \( {x}^{\prime } \in {S}^{\prime } \) . It is clear that the set of \( x \in A \) orthogonal to \( {S}^{\prime } \) is a subgroup of \( A \) . We make similar definitions for elements of \( {A}^{\prime } \), orthogonal to subsets of \( A \) .
The kernel of our bilinear map on the left is the subgroup of \( A \) which is orthogonal to all of \( {A}^{\prime } \) . We define its kernel on the right similarly.
Given a bilinear map \( A \times {A}^{\prime } \rightarrow C \), let \( B,{B}^{\prime } \) be the respective kernels of our bilinear map on the left and right. An element \( {x}^{\prime } \) of \( {A}^{\prime } \) gives rise to an element of \( \operatorname{Hom}\left( {A, C}\right) \) given by \( x \mapsto \left\langle {x,{x}^{\prime }}\right\rangle \), which we shall denote b
Theorem 2.9. Schönflies Theorem. Let \( e : {S}^{2} \rightarrow {S}^{3} \) be any piecewise linear embedding. Then \( {S}^{3} - e{S}^{2} \) has two components, the closure of each of which is a piecewise linear ball.
No proof will be given here for this fundamental, non-trivial result (for a proof see [81]). The piecewise linear condition has to be inserted, as there exist the famous "wild horned spheres" that are examples of topological embeddings \( e : {S}^{2} \rightarrow {S}^{3} \) for which the complementary components are not even simply connected.
The next result considers the different ways in which a knot might be expressed as the sum of other knots. It is the basic result needed to show that the expression of a knot as a sum of prime knots is essentially unique. The technique of its proof again consists of minimising the intersection of surfaces in \( {S}^{3} \) that meet transversely in simple closed curves, but the procedure here is more sophisticated than in the proof of Theorem 2.4. In the proof, use will be made of the idea of a ball-arc pair. Such a pair is just a 3-ball containing an arc which meets the ball's boundary at just its two end points. The pair is unknotted if it is pairwise homeomorphic to \( \left( {D \times I, \star \times I}\right) \) , where \( \star \) is a point in the interior of the disc \( D \) and \( I \) is a closed interval.
Theorem 2.10. Suppose that a knot \( K \) can be expressed as \( K = P + Q \), where \( P \) is a prime knot, and that \( K \) can also be expressed as \( K = {K}_{1} + {K}_{2} \) . Then either
(a) \( {K}_{1} = P + {K}_{1}^{\prime } \) for some \( {K}_{1}^{\prime } \), and \( Q = {K}_{1}^{\prime } + {K}_{2} \), or
(b) \( {K}_{2} = P + {K}_{2}^{\prime } \) for some \( {K}_{2}^{\prime } \), and \( Q = {K}_{1} + {K}_{2}^{\prime } \) .
Proof. Let \( \sum \) be a 2-sphere in \( {S}^{3} \), meeting \( K \) transversely at two points, that demonstrates \( K \) as the sum \( {K}_{1} + {K}_{2} \) . The factorisation \( K = P + Q \) implies that there is a 3-ball \( B \) contained in \( {S}^{3} \) such that \( B \cap K \) is an arc \( \alpha \) (with \( K \) intersecting \( \partial B \) transversely at the two points \( \partial \alpha \) ) so that the ball-arc pair \( \left( {B,\alpha }\right) \) becomes, on gluing a trivial ball-arc pair to its boundary, the pair \( \left( {{S}^{3}, P}\right) \) . As in the proof of Theorem 2.4, it may be assumed, after small movements of \( \sum \), that \( \sum \) intersects \( \partial B \) transversely in a union of simple closed curves disjoint from \( K \) . The immediate aim will be to reduce \( \sum \cap \partial B \) . Note that if this intersection is empty, then \( B \) is contained in one of the two components of \( {S}^{3} - \sum \), and the result follows at once.
As \( \sum \cap K \) is two points, any oriented simple closed curve in \( \sum - K \) has linking number zero or \( \pm 1 \) with \( K \) . Amongst the components of \( \sum \cap \partial B \) that have zero linking number with \( K \) select a component that is innermost on \( \sum \) (with \( \sum \cap K \) considered "outside"). This component bounds a disc \( D \subset \sum \), with \( D \cap \partial B = \partial D \) . Now \( \partial D \) bounds a disc \( {D}^{\prime } \subset \partial B \) with \( {D}^{\prime } \cap K = \varnothing \) (by linking numbers), though \( {D}^{\prime } \cap \sum \) may have many components (see Figure 2.5). By the Schönflies theorem, the sphere \( D \cup {D}^{\prime } \) bounds a ball. "Moving" \( {D}^{\prime } \) across this ball to just the other side of \( D \) changes \( B \) to a new position, with \( \sum \cap \partial B \) now having fewer components than before. As the new position of \( B \) differs from the old by the addition or subtraction of a ball disjoint from \( K \), the new \( \left( {B,\alpha }\right) \) pair corresponds to \( P \) exactly as before. After repetition of this procedure, it may be assumed that each component of \( \sum \cap \partial B \) has linking number \( \pm 1 \) with \( K \) . (Thus, on each of the spheres \( \sum \) and \( \partial B \) ,
the components of \( \sum \cap \partial B \) look like lines of latitude encircling, as the two poles, the two intersection points with \( \mathrm{K} \) .)
Figure 2.5
If now \( \sum \cap B \) has a component that is a disc \( D \), then \( D \cap K \) is one point, and as \( P \) is prime, one side of \( D \) in \( B \) is a trivial ball-arc pair (see Figure 2.5). Removing from \( B \) (a regular neighbourhood of) this trivial pair produces a new \( B \) with the same properties as before but having fewer components of \( \sum \cap B \) . Thus it may be assumed that every component of \( \sum \cap B \) is an annulus.
Let \( A \) be an annulus component of \( \sum \cap B \) . Then \( \partial A \) bounds an annulus \( {A}^{\prime } \) in \( \partial B \) and \( A \) may be chosen (furthest from \( \alpha \) ) so that \( {A}^{\prime } \cap \sum = \partial {A}^{\prime } \) . Let \( M \) be the part of \( B \) bounded by the torus \( A \cup {A}^{\prime } \) and otherwise disjoint from \( \sum \cup \partial B \) . Let \( \Delta \) be the closure of one of the components of \( \partial B - {A}^{\prime } \) . Then \( \Delta \) is a disc, with \( \partial \Delta \) one of the components of \( {A}^{\prime } \), and \( \Delta \cap K \) equal to a single point (though \( \Delta \cap \sum \) may have many components). This is illustrated schematically in Figure 2.6. Let \( N\left( \Delta \right) \) be a small regular neighbourhood of \( \Delta \) in the closure of \( B - M \) . This should be thought of as a thickening of \( \Delta \) into \( B - M \) . The pair \( \left( {N\left( \Delta \right), N\left( \Delta \right) \cap \alpha }\right) \) is a trivial ball-arc pair. However, \( M \cup N\left( \Delta \right) \) is a ball, because its boundary is a sphere, and the fact that \( P \) is prime implies that the ball-arc pair \( \left( {M \cup N\left( \Delta \right), N\left( \Delta \right) \cap \alpha }\right) \) is either trivial or a copy of the pair \( \left( {B,\alpha }\right) \) . If it is trivial (that is, when \( M \) is a solid torus), \( B \) may be changed, as before, by removing (a neighbourhood of) this pair to give a new \( B \) with fewer components of \( \sum \cap B \) . Otherwise, \( M \) is a copy of \( B \) less a neighbourhood of \( \alpha \), and that is just the exterior of the knot \( P;\partial \Delta \) corresponds to a meridian of \( P \) . The closure of one of the complementary domains of \( \sum \) in \( {S}^{3} \) ,
Figure 2.6
say that corresponding to \( {K}_{1} \), contains \( M \), and \( M \cap \sum = A \) . The meridian \( \partial \Delta \) bounds a disc in \( \sum - A \) that meets \( K \) at one point. This means that \( P \) is a summand of \( {K}_{1} \) as required, so \( {K}_{1} = P + {K}_{1}^{\prime } \) for some \( {K}_{1}^{\prime } \) .
In this last circumstance, remove the interior of \( M \) and replace it with a solid torus \( {S}^{1} \times {D}^{2} \) . Glue the boundary of the solid torus to \( \partial M \), and ensure that the boundary of any meridional disc of \( {S}^{1} \times {D}^{2} \) is identified with a curve on \( \partial M \) that cuts \( \partial \Delta \) at one point. Then \( \left( {{S}^{1} \times {D}^{2}}\right) \cup N\left( \Delta \right) \) is a ball, so \( B \) has been changed to become a new ball \( {B}^{\prime } \), and \( \left( {{B}^{\prime },\alpha }\right) \) is a trivial ball-arc pair. The closure of \( {S}^{3} - B \) is unchanged; it is still a ball, so \( {S}^{3} \) is changed to a new copy of \( {S}^{3} \) . In that new copy, the knot has become \( Q \) and, viewed as being decomposed by \( \sum \), it has become \( {K}_{1}^{\prime } + {K}_{2} \) . Thus \( Q = {K}_{1}^{\prime } + {K}_{2} \) .
Corollary 2.11. Suppose that \( P \) is a prime knot and that \( P + Q = {K}_{1} + {K}_{2} \) . Suppose also that \( P = {K}_{1} \) . Then \( Q = {K}_{2} \) .
Proof. By Theorem 2.10, there are two possibilities. The first is that for some \( {K}_{1}^{\prime }, P + {K}_{1}^{\prime } = {K}_{1} = P \) and \( Q = {K}_{1}^{\prime } + {K}_{2} \) . But then the genus of \( {K}_{1}^{\prime } \) must be zero, so \( {K}_{1}^{\prime } \) is the unknot and so \( Q = {K}_{2} \) . The second possibility is that for some \( {K}_{2}^{\prime }, P + {K}_{2}^{\prime } = {K}_{2} \) and \( Q = {K}_{2}^{\prime } + {K}_{1} \) . But then \( Q = {K}_{2}^{\prime } + P = {K}_{2} \) .
Theorem 2.12. Up to ordering of summands, there is a unique expression for a knot \( K \) as a finite sum of prime knots.
Proof. Suppose \( K = {P}_{1} + {P}_{2} + \cdots + {P}_{m} = {Q}_{1} + {Q}_{2} + \cdots + {Q}_{n} \), where the \( {P}_{i} \) and \( {Q}_{i} \) are all prime. By the theorem, \( {P}_{1} \) is a summand of \( {Q}_{1} \) or of \( {Q}_{2} + \) \( {Q}_{3} + \cdots + {Q}_{n} \), and if the latter, then it is a summand of one of the \( {Q}_{j} \) for \( j \geq 2 \) , by induction on \( n \) . Of course if \( {P}_{1} \) is a summand of \( {Q}_{j} \), then \( {P}_{1} = {Q}_{j} \) . By the corollary, \( {P}_{1} \) and \( {Q}_{j} \) may then be cancelled from both sides of the equation, and the result follows by induction on \( m \) . Note that this induction starts when \( m = 0 \) . Then \( n = 0 \) because the unknot cannot be expressed as a sum of non-trivial knots (again by consideration of genus).
The theorems of this chapter are intended to make it reasonable to restrict attention to prime knots in most circumstances. Certainly that is the tradition when considering knot tabulation.
## Exercises
1. Prove that a non-trivial torus knot is prime by considering the way in which a 2-sphere,
Lemma 1.7.3 Suppose that \( X \) and \( Y \) are graphs with minimum valency four. Then \( X \cong Y \) if and only if \( L\left( X\right) \cong L\left( Y\right) \) .
Proof. Let \( C \) be a clique in \( L\left( X\right) \) containing exactly \( c \) vertices. If \( c > 3 \) , then the vertices of \( C \) correspond to a set of \( c \) edges in \( X \), meeting at a common vertex. Consequently, there is a bijection between the vertices of \( X \) and the maximal cliques of \( L\left( X\right) \) that takes adjacent vertices to pairs of cliques with a vertex in common. The remaining details are left as an exercise.
There is another interesting characterization of line graphs:
Theorem 1.7.4 A graph \( X \) is a line graph if and only if each induced subgraph of \( X \) on at most six vertices is a line graph.
Consider the set of graphs \( X \) such that
(a) \( X \) is not a line graph, and
(b) every proper induced subgraph of \( X \) is a line graph.
The previous theorem implies that this set is finite, and in fact there are exactly nine graphs in this set. (The notes at the end of the chapter indicate where you can find the graphs themselves.)
We call a bipartite graph semiregular if it has a proper 2-colouring such that all vertices with the same colour have the same valency. The cheapest examples are the complete bipartite graphs \( {K}_{m, n} \) which consist of an independent set of \( m \) vertices completely joined to an independent set of \( n \) vertices.
Lemma 1.7.5 If the line graph of a connected graph \( X \) is regular, then \( X \) is regular or bipartite and semiregular.
Proof. Suppose that \( L\left( X\right) \) is regular with valency \( k \) . If \( u \) and \( v \) are adjacent vertices in \( X \), then their valencies sum to \( k + 2 \) . Consequently, all neighbours of a vertex \( u \) have the same valency, and so if two vertices of \( X \) share a common neighbour, then they have the same valency. Since \( X \) is connected, this implies that there are at most two different valencies.
If two adjacent vertices have the same valency, then an easy induction argument shows that \( X \) is regular. If \( X \) contains a cycle of odd length, then it must have two adjacent vertices of the same valency, and so if it is not regular, then it has no cycles of odd length. We leave it as an exercise to show that a graph is bipartite if and only if it contains no cycles of odd length.
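Lemma 1.7.5 can be illustrated in the converse direction: \( K_{2,3} \) is bipartite and semiregular (valencies 3 and 2), and its line graph is regular of valency \( (3-1)+(2-1)=3 \). A small Python sketch (the encoding is ours) builds the line graph directly from the adjacency rule, i.e. edges are adjacent when they share an endpoint:

```python
def line_graph(edges):
    """Adjacency dict of L(X): vertices are the edges of X, two edges
    adjacent exactly when they share an endpoint."""
    edges = [frozenset(e) for e in edges]
    return {e: {f for f in edges if f != e and e & f} for e in edges}

# K_{2,3}: every a-vertex joined to every b-vertex
K23 = [(a, b) for a in ["a1", "a2"] for b in ["b1", "b2", "b3"]]
L = line_graph(K23)
assert all(len(nbrs) == 3 for nbrs in L.values())  # L(K_{2,3}) is 3-regular
```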
## 1.8 Planar Graphs
We have already seen that graphs can conveniently be given by drawings where each vertex is represented by a point and each edge \( {uv} \) by a line connecting \( u \) and \( v \) . A graph is called planar if it can be drawn without crossing edges.
Although this definition is intuitively clear, it is topologically imprecise. To make it precise, consider a function that maps each vertex of a graph \( X \) to a distinct point of the plane, and each edge of \( X \) to a continuous non-self-intersecting curve in the plane joining its endpoints. Such a function is called a planar embedding if the curves corresponding to nonincident edges do not meet, and the curves corresponding to incident edges meet only at the point representing their common vertex. A graph is planar if and only if it has a planar embedding. Figure 1.10 shows two planar graphs: the complete graph \( {K}_{4} \) and the octahedron.

Figure 1.10. Planar graphs \( {K}_{4} \) and the octahedron
A plane graph is a planar graph together with a fixed embedding. The edges of the graph divide the plane into regions called the faces of the plane graph. All but one of these regions are bounded; the unbounded region is called the infinite or external face. The length of a face is the number of edges bounding it.
Euler's famous formula gives the relationship between the number of vertices, edges, and faces of a connected plane graph.
Theorem 1.8.1 (Euler) If a connected plane graph has \( n \) vertices, e edges and \( f \) faces, then
\[
n - e + f = 2\text{.}
\]
A maximal planar graph is a planar graph \( X \) such that the graph formed by adding an edge between any two nonadjacent vertices of \( X \) is not planar. If an embedding of a planar graph has a face of length greater than three, then an edge can be added between two nonadjacent vertices of that face. Therefore, in any embedding of a maximal planar graph, every face is a triangle. Since each edge lies in exactly two faces, we have
\[
{2e} = {3f}
\]
and so by Euler's formula,
\[
e = {3n} - 6.
\]
A planar graph on \( n \) vertices with \( {3n} - 6 \) edges is necessarily maximal; such graphs are called planar triangulations. Both the graphs of Figure 1.10 are planar triangulations.
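These counts are easy to verify on the two triangulations of Figure 1.10. The following Python sketch (the edge lists and function name are ours) checks \( e = {3n} - 6 \) and, via Euler's formula, \( {2e} = {3f} \):

```python
from itertools import combinations

def check_triangulation(n, edges):
    e = len(edges)
    f = 2 - n + e  # Euler's formula n - e + f = 2 for a connected plane graph
    return e == 3 * n - 6 and 2 * e == 3 * f

# K_4: all pairs of 4 vertices adjacent, so e = 6 and f = 4
k4 = list(combinations(range(4), 2))
print(check_triangulation(4, k4))        # True

# Octahedron: 6 vertices, each nonadjacent only to its "antipode" i + 3
oct_edges = [(i, j) for i, j in combinations(range(6), 2) if j - i != 3]
print(check_triangulation(6, oct_edges)) # True: e = 12, f = 8
```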
A planar graph can be embedded into the plane in infinitely many ways. The two embeddings of Figure 1.11 are easily seen to be combinatorially different: the first has faces of lengths \( 3,3,4 \), and 6, while the second has faces of lengths \( 3,3,5 \), and 5. It is an important result of topological graph
theory that a 3-connected graph has essentially a unique embedding. (See Section 3.4 for the explanation of what a 3-connected graph is.)

Figure 1.11. Two plane graphs
Given a plane graph \( X \), we can form another plane graph called the dual graph \( {X}^{ * } \) . The vertices of \( {X}^{ * } \) correspond to the faces of \( X \), with each vertex being placed in the corresponding face. Every edge \( e \) of \( X \) gives rise to an edge of \( {X}^{ * } \) joining the two faces of \( X \) that contain \( e \) (see Figure 1.12).
Notice that two faces of \( X \) may share more than one common edge, in which case the graph \( {X}^{ * } \) may contain multiple edges, meaning that two vertices are joined by more than one edge. This requires the obvious generalization to our definition of a graph, but otherwise causes no difficulties. Once again, explicit warning will be given when it is necessary to consider graphs with multiple edges.
Since each face in a planar triangulation is a triangle, its dual is a cubic graph. Considering the graphs of Figure 1.10, it is easy to check that \( {K}_{4} \) is isomorphic to its dual; such graphs are called self-dual. The dual of the octahedron is a bipartite cubic graph on eight vertices known as the cube, which we will discuss further in Section 3.1.

Figure 1.12. The planar dual
As defined above, the planar dual of any graph \( X \) is connected, so if \( X \) is not connected, then \( {\left( {X}^{ * }\right) }^{ * } \) is not isomorphic to \( X \) . However, this is the only difficulty, and it can be shown that if \( X \) is connected, then \( {\left( {X}^{ * }\right) }^{ * } \) is isomorphic to \( X \) .
The notion of embedding a graph in the plane can be generalized directly to embedding a graph in any surface. The dual of a graph embedded in any surface is defined analogously to the planar dual.
The real projective plane is a nonorientable surface, which can be represented on paper by a circle with diametrically opposed points identified. The complete graph \( {K}_{6} \) is not planar, but it can be embedded in the projective plane, as shown in Figure 1.13. This embedding of \( {K}_{6} \) is a triangulation in the projective plane, so its dual is a cubic graph, which turns out to be the Petersen graph.

Figure 1.13. An embedding of \( {K}_{6} \) in the projective plane
The torus is an orientable surface, which can be represented physically in Euclidean 3-space by the surface of a torus, or doughnut. It can be represented on paper by a rectangle where the points on the bottom side are identified with the points directly above them on the top side, and the points of the left side are identified with the points directly to the right of them on the right side. The complete graph \( {K}_{7} \) is not planar, nor can it be embedded in the projective plane, but it can be embedded in the torus as shown in Figure 1.14 (note that due to the identification the four "corners" are actually the same point). This is another triangulation; its dual is a cubic graph known as the Heawood graph, which is discussed in Section 5.10.

Figure 1.14. An embedding of \( {K}_{7} \) in the torus
## Exercises
1. Let \( X \) be a graph with \( n \) vertices. Show that \( X \) is complete or empty if and only if every transposition of \( \{ 1,\ldots, n\} \) belongs to \( \operatorname{Aut}\left( X\right) \) .
2. Show that \( X \) and \( \bar{X} \) have the same automorphism group, for any graph \( X \) .
3. Show that if \( x \) and \( y \) are vertices in the graph \( X \) and \( g \in \operatorname{Aut}\left( X\right) \) , then the distance between \( x \) and \( y \) in \( X \) is equal to the distance between \( {x}^{g} \) and \( {y}^{g} \) in \( X \) .
4. Show that if \( f \) is a homomorphism from the graph \( X \) to the graph \( Y \) and \( {x}_{1} \) and \( {x}_{2} \) are vertices in \( X \), then
\[
{d}_{X}\left( {{x}_{1},{x}_{2}}\right) \geq {d}_{Y}\left( {f\left( {x}_{1}\right), f\left( {x}_{2}\right) }\right)
\]
5. Show that if \( Y \) is a subgraph of \( X \) and \( f \) is a homomorphism from \( X \) to \( Y \) such that \( f \upharpoonright Y \) is a bijection, then \( Y \) is a retract.
6. Show that a retract \( Y \) of \( X \) is an induced subgraph of \( X \) . Then show that it is isometric, that is, if \( x \) and \( y \) are vertices of \( Y \), then \( {d}_{X}\left( {x, y}\right) = {d}_{Y}\left( {x, y}\right) \) .
7. Show that any edge in
Lemma 1.7.3 Suppose that \( X \) and \( Y \) are graphs with minimum valency four. Then \( X \cong Y \) if and only if \( L\left( X\right) \cong L\left( Y\right) \) .
Proof. Let \( C \) be a clique in \( L\left( X\right) \) containing exactly \( c \) vertices. If \( c > 3 \), then the vertices of \( C \) correspond to a set of \( c \) edges in \( X \) meeting at a common vertex. Consequently, there is a bijection between the vertices of \( X \) and the maximal cliques of \( L\left( X\right) \) that takes adjacent vertices to pairs of cliques with a vertex in common. The remaining details are left as an exercise.
Corollary 1.26. An intersection of closed cells is a closed cell.
We turn, finally, to the geometric meaning of the face relation. If one visualizes a cell \( A \) in dimension 2 or 3, one sees easily what its faces are, without knowing the particular system of equalities and inequalities by which \( A \) was defined. Roughly speaking, the faces are the flat pieces into which the boundary of \( A \) decomposes. The following proposition states this precisely:
Proposition 1.27. Let \( A \) be a cell. Then two distinct points \( y, z \in \bar{A} \) lie in the same face of \( A \) if and only if there is an open line segment containing both \( y \) and \( z \) and lying entirely in \( \bar{A} \) . Consequently, the partition of \( \bar{A} \) into faces depends only on \( A \) as a subset of \( V \), and not on the arrangement \( \mathcal{H} \) .
Proof. Suppose \( y \) and \( z \) are in the same face \( B \leq A \) . For each condition \( {f}_{i} = {\sigma }_{i} \) in the description of \( B \), we can extend the segment \( \left\lbrack {y, z}\right\rbrack \) slightly in both directions without violating the condition. Since there are only finitely many such conditions, it follows that \( B \) contains an open segment containing \( y \) and \( z \) ; hence so does \( \bar{A} \) .
Suppose now that \( y \) and \( z \) are in different faces of \( A \) . Then there is some \( i \) such that \( y \) and \( z \) behave differently with respect to \( {f}_{i} \), say \( {f}_{i}\left( y\right) > 0 \) and \( {f}_{i}\left( z\right) = 0 \) . If we now continue the segment \( \left\lbrack {y, z}\right\rbrack \) past \( z \), we immediately have \( {f}_{i} < 0 \), so we leave \( \bar{A} \) ; hence there is no open segment in \( \bar{A} \) containing both \( y \) and \( z \) .
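Cells and their faces can be explored concretely through sign sequences, which appear below in the discussion of panels. A minimal Python sketch, for a hypothetical arrangement of three linear forms in \( {\mathbb{R}}^{2} \) (our own toy example): two points lie in the same cell exactly when they have the same sign sequence, and passing to a face replaces some nonzero signs by 0.

```python
def sign_seq(fs, p):
    """Sign sequence of the point p with respect to the linear forms fs."""
    sgn = lambda v: (v > 0) - (v < 0)
    return tuple(sgn(f(p)) for f in fs)

# toy arrangement in R^2: f1 = x, f2 = y, f3 = x - y
fs = [lambda p: p[0], lambda p: p[1], lambda p: p[0] - p[1]]

print(sign_seq(fs, (2, 1)))  # a chamber:               (1, 1, 1)
print(sign_seq(fs, (1, 1)))  # a panel, support x = y:  (1, 1, 0)
print(sign_seq(fs, (0, 0)))  # the smallest cell V_0:   (0, 0, 0)
```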
The significance of this for us is that if we want to understand the polyhedral structure of a particular cell \( A \), then we can replace \( \mathcal{H} \) by any other hyperplane arrangement for which \( A \) is still a cell. We record this for future reference:
Corollary 1.28. Let \( A \) be a cell with respect to \( \mathcal{H} \) . If \( A \) is also a cell with respect to an arrangement \( {\mathcal{H}}^{\prime } \), then the faces of \( A \) defined using \( {\mathcal{H}}^{\prime } \) are the same as those defined using \( \mathcal{H} \) .
In practice, we will want to take a minimal set of hyperplanes for a given \( A \) . In the next subsection we spell out exactly how to do this in case \( A \) is a chamber.
Exercise 1.29. Given \( A \in \Sigma \), show that \( \mathop{\bigcup }\limits_{{B \geq A}}B \) is a convex open subset of \( V \) . [Suggestion: First draw a picture to see why this is plausible.]
## 1.4.3 Panels and Walls
Definition 1.30. A cell \( A \) with exactly one 0 in its sign sequence is called a panel. This is equivalent to saying that \( \operatorname{supp}A \) is a hyperplane, which is then necessarily in \( \mathcal{H} \) . If the panel \( A \) is a face of a chamber \( C \), then we will also say that \( A \) is a panel of \( C \) and that its support hyperplane \( H \) is a wall of \( C \) .
In low-dimensional examples like the one in Figure 1.4, one sees easily that every chamber is defined by the inequalities corresponding to its walls; the other inequalities are redundant. We will show that this is always the case. Fix a chamber \( C \) . We say that \( C \) is defined by a subset \( {\mathcal{H}}^{\prime } \subseteq \mathcal{H} \) if \( C \) is defined by the conditions \( {f}_{i} = {\sigma }_{i} \), where \( i \) ranges over the indices such that \( {H}_{i} \in {\mathcal{H}}^{\prime } \) .
Lemma 1.31. If \( H \in \mathcal{H} \) is not a wall of \( C \), then \( C \) is defined by \( {\mathcal{H}}^{\prime } \mathrel{\text{:=}} \) \( \mathcal{H} \smallsetminus \{ H\} \) .
Proof. Assume, to simplify the notation, that \( C \) is defined by the inequalities \( {f}_{i} > 0 \) for all \( i \), and let \( j \) be the index such that \( H = {H}_{j} \) . Suppose \( C \) is not defined by \( {\mathcal{H}}^{\prime } \) . Then removing the inequality \( {f}_{j} > 0 \) results in a set \( {C}^{\prime } \) strictly bigger than \( C \) . Choose \( y \in {C}^{\prime } \smallsetminus C \) and \( x \in C \) . Since \( {f}_{j}\left( x\right) > 0 \) and \( {f}_{j}\left( y\right) \leq 0 \) , there is a point \( z \in (x, y\rbrack \) such that \( {f}_{j}\left( z\right) = 0 \) . This point \( z \) is then in a panel \( A \) of \( C \) supported by \( H \), so \( H \) is a wall of \( C \) .
Proposition 1.32. Let \( C \) be a chamber and let \( {\mathcal{H}}_{C} \) be its set of walls. Then \( C \) is defined by \( {\mathcal{H}}_{C} \), and \( {\mathcal{H}}_{C} \) is the smallest subset of \( \mathcal{H} \) with this property.
Proof. If \( C \) is defined by \( {\mathcal{H}}^{\prime } \subseteq \mathcal{H} \), then we can use \( {\mathcal{H}}^{\prime } \) to determine the walls of \( C \) by Corollary 1.28; hence \( {\mathcal{H}}^{\prime } \supseteq {\mathcal{H}}_{C} \) . It remains to show that \( C \) is defined by \( {\mathcal{H}}_{C} \) . If \( \mathcal{H} \) contains any \( H \) that is not a wall of \( C \), then we can remove it by Lemma 1.31 to get a smaller defining set \( {\mathcal{H}}^{\prime } \) . Now \( C \) is still a chamber with respect to \( {\mathcal{H}}^{\prime } \), and replacing \( \mathcal{H} \) by \( {\mathcal{H}}^{\prime } \) does not change the walls. So we may repeat the process to remove another nonwall, and so on. Since \( \mathcal{H} \) is finite, we arrive at \( {\mathcal{H}}_{C} \) after finitely many steps.
The proof we just gave made crucial use of the fact that the notion of "wall" does not depend on the particular defining set of hyperplanes. Here is a simple intrinsic characterization of the walls:
Proposition 1.33. Let \( C \) be a chamber and let \( H \) be a linear hyperplane in \( V \) . Then \( H \) is a wall of \( C \) if and only if \( C \) lies on one side of \( H \) and \( \bar{C} \cap H \) has nonempty interior in \( H \) .
Proof. If \( H \) is the support of a panel \( A \) of \( C \), then certainly \( C \) lies on one side of \( H \) and \( \bar{C} \cap H \) contains \( A \), which is a nonempty open subset of \( H \) . Conversely, suppose \( H \) is a hyperplane such that \( C \) lies on one side of \( H \) and \( \bar{C} \cap H \) has nonempty interior in \( H \) . Then \( C \) is still a chamber with respect to \( {\mathcal{H}}^{ + } \mathrel{\text{:=}} \mathcal{H} \cup \{ H\} \), so we can use \( {\mathcal{H}}^{ + } \) to determine the faces of \( C \) . By Proposition 1.25, \( \bar{C} \cap H \) is a closed cell \( \bar{A} \) with respect to \( {\mathcal{H}}^{ + } \), and the corresponding open cell \( A \) is a face of \( C \) because \( \bar{A} \subseteq \bar{C} \) . Since \( \bar{A} \) is contained in \( H \) and has nonempty interior in \( H \), the support of \( A \) must be \( H \) . Thus \( A \) is a panel of \( C \) and its support \( H \) is therefore a wall of \( C \) .
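Proposition 1.33 suggests a crude computational test for walls: \( H \) is a wall of the chamber defined by \( {f}_{i} > 0 \) precisely when some point of \( H \) satisfies the remaining inequalities strictly. A brute-force Python sketch (grid search over a toy arrangement of our own choosing): for the chamber \( x > 0, y > 0, x + y > 0 \), the hyperplane \( x + y = 0 \) is a redundant nonwall, as Lemma 1.31 predicts.

```python
from itertools import product

def is_wall(fs, j):
    """Crude test: H_j is a wall of the chamber {all f_i > 0} iff some
    point with f_j = 0 satisfies every other inequality strictly."""
    grid = [k / 10 for k in range(-50, 51)]
    for p in product(grid, repeat=2):
        if abs(fs[j](p)) < 1e-9 and all(
            fs[i](p) > 1e-9 for i in range(len(fs)) if i != j
        ):
            return True
    return False

# chamber x > 0, y > 0, x + y > 0 (the last inequality is redundant)
fs = [lambda p: p[0], lambda p: p[1], lambda p: p[0] + p[1]]
print([is_wall(fs, j) for j in range(3)])  # [True, True, False]
```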
## Exercises
1.34. This exercise outlines a more direct proof that any chamber is defined by its walls; fill in the missing details.
Let \( C \) and \( {\mathcal{H}}_{C} \) be as in Proposition 1.32, and let \( {C}^{\prime } \) be the \( {\mathcal{H}}_{C} \) -chamber containing \( C \) . Suppose \( {C}^{\prime } \neq C \) . Choose \( y \in {C}^{\prime } \smallsetminus C \) and \( x \in C \), and consider the line segment \( \left\lbrack {x, y}\right\rbrack \) . By moving \( y \) slightly if necessary, we may assume \( y \notin \bar{C} \), so that the segment \( \left\lbrack {x, y}\right\rbrack \) crosses at least one \( H \in \mathcal{H} \) . And by moving \( x \) slightly if necessary, we may assume that the segment never crosses more than one \( H \) at a time. The first \( H \) that is crossed as we traverse the segment starting at \( x \) is then a wall of \( C \), contradicting the fact that \( x \) and \( y \) lie in the same \( {\mathcal{H}}_{C} \) -chamber \( {C}^{\prime } \) .
1.35. Assume that \( \mathcal{H} \) is essential, as defined in Remark 1.22. Show that every closed cell \( \bar{A} \) is the closed convex cone generated by the 1-dimensional faces of \( A \), i.e., every \( x \in \bar{A} \) can be expressed as \( x = \mathop{\sum }\limits_{{k = 1}}^{m}{y}_{k} \), where each \( {y}_{k} \) is in a 1-dimensional face of \( A \) . [Note: These 1-dimensional faces are rays. They therefore correspond to vertices if we think of cells in terms of their intersections with a sphere as in Remark 1.22.]
## 1.4.4 Simplicial Cones
Let \( C \) be a fixed but arbitrary chamber and let \( {\mathcal{H}}^{\prime } \) be its set of walls. It will be convenient to take the index set \( I \) for \( \mathcal{H} \) to be \( \{ 1,2,\ldots, m\} \) for some \( m \) . For simplicity of notation we will assume that the elements of \( {\mathcal{H}}^{\prime } \) are the hyperplanes \( {f}_{i} = 0 \) for \( 1 \leq i \leq r \) and that \( {f}_{i} > 0 \) on \( C \) for \( 1 \leq i \leq m \) .
Let \( {V}_{0} \mathrel{\text{:=}} \mathop{\bigcap }\limits_{{i = 1}}^{m}{H}_{i} \) . We call \( \mathcal{H} \) essential if \( {V}_{0} = 0 \) . There is no loss of generality in restricting attention to the essential case. For if we set \( {V}_{1} \mathrel{\text{:=}} V/{V}_{0} \) , then the linear functions \( {f}_{i} \) pass to the quotient \( {V}_{1} \) and define an essential set of hyperplanes there. And the cells determined by these hyperplanes in \( {V}_{1} \) are in 1-1 correspondence with the cells in \( V \) . More precisely, the cells in \( V \) are the inverse images in \( V \) of the cells in \( {V}_{1} \) . [Geometrically, then, the cells in \( V \) are simply the cells in \( {V}_{1} \) "fattened up" by a factor \( {\mathbb{R}}^{d} \), where \( d \mathrel{\text{:=}} \dim {V}_{0} \) .]
Note that \( {V}_{0} \) is itself a cell, with sign sequence \( \left( {0,0,\ldots ,0}\right) \) . It is the smallest cell, in the sense that it is a face of every cell, so \( \mathcal{H} \) is essential precisely when this smallest cell is the trivial subspace \( \{ 0\} \) .
Example 2.14. The LAPLACE transform of a function \( f \) is defined to be another function \( \widetilde{f} \), given by
\[
\widetilde{f}\left( s\right) = {\int }_{0}^{\infty }f\left( t\right) {e}^{-{st}}{dt}
\]
for all \( s \) such that the integral is convergent (see Chapter 3). The Laplace transform of \( \delta \) cannot be defined in this way. We can, however, modify the definition so as to include the origin. It is indeed customary to write
\[
\widetilde{f}\left( s\right) = {\int }_{0 - }^{\infty }f\left( t\right) {e}^{-{st}}{dt} = \mathop{\lim }\limits_{{k \nearrow 0}}{\int }_{k}^{\infty }f\left( t\right) {e}^{-{st}}{dt}.
\]
With this definition one finds that \( \widetilde{\delta }\left( s\right) = 1 \) for all \( s \) . Similarly, \( {\widetilde{\delta }}_{a}\left( s\right) = {e}^{-{as}} \) , if \( a > 0 \) .
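The value \( {\widetilde{\delta }}_{a}\left( s\right) = {e}^{-{as}} \) can be made plausible numerically by replacing \( {\delta }_{a} \) with a narrow Gaussian kernel, as in Example 2.8, and integrating. A Python sketch (the kernel width, step count, and function name are our own choices):

```python
import math

def laplace_of_kernel(s, a, n, steps=50_000, half_width=0.5):
    """Riemann sum of e^{-s t} K_n(t - a), with K_n the rescaled Gaussian
    kernel n / sqrt(2 pi) * exp(-n^2 t^2 / 2) of Example 2.8."""
    lo = a - half_width
    dt = 2 * half_width / steps
    total = 0.0
    for k in range(steps):
        t = lo + (k + 0.5) * dt
        kern = n / math.sqrt(2 * math.pi) * math.exp(-(n * (t - a)) ** 2 / 2)
        total += math.exp(-s * t) * kern * dt
    return total

s, a = 1.0, 2.0
approx = laplace_of_kernel(s, a, n=400)
print(approx, math.exp(-s * a))  # the two values agree to several decimals
```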
The HEAVISIDE function, or unit step function, \( H \) is defined by
\[
H\left( t\right) = \left\{ \begin{array}{l} 0\text{ for }t < 0 \\ 1\text{ for }t > 0 \end{array}\right.
\]
The value of \( H\left( 0\right) \) is mostly left undefined, because it is normally of no importance. The Heaviside function is useful in many contexts. One of these is when we are dealing with functions that are given by different formulae in different intervals.
If \( a < b \), the expression \( H\left( {t - a}\right) - H\left( {t - b}\right) \) is equal to 1 for \( a < t < b \) and equal to 0 outside the interval \( \left\lbrack {a, b}\right\rbrack \) . It might be called a "window" that lights up the interval \( \left( {a, b}\right) \) (we do not in these situations care much about whether an interval is open or closed). For unbounded intervals we can also find "windows": the function \( H\left( {t - a}\right) \) lights up the interval \( \left( {a,\infty }\right) \), and the expression \( 1 - H\left( {t - b}\right) \) the interval \( \left( {-\infty, b}\right) \) .
Example 2.15. Consider the function \( f : \mathbf{R} \rightarrow \mathbf{R} \) that is given by
\[
f\left( t\right) = \left\{ \begin{array}{l} 1 - {t}^{2}\text{ for }t < - 2 \\ t + 2\text{ for } - 2 < t < 1 \\ 1 - t\text{ for }t > 1 \end{array}\right.
\]
This can now be compressed into one formula:
\[
f\left( t\right) = \left( {1 - {t}^{2}}\right) \left( {1 - H\left( {t + 2}\right) }\right) + \left( {t + 2}\right) \left( {H\left( {t + 2}\right) - H\left( {t - 1}\right) }\right) + \left( {1 - t}\right) H\left( {t - 1}\right)
\]
\[
= \left( {1 - {t}^{2}}\right) + \left( {-1 + {t}^{2} + t + 2}\right) H\left( {t + 2}\right) + \left( {-t - 2 + 1 - t}\right) H\left( {t - 1}\right)
\]
\[
= 1 - {t}^{2} + \left( {{t}^{2} + t + 1}\right) H\left( {t + 2}\right) - \left( {{2t} + 1}\right) H\left( {t - 1}\right) .
\]
Heaviside’s function is connected with the \( \delta \) function via the formula
\[
H\left( t\right) = {\int }_{-\infty }^{t}\delta \left( u\right) {du}
\]
A very bold differentiation of this formula would give the result
\[
{H}^{\prime }\left( t\right) = \delta \left( t\right)
\]
(2.5)
Since \( H \) is constant on the intervals \( \rbrack - \infty ,0\lbrack \) and \( \rbrack 0,\infty \lbrack \), and \( \delta \left( t\right) \) is considered to be zero on these intervals, the formula (2.5) is reasonable for \( t \neq 0 \) . What is new is that the "derivative" of the jump discontinuity of \( H \) should be considered to be the "pulse" of \( \delta \) . In fact, this assertion can be given a completely coherent background; this will be done in Chapter 8.
If \( \varphi \) is a function in the class \( {C}^{1} \), i.e., it has a continuous derivative, and if in addition \( \varphi \) is zero outside some finite interval, the following calculation is clear:
\[
{\int }_{-\infty }^{\infty }{\varphi }^{\prime }\left( t\right) H\left( t\right) {dt} = {\int }_{0}^{\infty }{\varphi }^{\prime }\left( t\right) {dt} = {\left\lbrack \varphi \left( t\right) \right\rbrack }_{t = 0}^{\infty } = 0 - \varphi \left( 0\right) = - \varphi \left( 0\right) .
\]
The same result can also be obtained by the following formal integration by parts:
\[
{\int }_{-\infty }^{\infty }{\varphi }^{\prime }\left( t\right) H\left( t\right) {dt} = {\left\lbrack \varphi \left( t\right) H\left( t\right) \right\rbrack }_{-\infty }^{\infty } - {\int }_{-\infty }^{\infty }\varphi \left( t\right) {H}^{\prime }\left( t\right) {dt}
\]
\[
= \left( {0 - 0}\right) - {\int }_{-\infty }^{\infty }\varphi \left( t\right) \delta \left( t\right) {dt} = - \varphi \left( 0\right) .
\]
This is characteristic of the way in which these generalized functions can be treated: if they occur in an integral together with an "ordinary" function of sufficient regularity, this integral can be treated formally, and the results will be true facts.
One can go further and introduce derivatives of the \( \delta \) functions. What would be, for example, the first derivative of \( \delta = {\delta }_{0} \) ? One way of finding out is by operating formally as in the preceding situation. Let \( \varphi \) be a function in \( {C}^{1} \), and let it be understood that all integrals are taken over an interval that contains 0 in its interior. Since \( \delta \left( t\right) = 0 \) if \( t \neq 0 \), it is reasonable that also \( {\delta }^{\prime }\left( t\right) = 0 \) for \( t \neq 0 \) . Integration by parts gives
\[
{\int }_{a}^{b}{\delta }^{\prime }\left( t\right) \varphi \left( t\right) {dt} = {\left\lbrack \delta \left( t\right) \varphi \left( t\right) \right\rbrack }_{a}^{b} - {\int }_{a}^{b}\delta \left( t\right) {\varphi }^{\prime }\left( t\right) {dt} = \left( {0 - 0}\right) - {\varphi }^{\prime }\left( 0\right) = - {\varphi }^{\prime }\left( 0\right) .
\]
If \( \delta \) itself serves to pick out the value of a function at the origin, the derivative of \( \delta \) can thus be used to find the value at the same place of the derivative of a function.
Another way of seeing \( {\delta }^{\prime } \) is to consider \( \delta \) to be the limit of a differentiable positive summation kernel, and taking the derivative of the kernel. An example is actually given in Exercise 2.20. As in Example 2.8 on page 23, we study the summation kernel
\[
{K}_{n}\left( t\right) = \frac{n}{\sqrt{2\pi }}{e}^{-{n}^{2}{t}^{2}/2}
\]
(which consists in rescaling the normal probability density function). The
derivatives are
\[
{K}_{n}^{\prime }\left( t\right) = - \frac{{n}^{3}t}{\sqrt{2\pi }}{e}^{-{n}^{2}{t}^{2}/2}.
\]

FIGURE 2.2. 
FIGURE 2.3.
These are illustrated in Figure 2.2. The fact that they approach \( {\delta }^{\prime }\left( t\right) \) is proved by integration by parts (which is what Exercise 2.20 is all about).
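The integration-by-parts argument can be imitated numerically: pairing \( {K}_{n}^{\prime } \) with a test function \( \varphi \) should give approximately \( - {\varphi }^{\prime }\left( 0\right) \), the action of \( {\delta }^{\prime } \) on \( \varphi \) . A Python sketch (the step sizes and function name are ours), with \( \varphi \left( t\right) = \sin t \) and hence \( {\varphi }^{\prime }\left( 0\right) = 1 \):

```python
import math

def pair_with_Kn_prime(phi, n, steps=40_000, half_width=1.0):
    """Riemann sum of K_n'(t) * phi(t) over [-half_width, half_width],
    with K_n'(t) = -n^3 t / sqrt(2 pi) * exp(-n^2 t^2 / 2)."""
    dt = 2 * half_width / steps
    total = 0.0
    for k in range(steps):
        t = -half_width + (k + 0.5) * dt
        kn_prime = (-n**3 * t / math.sqrt(2 * math.pi)
                    * math.exp(-(n * t) ** 2 / 2))
        total += kn_prime * phi(t) * dt
    return total

result = pair_with_Kn_prime(math.sin, n=200)
print(result)  # close to -phi'(0) = -1
```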
In the theory of electricity, there occurs a phenomenon known as an electric dipole. This consists of two equal but opposite charges \( \pm q \) at a small distance from each other (see Figure 2.3). If the distance is made smaller and the charges increase in proportion to the inverse of the distance, the "limit object" is an idealized dipole. A mathematical model of this object consists of \( {\delta }^{\prime } \), just as a point charge can be represented by \( \delta \) .
Higher derivatives of \( \delta \) can also be defined. Using integration by parts one finds that the \( n \) th derivative \( {\delta }^{\left( n\right) } \) should act according to the formula
\[
\int {\delta }^{\left( n\right) }\left( t\right) \varphi \left( t\right) {dt} = {\left( -1\right) }^{n}{\varphi }^{\left( n\right) }\left( 0\right)
\]
provided the function \( \varphi \) has an \( n \) th derivative that is continuous at the origin.
## Exercises
2.22 Compute the following integrals (taken over the entire real axis if nothing else is indicated):
(a) \( \int \left( {{t}^{2} + {3t}}\right) \left( {\delta \left( t\right) - \delta \left( {t + 2}\right) }\right) {dt}\; \) (b) \( {\int }_{0}^{\infty }{e}^{-{st}}{\delta }^{\prime }\left( {t - 1}\right) {dt} \)
(c) \( \int {e}^{2t}{\delta }^{\prime }\left( t\right) {dt}\; \) (d) \( {\int }_{0 - }^{\infty }{\delta }^{\left( n\right) }\left( t\right) {e}^{-{st}}{dt} \)
2.23 What should be meant by \( \delta \left( {2t}\right) \), expressed using \( \delta \left( t\right) \) ? Investigate this by manipulating \( \int \varphi \left( t\right) \delta \left( {2t}\right) {dt} \) in a suitable way. Generalize to \( \delta \left( {at}\right), a \neq 0 \) . (The cases \( a > 0 \) and \( a < 0 \) should be considered separately.)
2.24 Rewrite, using Heaviside windows, the expressions \( {f}_{1}\left( t\right) = t\left| {t + 1}\right| ,{f}_{2}\left( t\right) = \) \( {e}^{-\left| t\right| },{f}_{3}\left( t\right) = \operatorname{sgn}t = t/\left| t\right| \left( {t \neq 0}\right) ,{f}_{4}\left( t\right) = A \) if \( t < a, = B \) if \( t > a \) .
## 2.7 *Computing with \( \delta \)
We shall now show how one can solve certain problems involving the \( \delta \) distribution and its derivatives.
The ordinary rules for computing with derivatives will still hold true. (We cannot really prove this at the present stage.) For example, the rule for differentiating a product is valid: \( {\left( fg\right) }^{\prime } = {f}^{\prime }g + f{g}^{\prime } \) .
Example 2.16. If \( \chi \) is a function that is continuous at \( a \), what should be meant by the product \( \chi \left( t\right) {\delta }_{a}\left( t\right) \) ? Since \( {\delta }_{a}\left( t\right) \) is "zero" except at \( t = a \), it can be expected that the values of \( \chi \left( t\right) \) for \( t \neq a \) should not really matter. And we can write as follows:
\[
\int \left( {\chi \left( t\right) {\delta }_{a}\left( t\right) }\right) \varphi \left( t\right) {dt} = \int {\delta }_{a}\left( t\right) \left( {\chi \left( t\right) \varphi \left( t\right) }\right) {dt} = \chi \left( a\right) \varphi \left( a\right) .
\]
Lemma 1.5. Let \( \varphi : A \rightarrow B \) be a morphism in an additive category. Then \( \varphi \) is a monomorphism if and only if \( 0 \rightarrow A \) is its kernel, and \( \varphi \) is an epimorphism if and only if \( B \rightarrow 0 \) is its cokernel.
Compare this statement with Proposition III.6.2.
Proof. Let's do kernels this time.
First assume \( \varphi : A \rightarrow B \) is a monomorphism. If \( \zeta : Z \rightarrow A \) is any morphism such that the composition \( Z \rightarrow A \rightarrow B \) is 0, then \( \zeta \) is 0 by Lemma 1.3, and in particular \( \zeta \) factors (uniquely) through \( 0 \rightarrow A \) . This proves that \( 0 \rightarrow A \) is a kernel of \( \varphi \), as stated.
Conversely, assume that \( 0 \rightarrow A \) is a kernel for \( \varphi : A \rightarrow B \), and let \( \zeta : Z \rightarrow A \) be a morphism such that \( \varphi \circ \zeta = 0 \) . It follows that \( \zeta \) factors through \( 0 \rightarrow A \), since the latter is a kernel for \( \varphi \) :

This implies \( \zeta = 0 \), proving that \( \varphi \) is a monomorphism.
The statement about epimorphisms and cokernels is left to the reader (Exercise 1.9).
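In the concrete category \( \mathrm{{Ab}} \), Lemma 1.5 reduces to the familiar fact that a homomorphism is a monomorphism exactly when its kernel is trivial. A quick finite sanity check in Python (the two maps between cyclic groups are our own examples, not from the text):

```python
def kernel(f, n):
    """Kernel of a homomorphism f : Z_n -> Z_m, f given as a function."""
    return [x for x in range(n) if f(x) == 0]

def is_mono(f, n):
    """In Ab, a map out of a finite group is monic iff it is injective."""
    return len({f(x) for x in range(n)}) == n

# f : Z_6 -> Z_3, x -> x mod 3: kernel {0, 3}, not a monomorphism
f = lambda x: x % 3
print(kernel(f, 6), is_mono(f, 6))  # [0, 3] False

# g : Z_3 -> Z_6, x -> 2x mod 6: trivial kernel, a monomorphism
g = lambda x: (2 * x) % 6
print(kernel(g, 3), is_mono(g, 3))  # [0] True
```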
In view of Lemma 1.5, we should be able to use diagrams
\[
0 \rightarrow A \rightarrow B,\;A \rightarrow B \rightarrow 0
\]
to signal that \( A \rightarrow B \) is a monomorphism, resp., an epimorphism: think ’exact’. However, the fact that kernels and cokernels do not necessarily exist makes talking about exactness problematic in a category that is 'only' additive. This situation will be rectified very soon.
Incidentally, it is common to denote monomorphisms and epimorphisms by suitably decorated arrows; popular choices are \( \rightarrowtail \) and \( \twoheadrightarrow \), respectively.
1.3. Abelian categories. The moral at this point is that if a morphism in an additive category has kernels and cokernels, then these will behave as expected. But kernels and cokernels do not necessarily exist, and this prevents us from going much further. Also, while (as we have seen) kernels are monomorphisms and cokernels are epimorphisms in an additive category, there is no guarantee that monomorphisms should necessarily be kernels and epimorphisms should be cokernels. In the end, we simply demand these additional features explicitly.
Definition 1.6. An additive category A is abelian if kernels and cokernels exist in A; every monomorphism is the kernel of some morphism; and every epimorphism is the cokernel of some morphism.
As mentioned already, \( R \) -Mod is an abelian category, for every ring \( R \) . The prototype of an abelian category is \( \mathrm{{Ab}} \) : this\( {}^{1} \) is why these categories are called abelian.
Since kernels are necessarily monomorphisms (by Lemma 1.4), we see that in an abelian category we can adopt a mantra entirely analogous to the useful ’kernel \( \Leftrightarrow \) submodule’ of [III]5.3 vintage: in abelian categories, the slogan would be ’kernel \( \Leftrightarrow \) monomorphism’ (and similarly for cokernels vs. epimorphisms).
Remark 1.7. Just as it is convenient to think of monomorphisms \( A \rightarrowtail B \) as defining \( A \) as a ’subobject’ of \( B \), it is occasionally convenient to think of epimorphisms as ’quotients’: if \( \varphi : A \rightarrowtail B \) is a monomorphism, we can use \( B/A \) to denote (the target of) \( \operatorname{coker}\varphi \) . We will have no real use for this notation in this section, but it will come in handy later on.
---
\( {}^{1} \) There is nothing particularly ’commutative’ about an abelian category.
---
The very existence of kernels and cokernels links these two notions tightly in an abelian category:
Lemma 1.8. In an abelian category \( \mathrm{A} \), every kernel is the kernel of its cokernel; every cokernel is the cokernel of its kernel.
Proof. I will prove the second half and leave the first half to the reader (Exercise 1.9).
Let \( \varphi : A \rightarrow B \) be the cokernel of some morphism \( Z \rightarrow A \) ; since \( \mathrm{A} \) is abelian, \( \varphi \) has a kernel \( \iota : K \rightarrow A \) . The composition \( Z \rightarrow A \rightarrow B \) is 0, so \( Z \rightarrow A \) factors through \( \iota \) by definition of kernel:
(Diagram: \( Z \rightarrow A\overset{\varphi }{ \rightarrow }B \), with \( Z \rightarrow A \) factoring as \( Z \rightarrow K\overset{\iota }{ \rightarrow }A \).)
Now let \( A \rightarrow C \) be a morphism such that the composition \( K \rightarrow A \rightarrow C \) is the zero-morphism; then so is the composition \( Z \rightarrow A \rightarrow C \) . Therefore \( A \rightarrow C \) factors through a unique morphism \( B \rightarrow C \) ,
(Diagram: \( A \rightarrow C \) factors as \( A\overset{\varphi }{ \rightarrow }B \rightarrow C \).)
since \( \varphi \) is the cokernel of \( Z \rightarrow A \) . But this shows that \( \varphi : A \rightarrow B \) satisfies the property defining the cokernel of its kernel \( K \rightarrow A \), as stated.
Putting together Lemma 1.8 and Lemma 1.5, we can rephrase Definition 1.6 by listing the following requirements on a category A:
- A is additive;
- kernels and cokernels exist in A;
- if \( \varphi : A \rightarrow B \) is a morphism whose kernel is 0, then \( \varphi \) is the kernel of its cokernel;
- if \( \psi : B \rightarrow C \) is a morphism whose cokernel is 0, then \( \psi \) is the cokernel of its kernel.
This is a popular equivalent reformulation of the definition of abelian category. The last two requirements should call to mind the exact sequence
\[
0 \rightarrow A\xrightarrow[]{\varphi }B\xrightarrow[]{\psi }C \rightarrow 0
\]
familiar from the \( R \) -Mod context. The reader can already entertain the sense in which such a sequence can be 'exact' in an abelian category: the third requirement identifies \( A \) (or, rather, \( A \rightarrow B \) ) with the kernel of \( \psi \), and the fourth one identifies \( C \) with the cokernel of \( \varphi \) .
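Since exactness in an abelian category abstracts the familiar situation in \( \mathrm{Ab} \), a concrete sanity check can be run there. The following sketch (my own illustration, not from the text) verifies exactness of \( 0 \rightarrow \mathbb{Z}/2 \rightarrow \mathbb{Z}/4 \rightarrow \mathbb{Z}/2 \rightarrow 0 \), with the inclusion \( x \mapsto {2x} \) and the projection \( x \mapsto x \bmod 2 \):

```python
# A toy "exact sequence" check in Ab, using the finite abelian groups
#   0 -> Z/2 --i--> Z/4 --p--> Z/2 -> 0,  i(x) = 2x mod 4,  p(x) = x mod 2.
Z2 = [0, 1]
Z4 = [0, 1, 2, 3]
i = lambda x: (2 * x) % 4
p = lambda x: x % 2

image_i = {i(x) for x in Z2}             # {0, 2}
kernel_p = {x for x in Z4 if p(x) == 0}  # {0, 2}

# i is a monomorphism (injective), p is an epimorphism (surjective),
# and ker p = im i: the sequence is exact.
assert len(image_i) == len(Z2)
assert {p(x) for x in Z4} == set(Z2)
assert image_i == kernel_p
```

Here \( A = \mathbb{Z}/2 \) is identified with the kernel of \( p \), and \( C = \mathbb{Z}/2 \) with the cokernel of \( i \), exactly as in the two requirements above.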
Now suppose that \( A \rightarrow B \) is sandwiched between zeros:
\[
0 \rightarrow A \rightarrow B \rightarrow 0
\]
in an exact sequence. One of our many Pavlovian reactions would make us want to deduce that \( A \rightarrow B \) is an isomorphism, because this is the case in \( R \) -Mod, and this indeed works in any abelian category:
Lemma 1.9. Let \( \varphi : A \rightarrow B \) be a morphism in an abelian category \( \mathrm{A} \), and assume that \( \varphi \) is both a monomorphism and an epimorphism. Then \( \varphi \) is an isomorphism.
Remark 1.10. This fact does not hold in general additive categories. In fact, the reader will encounter in Exercise 3.4 an example of an additive category with kernels and cokernels (but nevertheless not abelian) and with morphisms that are both monomorphisms and epimorphisms, without being isomorphisms.
Proof. By Lemma 1.5 the kernel of \( \varphi \) is \( 0 \rightarrow A \), since \( \varphi \) is a monomorphism. Similarly, \( B \rightarrow 0 \) is a cokernel of \( \varphi \) . Further, \( \varphi \) is the cokernel of \( 0 \rightarrow A \) and the kernel of \( B \rightarrow 0 \), by Lemma 1.8 .
Now consider the identity \( B \rightarrow B \) :
(Diagram: \( K \rightarrow A\overset{\varphi }{ \rightarrow }B \rightarrow 0 \) together with \( {\operatorname{id}}_{B} : B \rightarrow B \).)
Since \( B \rightarrow B \rightarrow 0 \) is (trivially) the zero morphism and \( \varphi \) is the kernel of \( B \rightarrow 0 \) , we obtain a unique morphism \( \psi : B \rightarrow A \) making the diagram commute:
(Diagram: \( \psi : B \rightarrow A \) with \( \varphi \psi = {\operatorname{id}}_{B} \).)
As \( {\varphi \psi } = {\operatorname{id}}_{B} \), this shows that \( \varphi \) has a right-inverse. Similarly, consider the identity \( A \rightarrow A \), as follows:
(Diagram: \( 0 \rightarrow A\overset{\varphi }{ \rightarrow }B \) together with \( {\operatorname{id}}_{A} : A \rightarrow A \).)
The composition \( 0 \rightarrow A \rightarrow A \) is the zero morphism, and \( \varphi \) is the cokernel of \( 0 \rightarrow A \) , so we have a unique morphism \( \eta : B \rightarrow A \) making this diagram commute:
(Diagram: \( \eta : B \rightarrow A \) with \( \eta \varphi = {\operatorname{id}}_{A} \).)
This says \( {\eta \varphi } = {\operatorname{id}}_{A} \), so \( \varphi \) has a left-inverse as well.
Thus, \( \varphi \) has both a left-inverse \( \eta \) and a right-inverse \( \psi \) . Since \( \eta = \eta \left( {\varphi \psi }\right) = \left( {\eta \varphi }\right) \psi = \psi \), it follows that \( \eta = \psi \) is a two-sided inverse of \( \varphi \) and that \( \varphi \) is an isomorphism as promised (cf. Proposition II.2).
The reader should note the arrow-theoretic nature of this argument. In the category \( R \) -Mod we could have given the following (possibly) simpler argument: by Proposition III.6.2, monomorphisms are injective and epimorphisms are surjective; therefore \( \varphi \) is bijective, and bijective homomorphisms are isomorphisms by Exercise III.5.12. Such set-theoretic arguments are not an option in an arbitrary abelian category (at least until we develop the material of §2), since the objects of an abelian category are not given as ’sets’. Judicious use of appropriate universal properties accomplishes the same goal and may be argued to convey a ’deeper’ sense of why the proven statement is true.
1.4. Products, coproducts, and direct sums. I will denote\( {}^{2} \) the product of two objects \( A, B \) of an abelian category A by \( A \times B \), and I will denote their coproduct by \( A \coprod B \) . Both exist, since \( \mathrm{A} \) is additive.
The presence of these objects, and of kernels and cokernels, gives us access to other interesting constructions.
Example 1.11. For instance, fibered products (or 'pull-backs') exist in any abelian category, just as in \( R \) -Mod (cf. Exercise III.6.10). Consider a diagram
Lemma 1.5. Let \( \varphi : A \rightarrow B \) be a morphism in an additive category. Then \( \varphi \) is a monomorphism if and only if \( 0 \rightarrow A \) is its kernel, and \( \varphi \) is an epimorphism if and only if \( B \rightarrow 0 \) is its cokernel.
Proof. Let's do kernels this time.
First assume \( \varphi : A \rightarrow B \) is a monomorphism. If \( \zeta : Z \rightarrow A \) is any morphism such that the composition \( Z \rightarrow A \rightarrow B \) is 0, then \( \zeta \) is 0 by Lemma 1.3, and in particular \( \zeta \) factors (uniquely) through \( 0 \rightarrow A \). This proves that \( 0 \rightarrow A \) is a kernel of \( \varphi \), as stated.
Conversely, assume that \( 0 \rightarrow A \) is a kernel for \( \varphi : A \rightarrow B \), and let \( \zeta : Z \rightarrow A \) be a morphism such that \( \varphi \circ \zeta = 0 \). It follows that \( \zeta \) factors through \( 0 \rightarrow A \), since the latter is a kernel for \( \varphi \):
(Diagram: \( \zeta \) factors as \( Z \rightarrow 0 \rightarrow A \).)
This implies \( \zeta = 0 \), proving that \( \varphi \) is a monomorphism.
The statement about epimorphisms and cokernels is left to the reader (Exercise 1.9).
Theorem 6.94 (Sum rule for limiting directional subdifferentials). Let \( X \) be a WCG space, and let \( f = {f}_{1} + \cdots + {f}_{k} \), where \( {f}_{1},\ldots ,{f}_{k} \in \mathcal{L}\left( X\right) \) . Then
\[
{\partial }_{\ell }f\left( \bar{x}\right) \subset {\partial }_{\ell }{f}_{1}\left( \bar{x}\right) + \cdots + {\partial }_{\ell }{f}_{k}\left( \bar{x}\right) .
\]
(6.70)
Proof. We know that \( X \) is H-smooth. Let \( {\bar{x}}^{ * } \in {\partial }_{\ell }f\left( \bar{x}\right) \), and let \( \left( {x}_{n}\right) \rightarrow \bar{x},\left( {x}_{n}^{ * }\right) \overset{ * }{ \rightarrow }{\bar{x}}^{ * } \) with \( {x}_{n}^{ * } \in {\partial }_{H}f\left( {x}_{n}\right) \) for all \( n \) . Given a weak* closed neighborhood \( V \) of 0 in \( {X}^{ * } \) and a sequence \( \left( {\varepsilon }_{n}\right) \rightarrow {0}_{ + } \), by the fuzzy sum rule for Hadamard subdifferentials (Theorem 4.69) there are sequences \( \left( \left( {{x}_{i, n},{x}_{i, n}^{ * }}\right) \right) \in {\partial }_{H}{f}_{i} \), for \( i \in {\mathbb{N}}_{k} \), such that \( d\left( {{x}_{i, n},{x}_{n}}\right) \leq {\varepsilon }_{n},{x}_{n}^{ * } \in {x}_{1, n}^{ * } + \cdots + {x}_{k, n}^{ * } + V \) . Since for some \( r > 0 \) one has \( {x}_{i, n}^{ * } \in r{B}_{{X}^{ * }} \) for all \( \left( {i, n}\right) \in {\mathbb{N}}_{k} \times \mathbb{N} \), one can find \( {y}_{i}^{ * } \in {\partial }_{\ell }{f}_{i}\left( \bar{x}\right) \) such that \( \left( {x}_{i, n}^{ * }\right) { \rightarrow }^{ * }{y}_{i}^{ * } \) for \( i \in {\mathbb{N}}_{k} \) and \( {\bar{x}}^{ * } \in {y}_{1}^{ * } + \cdots + {y}_{k}^{ * } + V \) . Since \( S \mathrel{\text{:=}} {\partial }_{\ell }{f}_{1}\left( \bar{x}\right) + \cdots + {\partial }_{\ell }{f}_{k}\left( \bar{x}\right) \) is weak* compact and \( {\bar{x}}^{ * } \in S + V \) for every weak \( {}^{ * } \) closed neighborhood \( V \) of 0, one gets \( {\bar{x}}^{ * } \in S \) .
Theorem 6.95 (Chain rule for limiting directional subdifferentials). Let \( X \) and \( Y \) be WCG spaces, let \( g : X \rightarrow Y \) be a map that is Lipschitzian around \( \bar{x} \in X \), and let \( h : Y \rightarrow {\mathbb{R}}_{\infty } \) be Lipschitzian around \( \bar{y} \mathrel{\text{:=}} g\left( \bar{x}\right) \) . Then
\[
{\partial }_{\ell }\left( {h \circ g}\right) \left( \bar{x}\right) \subset {D}_{\ell }^{ * }g\left( \bar{x}\right) \left( {{\partial }_{\ell }h\left( \bar{y}\right) }\right) .
\]
Proof. Let \( f \mathrel{\text{:=}} h \circ g \) and let \( {\bar{x}}^{ * } \in {\partial }_{\ell }f\left( \bar{x}\right) \) . Let \( r \) (resp. \( s \) ) be the Lipschitz rate of \( f \) (resp. \( h \) ) on a neighborhood of \( \bar{x} \) (resp. \( \bar{y} \) ) and let \( G \) be the graph of \( g \) . Then, by the penalization lemma, for \( x \) near \( \bar{x} \) one has
\[
f\left( x\right) = \inf \{ f\left( w\right) + r\parallel w - x\parallel : w \in X\} = \inf \{ h\left( y\right) + r\parallel w - x\parallel : \left( {w, y}\right) \in G\}
\]
\[
= \inf \left\{ {h\left( y\right) + r\parallel w - x\parallel + \left( {r + s}\right) {d}_{G}\left( {w, y}\right) : \left( {w, y}\right) \in X \times Y}\right\} .
\]
Let \( j : X \times X \times Y \rightarrow \mathbb{R} \) be defined by \( j\left( {x, w, y}\right) \mathrel{\text{:=}} h\left( y\right) + r\parallel x - w\parallel + \left( {r + s}\right) {d}_{G}\left( {w, y}\right) \) . For every sequence \( \left( {x}_{n}\right) \rightarrow \bar{x} \), one has \( \left( \left( {{x}_{n},{x}_{n}, g\left( {x}_{n}\right) }\right) \right) \rightarrow \left( {\bar{x},\bar{x},\bar{y}}\right) \) and \( j\left( {{x}_{n},{x}_{n}, g\left( {x}_{n}\right) }\right) \) \( = f\left( {x}_{n}\right) \) for all \( n \) . Proposition 6.92 implies that \( \left( {{\bar{x}}^{ * },0,0}\right) \in {\partial }_{\ell }j\left( {\bar{x},\bar{x},\bar{y}}\right) \) . Theorem 6.94 yields \( \left( {{\bar{w}}^{ * },{\bar{v}}^{ * }}\right) \in {\partial }_{\ell }{d}_{G}\left( {\bar{x},\bar{y}}\right) ,{\bar{y}}^{ * } \in {\partial }_{\ell }h\left( \bar{y}\right) ,{\bar{z}}^{ * } \in {B}_{{X}^{ * }} \) such that
\[
\left( {{\bar{x}}^{ * },0,0}\right) = \left( {0,0,{\bar{y}}^{ * }}\right) + r\left( {{\bar{z}}^{ * }, - {\bar{z}}^{ * },0}\right) + \left( {r + s}\right) \left( {0,{\bar{w}}^{ * },{\bar{v}}^{ * }}\right) .
\]
Then one has \( {\bar{x}}^{ * } = r{\bar{z}}^{ * } = \left( {r + s}\right) {\bar{w}}^{ * } \in \left( {r + s}\right) {D}_{\ell }^{ * }g\left( \bar{x}\right) \left( {-{\bar{v}}^{ * }}\right) = {D}_{\ell }^{ * }g\left( \bar{x}\right) \left( {\bar{y}}^{ * }\right) \) .
## Exercises
1. Show that for \( f, g \in \mathcal{L}\left( X\right) \) the inclusion \( {\partial }_{\ell }\left( {f + g}\right) \left( \bar{x}\right) \subset {\partial }_{\ell }f\left( \bar{x}\right) + {\partial }_{\ell }g\left( \bar{x}\right) \) may be strict. [Hint: Take \( X \mathrel{\text{:=}} \mathbb{R}, f \mathrel{\text{:=}} \left| \cdot \right|, g \mathrel{\text{:=}} - \left| \cdot \right| \) .]
2. Deduce from the preceding exercise that inclusion (6.68) may be strict. [Hint: Take \( m = 2 \) and \( h \) given by \( h\left( {{y}_{1},{y}_{2}}\right) = {y}_{1} + {y}_{2} \) .]
3. (a) Declare that a subset \( S \) of a normed space \( X \) is directionally normally compact at \( \bar{x} \in S \) if for all sequences \( \left( {x}_{n}\right) { \rightarrow }_{S}\bar{x},\left( {x}_{n}^{ * }\right) \overset{ * }{ \rightarrow }0 \) with \( {x}_{n}^{ * } \in {N}_{D}\left( {S,{x}_{n}}\right) \) for all \( n \in \mathbb{N} \) , one has \( \left( {x}_{n}^{ * }\right) \rightarrow 0 \) . Compare this property with normal compactness at \( \bar{x} \) .
(b) Prove that \( S \) is directionally normally compact at \( \bar{x} \in S \) if and only if every sequence \( \left( {x}_{n}^{ * }\right) \) in \( {S}_{{X}^{ * }} \) has a nonnull weak* cluster value whenever, for some sequence \( \left( {x}_{n}\right) { \rightarrow }_{S}\bar{x} \), it satisfies \( {x}_{n}^{ * } \in {N}_{D}\left( {S,{x}_{n}}\right) \) for all \( n \in \mathbb{N} \) .
(c) Give criteria for directional normal compactness.
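Exercise 1's hint can be made concrete. The sets below are the standard limiting subdifferentials at \( 0 \) for \( f = \left| \cdot \right| \) and \( g = - \left| \cdot \right| \) on \( \mathbb{R} \), computed by hand and hardcoded here as assumptions (with \( \left\lbrack {-1,1}\right\rbrack \) sampled on a grid); the check exhibits the strictness of the inclusion:

```python
import itertools

# Hand-computed limiting subdifferentials at 0 (stated as assumptions):
#   d_l f(0) = [-1, 1],   d_l g(0) = {-1, 1},   d_l (f+g)(0) = d_l 0(0) = {0}
grid = [i / 100 for i in range(-100, 101)]
sub_f = set(grid)            # samples of [-1, 1]
sub_g = {-1.0, 1.0}
sub_sum = {0.0}

# Minkowski sum d_l f(0) + d_l g(0), here [-2, 2] (sampled)
minkowski = {a + b for a, b in itertools.product(sub_f, sub_g)}

# the sum-rule inclusion holds ...
assert sub_sum <= minkowski
# ... and is strict: e.g. 2 = 1 + 1 lies only in the right-hand side
assert 2.0 in minkowski and 2.0 not in sub_sum
```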
## 6.7 Notes and Remarks
Most of the ingredients of the first five sections of this chapter are inspired by pioneering notes and papers by Kruger [594-599], the numerous papers of Mordukhovich starting with \( \left\lbrack {{713},{714}}\right\rbrack \), and his monograph \( \left\lbrack {718}\right\rbrack \) . As concerns calculus rules, we rely on the idea in \( \left\lbrack {{547},{658},{806},{813}}\right\rbrack \) that a good collective behavior gives better results than the assumption that most factors are nice (or one factor in the case of a pair). The reader may find difficulties in this chapter due to the fact that it presents several versions of this idea of good collective behavior. Thus, it may be advisable to skip all but one of them on a first reading, for instance alliedness. On the other hand, the reader may notice that almost all results could be deduced from results pertaining to multimaps. Such a direct route would make the presentation much shorter. However, we have preferred a slower pace that starts with sets rather than multimaps. In such a way, our starting point is less complex, and the reader who just needs a result about sets has simpler access to it.
The basic idea of passing to the limit enables one to gather precious information about the behavior of the function or the set around the specific point of interest. The accuracy of the elementary normal cones and subdifferentials may be lost, but to a lesser extent than with Clarke's notions. This advantage is due to the fact that no automatic convexification occurs. On the other hand, in such an approach, one cannot expect the beautiful duality relationships of Clarke's concepts. The idea of taking limits appeared as early as 1976 in Mordukhovich's pioneer paper [713] and in his book [716], where it is used for the needs of optimal control theory. The reader will find interesting developments about the history of the birth of limiting concepts in [531] and in the commentary of Chap. 1 of the monograph [718].
A decisive advantage of the notions of the present chapter lies in the calculus rules that are precise inclusions rather than approximate rules, at least under some qualification conditions. These rules and constructs are particularly striking in finite-dimensional spaces. Such rules for the case in which all the functions but one are Lipschitzian were first proved in the mimeographed paper [513] and announced in [512]. The finite-dimensional calculus for lower semicontinuous functions was first presented in [520]. The qualification conditions there were more restrictive than those announced in the note [715], but the proofs with the latter are identical to those with Ioffe's qualification conditions. Similar conditions appeared in Rockafellar's paper [879]. The calculus in the infinite-dimensional case was first presented in Kruger's paper [595]. The Asplund space theory appeared only in 1996 [721, 722], although the possibility of an extension to Asplund spaces was indicated by Fabian \( \left\lbrack {{362},{363},{373}}\right\rbrack \), who was the first to apply separable reduction but did not work with limiting constructions.
Normal compactness of a subset appeared in [804] in the convex case and in [807] in the nonconvex case. Its discovery was influenced by some methods of Brézis [170] and Browder [176] in nonlinear functional analysis and by the general views of [787] about compactness properties. The work of Loewen [671] was also decisive. The latter elaborated upon the notion of compactly epi-Lipschitzian set due to Borwein and Strojwas [131], which generalizes the notion of epi-Lipschitzian set as explained in Sect. 6.2. The latter notion was introduced by Rockafellar \( \left\lbrack {{874},{875}}\right\rbrack \) as a convenient qualification condition. Comprehensive characterizations of compactly epi-Lipschitzian closed convex
Theorem 7. Mazur's Theorem. The closed convex hull of a totally bounded set in a complete locally convex linear topological space is compact.
Proof. Let \( K \) be such a set in such a space. By the preceding lemma, \( \operatorname{co}\left( K\right) \) is totally bounded. Hence \( \overline{\mathrm{{co}}}\left( K\right) \) is closed and totally bounded. Since the ambient space is complete, \( \overline{\mathrm{{co}}}\left( K\right) \) is complete and totally bounded. Hence, by Theorem 6, it is compact.
## 7.8 Analytic Pitfalls
The purpose of this section is to frighten (or amuse) the reader by exhibiting some examples where erroneous conclusions are reached through an analysis that seems at first glance to be sound. In every case, however, some theorem pertinent to the situation has been overlooked. The relevant theorems are all quoted somewhere in this section or elsewhere in the book. Proofs or references are given for each of them. A connecting thread for many of these examples is the question of whether interchanging the order of two limit processes is justified. We begin with some matters from the subject of Calculus.
Here is an elementary example to show what can go wrong:
\[
\mathop{\lim }\limits_{{x \rightarrow 0}}\mathop{\lim }\limits_{{y \rightarrow 0}}\frac{x - y}{x + y} = \mathop{\lim }\limits_{{x \rightarrow 0}}\frac{x}{x} = 1
\]
\[
\mathop{\lim }\limits_{{y \rightarrow 0}}\mathop{\lim }\limits_{{x \rightarrow 0}}\frac{x - y}{x + y} = \mathop{\lim }\limits_{{y \rightarrow 0}}\frac{-y}{y} = - 1
\]
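The failure can be reproduced numerically. In this sketch the "limits" are simply approximated by evaluating at small parameters (the particular values are illustrative choices, not from the text):

```python
def f(x, y):
    # the function whose iterated limits at (0, 0) disagree
    return (x - y) / (x + y)

eps = 1e-9  # the "inner" variable is taken much smaller than the "outer" one

y_then_x = f(1e-3, eps)   # y -> 0 first, then x small: close to +1
x_then_y = f(eps, 1e-3)   # x -> 0 first, then y small: close to -1

assert abs(y_then_x - 1.0) < 1e-3
assert abs(x_then_y + 1.0) < 1e-3
```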
A theorem governing this situation (and many others) is E.H. Moore's theorem, proved later.
It is natural to think that if a function is defined by a series of analytic functions, then the resulting function should be continuous, continuously differentiable, and so on. (This was a commonly held view until the mid-1850s.) For example, the series
(1)
\[
f\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{1}{{2}^{n}}\cos \left( {{3}^{n}x}\right)
\]
consists of analytic terms, and the function \( f \) should be a "nice" one. We think that the function defined by the series should inherit the good properties of the terms in the series. Indeed, in this example, \( f \) is continuous, by the Weierstrass \( M \) -Test. This test, or theorem, goes as follows.
Theorem 1. (Weierstrass \( M \) -Test.) If the functions \( {g}_{n} \) are continuous on a compact Hausdorff space \( X \) and if
(2)
\[
\left| {{g}_{n}\left( x\right) }\right| \leq {M}_{n}\;\left( {\text{ for all }x \in X}\right) \;\text{ and }\;\mathop{\sum }\limits_{{n = 1}}^{\infty }{M}_{n} < \infty
\]
then the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{g}_{n}\left( x\right) \) converges uniformly on \( X \) and defines a function that is continuous on \( X \) .
The hypotheses in display (2) constitute the " \( M \) -Test." In modern notation, we could write instead \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\begin{Vmatrix}{g}_{n}\end{Vmatrix}}_{\infty } < \infty \) . In the example of Equation (1), one can set \( {g}_{n}\left( x\right) = {2}^{-n}\cos \left( {{3}^{n}x}\right) \) and see immediately that the constants \( {M}_{n} = {2}^{-n} \) serve in Weierstrass's Theorem.
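The uniform tail bound behind the \( M \) -Test can be observed directly on the series (1): the gap between two partial sums is at most the tail \( \mathop{\sum }\limits_{{n > N}}{2}^{-n} = {2}^{-N} \), at every \( x \) . A small numerical sketch (sample points chosen arbitrarily):

```python
import math

def partial_sum(x, N):
    # S_N(x) for f(x) = sum_{n>=1} 2^{-n} cos(3^n x)
    return sum(2.0**-n * math.cos(3.0**n * x) for n in range(1, N + 1))

# |S_30(x) - S_20(x)| <= sum_{n=21}^{30} 2^{-n} < 2^{-20}, uniformly in x
for x in [0.0, 0.5, 1.7]:
    assert abs(partial_sum(x, 30) - partial_sum(x, 20)) <= 2.0**-20
```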
The Weierstrass \( M \) -Test gives us some hypotheses under which we can interchange two limits:
\[
\mathop{\lim }\limits_{{h \rightarrow 0}}\mathop{\lim }\limits_{{m \rightarrow \infty }}\mathop{\sum }\limits_{{n = 1}}^{m}{g}_{n}\left( {x + h}\right) = \mathop{\lim }\limits_{{m \rightarrow \infty }}\mathop{\lim }\limits_{{h \rightarrow 0}}\mathop{\sum }\limits_{{n = 1}}^{m}{g}_{n}\left( {x + h}\right)
\]
Returning to the function \( f \) in Equation (1), we propose to compute \( {f}^{\prime } \) by differentiating term by term in the series, getting
\[
{f}^{\prime }\left( x\right) = - \mathop{\sum }\limits_{{n = 1}}^{\infty }{3}^{n}{2}^{-n}\sin \left( {{3}^{n}x}\right)
\]
But here there is an alarming difference, as the factors \( {3}^{n}{2}^{-n} \) are growing, not shrinking. The very convergence of the series is questionable.
This example, \( f \), is the famous Non-Differentiable Function of Weierstrass. It is not differentiable at any point whatsoever! A detailed proof can be found in [Ti2] or [Ch]. A sketch showing a partial sum of the series is in Figure 7.2.

Figure 7.2 A partial sum in the non-differentiable function
When we take more terms and blow up the picture, we see more or less the same behavior, which reminds us of fractals. See Figure 7.3, where a magnification factor of about 15 has been used.

Figure 7.3 Another partial sum, magnified
Now for the positive side of this question concerning differentiating a series term by term: A classical theorem that can be found, for example, in [Wi] is as follows.
Theorem 2. If the functions \( {g}_{n} \) are continuously differentiable on a closed and bounded interval, if the series \( \mathop{\sum }\limits_{n}{g}_{n}\left( x\right) \) converges on that interval, and if the series \( \mathop{\sum }\limits_{n}{g}_{n}^{\prime }\left( x\right) \) converges uniformly on that interval, then \( {\left( \mathop{\sum }\limits_{n}{g}_{n}\right) }^{\prime } = \mathop{\sum }\limits_{n}{g}_{n}^{\prime } \) .
Since differentiation involves a limiting process, the theorem just quoted is again providing hypotheses to justify the interchange of two limits.
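For a case where Theorem 2 does apply, take \( {g}_{n}\left( x\right) = \sin \left( {nx}\right) /{n}^{3} \), so that \( \mathop{\sum }\limits_{n}{g}_{n}^{\prime }\left( x\right) = \mathop{\sum }\limits_{n}\cos \left( {nx}\right) /{n}^{2} \) converges uniformly by the \( M \) -Test with \( {M}_{n} = 1/{n}^{2} \) . A numerical sketch (truncation level and evaluation point are illustrative choices):

```python
import math

N = 2000  # truncation level for the partial sums

def F(x):
    # partial sum of sum g_n, with g_n(x) = sin(nx)/n^3
    return sum(math.sin(n * x) / n**3 for n in range(1, N + 1))

def Fprime(x):
    # partial sum of the differentiated series, sum cos(nx)/n^2
    return sum(math.cos(n * x) / n**2 for n in range(1, N + 1))

# a central difference of F agrees with the differentiated series
h, x = 1e-5, 0.7
assert abs((F(x + h) - F(x - h)) / (2 * h) - Fprime(x)) < 1e-4
```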
What can be said, in general, to legitimate interchanging limits? A famous theorem of Eliakim Hastings Moore gives one possible answer to this question.
Theorem 3. Let \( f : \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{R} \) . Assume that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}f\left( {n, m}\right) \) exists for each \( m \) and that \( \mathop{\lim }\limits_{{m \rightarrow \infty }}f\left( {n, m}\right) \) exists for each \( n \), uniformly in \( n \) . Then the two limits \( \mathop{\lim }\limits_{n}\mathop{\lim }\limits_{m}f\left( {n, m}\right) \) and \( \mathop{\lim }\limits_{m}\mathop{\lim }\limits_{n}f\left( {n, m}\right) \) exist and are equal.
Proof. Define \( g\left( m\right) = \mathop{\lim }\limits_{n}f\left( {n, m}\right) \) and \( h\left( n\right) = \mathop{\lim }\limits_{m}f\left( {n, m}\right) \) . Let \( \varepsilon > 0 \) . Find a positive integer \( M \) such that
\[
m \geq M \Rightarrow \left| {f\left( {n, m}\right) - h\left( n\right) }\right| < \varepsilon \;\text{ for all }n
\]
Notice that the uniformity hypothesis is being used at this step. A consequence is that \( \left| {f\left( {n, M}\right) - h\left( n\right) }\right| < \varepsilon \), and by the triangle inequality \( \left| {f\left( {n, m}\right) - f\left( {n, M}\right) }\right| < {2\varepsilon } \) when \( m \geq M \) . Find \( N \) such that
\[
n \geq N \Rightarrow \left| {f\left( {n, M}\right) - g\left( M\right) }\right| < \varepsilon
\]
No uniformity of the limit in \( m \) is needed here, as \( M \) has been fixed. Now we have \( \left| {f\left( {N, M}\right) - g\left( M\right) }\right| < \varepsilon \) and \( \left| {f\left( {N, M}\right) - f\left( {n, M}\right) }\right| < {2\varepsilon } \) when \( n \geq N \) . We next conclude that \( \left| {f\left( {n, m}\right) - f\left( {N, M}\right) }\right| < {4\varepsilon } \) when \( n \geq N \) and \( m \geq M \) . This establishes that the doubly indexed sequence \( f\left( {n, m}\right) \) has the Cauchy property. By the completeness of \( \mathbb{R} \), the limit \( \mathop{\lim }\limits_{{\left( {n, m}\right) \rightarrow \left( {\infty ,\infty }\right) }}f\left( {n, m}\right) \) exists. Call it \( L \) . Then,
by letting \( \left( {n, m}\right) \) go to its limit, we conclude that \( \left| {L - f\left( {N, M}\right) }\right| \leq {4\varepsilon } \) . Also, \( \left| {L - f\left( {n, m}\right) }\right| < {8\varepsilon } \) if \( n \geq N \) and \( m \geq M \) . Letting \( n \) go to its limit, we get \( \left| {L - g\left( m\right) }\right| < {8\varepsilon } \) if \( m \geq M \) . By letting \( m \) go to its limit, we get \( \left| {L - h\left( n\right) }\right| \leq {8\varepsilon } \) if \( n \geq N \) . Hence \( h\left( n\right) \rightarrow L \) and \( g\left( m\right) \rightarrow L \) .
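The uniformity hypothesis in Moore's theorem cannot be dropped. The two doubly indexed sequences below (illustrative choices of mine) show the contrast: for \( f\left( {n, m}\right) = 1/n + 1/m \) the limit in \( m \) is uniform in \( n \) and both iterated limits are 0, while for \( f\left( {n, m}\right) = n/\left( {n + m}\right) \) uniformity fails and the iterated limits are 1 and 0.

```python
def good(n, m):
    # lim_m good(n, m) = 1/n, uniformly in n; both iterated limits are 0
    return 1.0 / n + 1.0 / m

def bad(n, m):
    # lim_n bad(n, m) = 1 for each m, lim_m bad(n, m) = 0 for each n,
    # but the convergence in m is not uniform in n
    return n / (n + m)

BIG = 10**8  # "limit" approximated by evaluating at a very large index
assert abs(good(BIG, 10**4)) < 1e-3 and abs(good(10**4, BIG)) < 1e-3
assert abs(bad(BIG, 10**4) - 1.0) < 1e-3   # lim_m lim_n = 1
assert abs(bad(10**4, BIG) - 0.0) < 1e-3   # lim_n lim_m = 0
```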
Moore's theorem is actually more general: The range space can be any complete metric space, and the sequences can be replaced by "generalized" sequences ("nets"). See [DS], page 28. The reader will find it a pleasant exercise in the use of these concepts to carry out the proof in the more general case.
Another case in which the interchange of limits creates difficulties is presented next in the form of a problem.
Problem. Let \( U \) be an orthonormal sequence in a Hilbert space, say \( U = \) \( \left\{ {{u}_{1},{u}_{2},\ldots }\right\} \) . Is it true that each point in the closed convex hull of \( U \) is representable as an infinite series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{u}_{n} \), in which \( {a}_{n} \geq 0 \) and \( \sum {a}_{n} = 1 \) ?
At first, this seems to be almost obvious: We are simply allowing an "infinite" convex combination of elements from \( U \) in order to represent points in the closure of the convex hull of \( U \) . A proof might proceed as follows. (Here we use "co" for the convex hull and \( \overline{\mathrm{{co}}} \) for the closed convex hull.) Suppose that \( x \in \overline{\mathrm{{co}}}\left( U\right) \) . Then there exists a sequence \( {x}_{n} \in \operatorname{co}\left( U\right) \) such that \( {x}_{n} \rightarrow x \) . With no loss of generality, we may suppose that
\[
{x}_{n} = \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ni}{u}_{i}\;\text{ where }\;{a}_{ni} \geq 0\text{ and }\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ni} = 1\text{ for all }n
\]
Letting \( n \) tend to \( \infty \), we arrive at \( x = \mathop{\sum }\limits_{{i = 1}}^{\infty }{a}_{i}{u}_{i} \), with \( {a}_{i} \) a limit of the \( {a}_{ni} \) .
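The subtlety in this problem can be seen concretely. The averages \( {x}_{n} = \left( {{u}_{1} + \cdots + {u}_{n}}\right) /n \) satisfy \( {\begin{Vmatrix}{x}_{n}\end{Vmatrix}}^{2} = 1/n \), so \( 0 \in \overline{\mathrm{co}}\left( U\right) \) ; yet \( 0 = \mathop{\sum }\limits_{i}{a}_{i}{u}_{i} \) with \( {a}_{i} \geq 0 \) and \( \mathop{\sum }\limits_{i}{a}_{i} = 1 \) is impossible, since by orthonormality \( {\begin{Vmatrix}\sum {a}_{i}{u}_{i}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{i}{a}_{i}^{2} > 0 \) . A coordinate sketch of this observation (my own illustration, not from the text):

```python
import math

def norm_of_average(n):
    # x_n = (u_1 + ... + u_n)/n has coordinates (1/n, ..., 1/n) in u_1, ..., u_n
    coords = [1.0 / n] * n
    return math.sqrt(sum(c * c for c in coords))

# ||x_n|| = 1/sqrt(n) -> 0, so 0 is in the closed convex hull of U
assert abs(norm_of_average(100) - 0.1) < 1e-12
assert norm_of_average(10**4) < 0.011

# but any convex coefficients give a vector of strictly positive norm
a = [0.5, 0.25, 0.25]
assert abs(sum(a) - 1.0) < 1e-12 and sum(t * t for t in a) > 0
```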
Proposition 9.6.8. For \( x \notin {\mathbb{Z}}_{ \leq 0} \) define
\[
\psi \left( x\right) = - \mathop{\lim }\limits_{{N \rightarrow \infty }}\left( {\mathop{\sum }\limits_{{m = 0}}^{N}\frac{1}{m + x} - \log \left( {N + x}\right) }\right) .
\]
(1) For \( k \geq 1 \) we have
\[
\mathop{\sum }\limits_{{m = 0}}^{N}\frac{1}{m + x} = - \psi \left( x\right) + \log \left( {N + x}\right) - \mathop{\sum }\limits_{{j = 1}}^{k}\frac{{B}_{j}}{j{\left( N + x\right) }^{j}} + {R}_{k}\left( {-1, x, N}\right) ,
\]
where
\[
{R}_{k}\left( {-1, x, N}\right) = {\int }_{N}^{\infty }\frac{{B}_{k}\left( {\{ t\} }\right) }{{\left( t + x\right) }^{k + 1}}{dt}
\]
and \( \left| {{R}_{k}\left( {-1, x, N}\right) }\right| \leq \left| {{B}_{k + 2}/\left( {\left( {k + 2}\right) {\left( N + x\right) }^{k + 2}}\right) }\right| \) when \( k \) is even.
(2) For \( k \geq 1 \) we have
\[
\psi \left( x\right) = \log \left( x\right) - \frac{1}{2x} - \mathop{\sum }\limits_{{j = 2}}^{k}\frac{{B}_{j}}{j{x}^{j}} + {\int }_{0}^{\infty }\frac{{B}_{k}\left( {\{ t\} }\right) }{{\left( t + x\right) }^{k + 1}}{dt}.
\]
(3) We have \( \mathop{\lim }\limits_{{s \rightarrow 1}}\left( {\zeta \left( {s, x}\right) - 1/\left( {s - 1}\right) }\right) = - \psi \left( x\right) \), in other words
\[
\zeta \left( {s, x}\right) = \frac{1}{s - 1} - \psi \left( x\right) + O\left( {s - 1}\right) .
\]
We will study below the properties of the function \( \psi \left( x\right) \), and in particular we will see that \( \psi \left( x\right) = {\Gamma }^{\prime }\left( x\right) /\Gamma \left( x\right) \) is the logarithmic derivative of the gamma function; see Definition 9.6.13. Indeed, since we will define the gamma function by \( \log \left( {\Gamma \left( x\right) }\right) = \frac{\partial \zeta }{\partial s}\left( {0, x}\right) - \frac{\partial \zeta }{\partial s}\left( {0,1}\right) \) and since \( \zeta \left( {s, x}\right) \) is meromorphic in \( s \) , around \( s = 1 \) we have
\[
\zeta \left( {s - 1, x}\right) = \zeta \left( {0, x}\right) + \left( {s - 1}\right) \left( {\log \left( {\Gamma \left( x\right) }\right) + {\zeta }^{\prime }\left( 0\right) }\right) + \cdots
\]
\[
= 1/2 - x + \left( {s - 1}\right) \left( {\log \left( {\Gamma \left( x\right) }\right) + {\zeta }^{\prime }\left( 0\right) }\right) + \cdots
\]
(using \( \zeta \left( {0, x}\right) = 1/2 - x \), which is immediate from Proposition 9.6.7), so that by Proposition 9.6.2,
\[
- \left( {s - 1}\right) \zeta \left( {s, x}\right) = \frac{\partial \zeta }{\partial x}\left( {s - 1, x}\right) = - 1 + \left( {s - 1}\right) \psi \left( x\right) + \cdots ,
\]
as claimed in the proposition.
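The two descriptions of \( \psi \left( x\right) \) above can be compared numerically: the defining limit (accelerated by the \( j = 1 \) term \( 1/\left( {2\left( {N + x}\right) }\right) \) of the expansion in (1)) against the asymptotic expansion (2) truncated at \( k = 6 \) . The truncation levels and tolerances below are illustrative choices.

```python
import math

def psi_limit(x, N=10**5):
    # psi(x) from the defining limit, corrected by the j = 1 term of (1)
    s = sum(1.0 / (m + x) for m in range(N + 1))
    return -(s - math.log(N + x) - 1.0 / (2 * (N + x)))

def psi_asymptotic(x):
    # expansion (2) truncated at k = 6, with B_2 = 1/6, B_4 = -1/30, B_6 = 1/42
    B = {2: 1 / 6, 4: -1 / 30, 6: 1 / 42}
    return math.log(x) - 1 / (2 * x) - sum(B[j] / (j * x**j) for j in (2, 4, 6))

# psi(1) = -gamma, Euler's constant
assert abs(psi_limit(1.0) + 0.5772156649015329) < 1e-9
# for x = 10 the truncated expansion agrees with the limit to high accuracy
assert abs(psi_limit(10.0) - psi_asymptotic(10.0)) < 1e-8
```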
Corollary 9.6.9. As \( x \rightarrow \infty \) we have:
(1) For \( \Re \left( s\right) \geq 1 \) and \( s \neq 1 \) ,
\[
\zeta \left( {s, x}\right) = \frac{{x}^{1 - s}}{s - 1} + O\left( {x}^{-s}\right) .
\]
(2) For \( \Re \left( s\right) < 1 \) ,
\[
\zeta \left( {s, x}\right) = - \frac{{x}^{1 - s}}{1 - s} + \frac{{x}^{-s}}{2} - \mathop{\sum }\limits_{{j = 1}}^{p}\left( \begin{matrix} - s \\ {2j} \end{matrix}\right) \frac{{B}_{2j}}{2j}{x}^{-s + 1 - {2j}} + O\left( {x}^{-1}\right) ,
\]
where \( p = \lfloor \left( {3 - \Re \left( s\right) }\right) /2\rfloor \) .
Proof. Clear by Proposition 9.6.7.
Corollary 9.6.10. If \( k \in {\mathbb{Z}}_{ \geq 1} \) we have
\[
\zeta \left( {1 - k, x}\right) = - \frac{{B}_{k}\left( x\right) }{k}
\]
and in particular \( \zeta \left( {1 - k}\right) = - {B}_{k}/k - {\delta }_{k,1} \) .
Proof. Setting \( \alpha = k - 1 \) in Proposition 9.6.7, we find that for \( n \geq k \) ,
\[
- {k\zeta }\left( {1 - k, x}\right) = {x}^{k} + \mathop{\sum }\limits_{{j = 1}}^{n}\left( \begin{array}{l} k \\ j \end{array}\right) {B}_{j}{x}^{k - j} = {B}_{k}\left( x\right) .
\]
The statement for \( \zeta \left( {1 - k}\right) \) will be proved again in the next chapter using the functional equation of the zeta function. Historically it was the first indication of the existence of this functional equation, discovered by L. Euler.
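The formula can be spot-checked exactly against the classical values \( \zeta \left( 0\right) = - 1/2,\zeta \left( {-1}\right) = - 1/{12},\zeta \left( {-3}\right) = 1/{120} \), using \( {B}_{1} = - 1/2,{B}_{2} = 1/6,{B}_{4} = - 1/{30} \) :

```python
from fractions import Fraction

# zeta(1-k) = -B_k/k - delta_{k,1}, checked in exact rational arithmetic
B = {1: Fraction(-1, 2), 2: Fraction(1, 6), 4: Fraction(-1, 30)}
zeta_at = lambda k: -B[k] / k - (1 if k == 1 else 0)

assert zeta_at(1) == Fraction(-1, 2)    # zeta(0)  = -1/2
assert zeta_at(2) == Fraction(-1, 12)   # zeta(-1) = -1/12
assert zeta_at(4) == Fraction(1, 120)   # zeta(-3) = 1/120
```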
Proposition 9.6.11. As \( x \rightarrow 0 \) we have
\[
\zeta \left( {s, x}\right) = \left\{ \begin{array}{ll} {x}^{-s} + \zeta \left( s\right) + o\left( 1\right) & \text{ if }\Re \left( s\right) \geq 0, s \neq 1, \\ 1/2 + o\left( 1\right) & \text{ if }s = 0, \\ \zeta \left( s\right) + o\left( 1\right) & \text{ if }\Re \left( s\right) < 0, s \neq - {2k}\text{ with }k \in {\mathbb{Z}}_{ \geq 1}, \\ - {B}_{2k}x + O\left( {x}^{3}\right) & \text{ if }s = - {2k}\text{ with }k \in {\mathbb{Z}}_{ \geq 2}, \\ - {B}_{2}x + {x}^{2}/2 - {x}^{3}/3 & \text{ if }s = - 2. \end{array}\right.
\]
Proof. For \( s \neq - {2k} \) with \( k \in {\mathbb{Z}}_{ \geq 1} \) this immediately follows from
\[
\zeta \left( {s, x}\right) = {x}^{-s} + \zeta \left( {s, x + 1}\right) = {x}^{-s} + \zeta \left( s\right) + o\left( 1\right) .
\]
For \( s = - {2k} \), by the above corollary we have \( \zeta \left( {-{2k}, x}\right) = - {B}_{{2k} + 1}\left( x\right) /\left( {{2k} + 1}\right) \) , so the result follows from the explicit formula for \( {B}_{n}\left( x\right) \) .
Proposition 9.6.12. We have the duplication formula
\[
\zeta \left( {s, x}\right) + \zeta \left( {s, x + \frac{1}{2}}\right) = {2}^{s}\zeta \left( {s,{2x}}\right)
\]
and more generally for \( N \in {\mathbb{Z}}_{ \geq 1} \) the distribution formula
\[
\mathop{\sum }\limits_{{0 \leq j < N}}\zeta \left( {s, x + \frac{j}{N}}\right) = {N}^{s}\zeta \left( {s,{Nx}}\right) .
\]
Proof. This follows from an easy rearrangement of terms and is left to the reader (Exercise 64).
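For real \( s > 1 \), where the defining series converges, the rearrangement behind the distribution formula is visible at the level of partial sums: each term on the left equals \( {N}^{s} \) times a term on the right, so suitable truncations agree exactly. A Python sketch (ours):

```python
# Numerical check (ours) of the distribution formula for real s > 1:
# sum_{0<=j<N} zeta(s, x + j/N) = N^s * zeta(s, N x).
# Truncating at M terms per series on the left and N*M terms on the right
# matches term by term, since (m + x + j/N)^(-s) = N^s (N m + j + N x)^(-s).

def hurwitz_partial(s, x, terms):
    return sum((m + x) ** (-s) for m in range(terms))

s, x, N, M = 2.5, 0.75, 4, 50000
lhs = sum(hurwitz_partial(s, x + j / N, M) for j in range(N))
rhs = N ** s * hurwitz_partial(s, N * x, N * M)
assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```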
## 9.6.2 Definition of the Gamma Function
Since we have seen above that \( \zeta \left( {s, x}\right) \) can be extended to the whole complex plane with a simple pole at \( s = 1 \), the following definition makes sense.
Definition 9.6.13. (1) We define the real gamma function for \( x \in {\mathbb{R}}_{ > 0} \) by the formula
\[
\log \left( {\Gamma \left( x\right) }\right) = {\zeta }^{\prime }\left( {0, x}\right) - {\zeta }^{\prime }\left( {0,1}\right) = {\zeta }^{\prime }\left( {0, x}\right) - {\zeta }^{\prime }\left( 0\right) ,
\]
where here and elsewhere the derivative is taken with respect to the first variable.
(2) We define the real \( \psi \) function for \( x \in {\mathbb{R}}_{ > 0} \) as the logarithmic derivative of \( \Gamma \left( x\right) \) ; in other words, \( \psi \left( x\right) = {\Gamma }^{\prime }\left( x\right) /\Gamma \left( x\right) \) .
We will see later that in fact \( {\zeta }^{\prime }\left( 0\right) = - \log \left( {2\pi }\right) /2 \), but for the moment we do not need this result. We will also see how to generalize this definition to all \( x \in \mathbb{C} \smallsetminus {\mathbb{Z}}_{ \leq 0} \) .
As already mentioned, since the gamma function is very often used in conjunction with \( L \) -series, it is customary to use the variable \( s \) and not the variable \( x \), hence to write \( \Gamma \left( s\right) \) . The reader should be aware that although this will be the variable used in zeta and \( L \) -functions, it is not the variable \( s \) of the Hurwitz zeta function used to define the gamma function. For the moment, since we handle simultaneously \( \zeta \left( {s, x}\right) \) and the gamma function, we keep the variable \( x \), but we will switch to the variable \( s \) later, after the introduction of the complex gamma function.
We will study later in great detail the properties of the function \( \Gamma \left( x\right) \) . For the moment we note the following basic results.
Proposition 9.6.14. For all \( x \in {\mathbb{R}}_{ > 0} \) we have \( \Gamma \left( {x + 1}\right) = {x\Gamma }\left( x\right) \) and when \( n \in {\mathbb{Z}}_{ \geq 1} \) we have \( \Gamma \left( n\right) = \left( {n - 1}\right) ! \) .
Proof. Since \( \zeta \left( {s, x + 1}\right) = \zeta \left( {s, x}\right) - {x}^{-s} \), differentiating with respect to \( s \) at \( s = 0 \) gives \( {\zeta }^{\prime }\left( {0, x + 1}\right) = {\zeta }^{\prime }\left( {0, x}\right) + \log \left( x\right) \), which is the first formula. The second follows by induction since \( \log \left( {\Gamma \left( 1\right) }\right) = {\zeta }^{\prime }\left( {0,1}\right) - {\zeta }^{\prime }\left( {0,1}\right) = 0. \)
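Both statements are easy to confirm numerically with the standard library (a quick check of ours, not part of the text):

```python
import math

# Quick numerical check (ours) of Proposition 9.6.14:
# Gamma(x+1) = x * Gamma(x) for x > 0, and Gamma(n) = (n-1)!.

for x in (0.5, 1.3, 4.7, 9.2):
    assert math.isclose(math.gamma(x + 1), x * math.gamma(x), rel_tol=1e-12)

for n in range(1, 10):
    assert math.isclose(math.gamma(n), float(math.factorial(n - 1)), rel_tol=1e-12)
```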
Proposition 9.6.15. (1) Let \( u \in {\mathbb{R}}_{ > 0} \) . For \( \left| x\right| < u \) we have
\[
\log \left( {\Gamma \left( {x + u}\right) }\right) = \log \left( {\Gamma \left( u\right) }\right) + \psi \left( u\right) x + \mathop{\sum }\limits_{{k \geq 2}}{\left( -1\right) }^{k}\frac{\zeta \left( {k, u}\right) }{k}{x}^{k}.
\]
(2) In particular, for \( \left| x\right| < 1 \) we have
\[
\log \left( {\Gamma \left( {x + 1}\right) }\right) = \mathop{\sum }\limits_{{k \geq 1}}{\left( -1\right) }^{k}\frac{\zeta \left( k\right) }{k}{x}^{k},
\]
where by convention we set \( \zeta \left( 1\right) = \gamma \), Euler’s constant.
Proof. This follows by differentiating with respect to \( s \) the first and second formulas of Corollary 9.6.3, and using the fact that around \( s = 1 \) we have \( \zeta \left( {s, u}\right) = 1/\left( {s - 1}\right) - \psi \left( u\right) + O\left( {s - 1}\right) \), and in particular \( \zeta \left( s\right) = 1/\left( {s - 1}\right) + \gamma + O\left( {s - 1}\right) \).
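The series of part (2) converges quickly enough to test numerically. In the Python sketch below (our own; \( \zeta \left( k\right) \) for \( k \geq 2 \) is approximated by a partial sum plus an integral tail estimate) we compare it with `math.lgamma`:

```python
import math

# Numerical sketch (ours) of Proposition 9.6.15 (2): for |x| < 1,
# log Gamma(1+x) = sum_{k>=1} (-1)^k (zeta(k)/k) x^k, with zeta(1) := gamma.

EULER_GAMMA = 0.5772156649015329

def zeta_real(k, terms=20000):
    # partial sum of zeta(k), k >= 2, plus an integral estimate of the tail
    return sum(n ** (-k) for n in range(1, terms)) + terms ** (1 - k) / (k - 1)

ZETAS = {k: zeta_real(k) for k in range(2, 60)}

def log_gamma_series(x):
    total = -EULER_GAMMA * x  # the k = 1 term, with the zeta(1) = gamma convention
    for k, zk in ZETAS.items():
        total += (-1) ** k * zk / k * x ** k
    return total

for x in (0.3, -0.4, 0.7):
    assert abs(log_gamma_series(x) - math.lgamma(1 + x)) < 1e-6
```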
Proposition 9.6.16. For \( x > 0 \) we have for any \( k \geq 1 \) ,
\[
{\zeta }^{\prime }\left( {0, x}\right) = \left( {x - \frac{1}{2}}\right) \log \left( x\right) - x + \mathop{\sum }\limits_{{j = 1}}^{{k - 1}}\frac{{B}_{j + 1}}{j\left( {j + 1}\right) {x}^{j}} - \frac{1}{k}{\int }_{0}^{\infty }\frac{{B}_{k}\left( {\{ t\} }\right) }{{\left( t + x\right) }^{k}}{dt},
\]
and in particular
\[
{\zeta }^{\prime }\left( {0, x}\right) = \left( {x - \frac{1}{2}}\right) \log \left( x\right) - x - {\int }_{0}^{\infty }\frac{\{ t\} - 1/2}{t + x}{dt}
\]
and
\[
\log \left( {\Gamma \left( x\right) }\right) = \left( {x - \frac{1}{2}}\right) \log \left( x\right) - x + 1 + \left( {x - 1}\right) {\int }_{1}^{\infty }\frac{\{ t\} - 1/2}{t\left( {t + x - 1}\right) }{dt}.
\]
Proof. This follows, after a short computation, by differentiating the formula for \( \zeta \left( {-\alpha, x}\right) \) given in Proposition 9.6.7.
Remark. As already noted in Section 9.2.5, the integral \( {\int }_{0}^{\
|
Proposition 9.6.8. For \( x \notin {\mathbb{Z}}_{ \leq 0} \) define
\[
\psi \left( x\right) = - \mathop{\lim }\limits_{{N \rightarrow \infty }}\left( {\mathop{\sum }\limits_{{m = 0}}^{N}\frac{1}{m + x} - \log \left( {N + x}\right) }\right) .
\]
(1) For \( k \geq 1 \) we have
\[
\mathop{\sum }\limits_{{m = 0}}^{N}\frac{1}{m + x} = - \psi \left( x\right) + \log \left( {N + x}\right) - \mathop{\sum }\limits_{{j = 1}}^{k}\frac{{B}_{j}}{j{\left( N + x\right) }^{j}} + {R}_{k}\left( {-1, x, N}\right) ,
\]
where
\[
{R}_{k}\left( {-1, x, N}\right) = {\int }_{N}^{\infty }\frac{{B}_{k}\left( {\{ t\} }\right) }{{\left( t + x\right) }^{k + 1}}{dt}
\]
and \( \left| {{R}_{k}\left( {-1, x, N}\right) }\right| \leq \left| {{B}_{k + 2}/\left( {\left( {k + 2}\right) {\left( N + x\right) }^{k + 2}}\right) }\right| \) when \( k \) is even.
(2) For \( k \geq 1 \) we have
\[
\psi \left( x\right) = \log \left( x\right) - \frac{1}{2x} - \mathop{\sum }\limits_{{j = 2}}^{k}\frac{{B}_{j}}{j{x}^{j}} + {\int }_{0}^{\infty }\frac{{B}_{k}\left( {\{ t\} }\right) }{{\left( t + x\right) }^{k + 1}}{dt}.
\]
(3) We have \( \mathop{\lim }\limits_{{s \rightarrow 1}}\left( {\zeta \left( {s, x}\right) - 1/\left( {s - 1}\right) }\right) = - \psi \left( x\right) \), in other words
\[
\zeta \left( {s, x}\right) = \frac{1}{s - 1} - \psi \left( x\right) + O\left( {s - 1}\right) .
\]
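The limit defining \( \psi \) converges like \( 1/N \), so it can be tested directly. The Python sketch below (ours) checks \( \psi \left( 1\right) = - \gamma \), which follows from the definition since \( \mathop{\sum }\limits_{{m = 0}}^{N}1/\left( {m + 1}\right) - \log \left( {N + 1}\right) \rightarrow \gamma \), and the asymptotic expansion of part (2) with \( k = 2 \):

```python
import math

# Numerical check (ours) of the limit definition of psi in Proposition 9.6.8.

def psi_limit(x, N=10**6):
    return -(sum(1.0 / (m + x) for m in range(N + 1)) - math.log(N + x))

EULER_GAMMA = 0.5772156649015329
# psi(1) = -gamma, since the harmonic sum minus log tends to gamma
assert abs(psi_limit(1.0) + EULER_GAMMA) < 1e-5

# part (2) with k = 2 gives psi(x) ~ log x - 1/(2x) - B_2/(2 x^2), B_2 = 1/6
x = 10.0
asymp = math.log(x) - 1 / (2 * x) - 1 / (12 * x * x)
assert abs(psi_limit(x) - asymp) < 1e-4
```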
|
|
Example 5.1.9. If \( M \) is a compact surface with non-empty boundary then each path component of \( \partial M \) is a circle, and if \( C \) is one of those circles then the space obtained from \( M \) by attaching a 2-cell using as attaching map an embedding \( {S}^{1} \rightarrow M \) whose image is \( C \) is again a surface. Thus, by attaching finitely many 2-cells in this way \( M \) becomes a closed surface, i.e., becomes a \( {T}_{g} \) or a \( {U}_{h} \) . Let \( {T}_{g, d} \) and \( {U}_{h, d} \) denote the surfaces obtained by removing the interiors of \( d \) unknotted \( {}^{6} \) 2-balls. Then every compact path connected surface whose boundary consists of \( d \) circles is homeomorphic to \( {T}_{g, d} \) or to \( {U}_{h, d} \) . When \( d > 0 \) , \( {T}_{g, d} \) and \( {U}_{h, d} \) contain graphs as strong deformation retracts; hence they are aspherical with free fundamental groups, by 3.1.9 and 3.1.16.
## Exercises
1. Prove 5.1.3-5.1.5. Then prove 5.1.1.
2. Show that when \( p : \bar{X} \rightarrow X \) is a covering projection, \( \bar{X} \) is a manifold iff \( X \) is a manifold.
3. Show that every path connected 1-manifold is homeomorphic to \( {S}^{1}, I,\mathbb{R} \) or \( \lbrack 0,\infty ) \) .
4. Show that if \( M \) is an \( n \) -manifold then \( M\# {S}^{n} \) is homeomorphic to \( M \) .
5. Show that \( {T}_{g + 1, d} \) is homeomorphic to \( {T}_{r, k}\# {T}_{g + 1 - r, d - k + 2} \) .
6. For \( n \) -manifolds \( {M}_{1} \) and \( {M}_{2} \), show that \( {\pi }_{1}\left( {{M}_{1}\# {M}_{2}}\right) \cong {\pi }_{1}\left( {M}_{1}\right) * {\pi }_{1}\left( {M}_{2}\right) \) when \( n > 2 \) . Discuss the cases \( n \leq 2 \) .
7. For \( g \geq 3 \) show that \( {T}_{g} \) is a \( \left( {g - 1}\right) \) to 1 covering space of \( {T}_{2} \) .
---
\( {}^{5} \) One expresses \( {T}_{g + 1} = {T}_{g}\# {T}_{1} \) [resp. \( {U}_{h + 1} = {U}_{h}\# {U}_{1} \) ] by saying \( {T}_{g + 1} \) is obtained from \( {T}_{g} \) by attaching a handle [resp. \( {U}_{h + 1} \) is obtained from \( {U}_{h} \) by attaching a crosscap].
\( {}^{6} \) By the Schoenflies Theorem (see [138]) every 2-ball in a surface is unknotted.
---
8. Show that \( {T}_{g} \) is a 2 to 1 covering space of \( {U}_{g + 1} \) .
9. By considering the universal cover of \( K\left( 2\right) \) (see Example 5.1.8) show that the universal cover of \( {T}_{2} \) is homeomorphic to \( {\mathbb{R}}^{2} \) . Deduce that the universal covers of all the path connected closed surfaces except \( {S}^{2} \) and \( \mathbb{R}{P}^{2} \) are homeomorphic to \( {\mathbb{R}}^{2} \) . (In the terminology of Ch. 7, all such surfaces are aspherical.)
10. Let \( G \) act freely and cocompactly on a path connected orientable open surface \( S \) . Prove that \( {H}_{1}\left( {S;\mathbb{Z}}\right) \) is finitely generated as a \( \mathbb{Z}G \) -module if \( G \) is finitely presented, and it is not finitely generated as a \( \mathbb{Z}G \) -module if \( G \) is finitely generated but does not have type \( F{P}_{2} \) over \( \mathbb{Z} \) .
## 5.2 Simplicial complexes and combinatorial manifolds
This is an exposition of simplicial complexes, their underlying polyhedra, joins, and combinatorial manifolds. It is intended to be both an exposition and a place to refer back to for definitions as needed.
An abstract simplicial complex, \( K \), consists of a set \( {V}_{K} \) of vertices and a set \( {S}_{K} \) of finite non-empty subsets of \( {V}_{K} \) called simplexes; these satisfy: (i) every one-element subset of \( {V}_{K} \) is a simplex, and (ii) every non-empty subset of a simplex is a simplex. An \( n \) -simplex of \( K \) is a simplex containing \( \left( {n + 1}\right) \) vertices (in which case \( n \) is the dimension of the simplex). The empty abstract simplicial complex, denoted by \( \varnothing \), has \( {V}_{\varnothing } = {S}_{\varnothing } = \varnothing \) . We say \( K \) is finite if \( {V}_{K} \) is finite (in which case \( {S}_{K} \) is finite), \( K \) is countable if \( {V}_{K} \) is countable (in which case \( {S}_{K} \) is countable), and \( K \) is locally finite if each vertex lies in only finitely many simplexes. The dimension of \( K \) is the supremum of the dimensions of its simplexes.
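Conditions (i) and (ii) are straightforward to encode. The following Python sketch (our own, with hypothetical names) checks them for finite data and computes the dimension:

```python
from itertools import combinations

# A minimal sketch (ours) of the abstract simplicial complex axioms:
# (i) every singleton is a simplex; (ii) simplexes are closed under
# taking non-empty subsets.  An n-simplex has n+1 vertices.

def is_abstract_simplicial_complex(vertices, simplexes):
    if not all(frozenset([v]) in simplexes for v in vertices):
        return False  # condition (i) fails
    for s in simplexes:
        if not s or not s <= vertices:
            return False  # simplexes are non-empty subsets of the vertex set
        for k in range(1, len(s)):
            if any(frozenset(f) not in simplexes for f in combinations(s, k)):
                return False  # condition (ii) fails
    return True

def dimension(simplexes):
    return max(len(s) for s in simplexes) - 1

# the boundary of a triangle {a, b, c}: three vertices and three edges
V = frozenset("abc")
S = {frozenset(x) for x in ("a", "b", "c", "ab", "bc", "ac")}
assert is_abstract_simplicial_complex(V, S)
assert dimension(S) == 1
```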
If \( K \) and \( L \) are abstract simplicial complexes, a simplicial map \( \phi : K \rightarrow L \) is a function \( {V}_{K} \rightarrow {V}_{L} \) taking simplexes of \( K \) onto simplexes of \( L \) . If \( \phi \) has a two-sided inverse which is simplicial, then \( \phi \) is a simplicial isomorphism.
One associates a CW complex with the abstract simplicial complex \( K \) as follows. Let \( W \) be the real vector space \( \mathop{\prod }\limits_{{v \in {V}_{K}}}\mathbb{R} \) ; i.e., the cartesian product of copies of \( \mathbb{R} \) indexed by \( {V}_{K} \), with the usual coordinatewise addition and scalar multiplication. Topologize every finite-dimensional linear subspace \( U \) of \( W \) by giving it the appropriate euclidean topology. Give \( W \) the weak topology with respect to this family of subspaces (which is suitable in the sense of Sect. 1.1 for defining a weak topology). This is called the finite topology \( {}^{7} \) on \( W \) . Abusing notation, let \( v \) also denote the point of \( W \) having entry 1 in the \( v \) -coordinate, and all other entries 0 . For each simplex \( \sigma = \left\{ {{v}_{0},\cdots ,{v}_{n}}\right\} \) of \( K \), let \( \left| \sigma \right| \) be the (closed) convex hull in \( W \) of \( \left\{ {{v}_{0},\cdots ,{v}_{n}}\right\} \) . Let \( {\left| K\right| }^{n} = \bigcup \{ \left| \sigma \right| \mid \sigma \) is a \( k \) -simplex of \( K \) and \( k \leq n\} \) . Let \( \left| K\right| = \mathop{\bigcup }\limits_{{n \geq 0}}{\left| K\right| }^{n} \) . Then \( \left( {\left| K\right| ,\left\{ {\left| K\right| }^{n}\right\} }\right) \), with the topology
---
\( {}^{7} \) The finite topology does not make \( W \) a topological vector space, but it makes each finite-dimensional linear subspace a topological vector space.
---
inherited from \( W \), is a CW complex. The details are an exercise; see also [51, pp. 171-172]. This \( \left| K\right| \) is the geometric realization of \( K \) .
If \( \phi : K \rightarrow L \) is a simplicial map, there is an associated map \( \left| \phi \right| : \left| K\right| \rightarrow \left| L\right| \) which maps the vertex of \( \left| K\right| \) corresponding to \( v \in {V}_{K} \) to the vertex of \( \left| L\right| \) corresponding to \( \phi \left( v\right) \in {V}_{L} \), and is affine on each \( \left| \sigma \right| \) . Clearly \( \left| \phi \right| \) is continuous, and if \( \phi \) is a simplicial isomorphism, \( \left| \phi \right| \) is a homeomorphism.
The notations \( \left| \sigma \right| \) and \( \left| K\right| \) are sometimes used in a slightly different way. Assume (i) \( {V}_{K} \) is a subset of \( {\mathbb{R}}^{N} \) such that the vertices of each simplex of \( K \) are affinely independent. With \( \sigma \) as above, define \( \left| \sigma \right| \) to be the (closed) convex hull of \( \sigma \) in \( {\mathbb{R}}^{N} \) . Write \( \left| \overset{ \circ }{\sigma }\right| \) for \( \left\{ {\mathop{\sum }\limits_{{i = 0}}^{n}{t}_{i}{v}_{i} \mid 0 < {t}_{i} < 1}\right. \) and \( \left. {\mathop{\sum }\limits_{{i = 0}}^{n}{t}_{i} = 1}\right\} \), the open convex hull of \( \sigma \) . Assume (ii) that whenever \( \sigma \neq \tau \in {S}_{K},\left| \overset{ \circ }{\sigma }\right| \cap \left| \overset{ \circ }{\tau }\right| = \varnothing \) . Define \( \left| K\right| = \bigcup \left\{ {\left| \sigma \right| \mid \sigma \in {S}_{K}}\right\} \) and \( {\left| K\right| }^{n} = \bigcup \{ \left| \sigma \right| \mid \sigma \) is a \( k \) -simplex of \( K \) and \( k \leq n\} \) . Then \( \left| K\right| \) (with topology inherited from \( {\mathbb{R}}^{N} \) ) and \( \left\{ {\left| \overset{ \circ }{\sigma }\right| \mid \sigma \in {S}_{K}}\right\} \) satisfy all but one of the requirements in Proposition 1.2.14 for \( \left( {\left| K\right| ,\left\{ {\left| K\right| }^{n}\right\} }\right) \) to be a CW complex: the sole problem is that the topology which \( \left| K\right| \) inherits from \( {\mathbb{R}}^{N} \) might not agree with the weak topology with respect to \( \left\{ {\left| \sigma \right| \mid \sigma \in {S}_{K}}\right\} \) . Assume (iii) that \( \left\{ {\left| \sigma \right| \mid \sigma \in {S}_{K}}\right\} \) is a locally finite family of subsets of the space \( \left| K\right| \) (where \( \left| K\right| \) has the inherited topology).
If \( K \) satisfies these assumptions (i),(ii) and (iii), we call \( K \) a simplicial complex in \( {\mathbb{R}}^{N} \), and we call \( \left| K\right| \) its underlying polyhedron. The cell \( \left| \sigma \right| \) of \( \left| K\right| \) is often called a simplex of \( \left| K\right| \) ; context prevents this double use of the word from causing problems. Note that \( \left| K\right| \) might not be closed in \( {\mathbb{R}}^{N} \) ; it is closed iff the set of simplexes \( \left| \sigma \right| \) is a locally finite family of subsets of \( {\mathbb{R}}^{N} \) (rather than of \( \left| K\right| \) ).
A space \( Z \) is triangulable if there is an abstract simplicial complex \( K \) such that \( Z \) is homeomorphic to \( \left| K\right| \), and \( K \) is called \( {}^{8} \) a triangulation of \( Z \) .
Example 5.2.1. The half-open interval \( (0,1\rbrack \) in \( \mathbb{R} \) is triangulable: take \( {V}_{K} = \) \( \left\{ {\frac{1}{n} \mid n \in \mathbb{N}}\right\} \) and \( {S}_{K} \) to be the set of pairs \( \left\{ {\frac{1}{n},\frac{1}{n + 1}}\right\} \) together with \( {V}_{K} \) . But the subspace \( {V}_{K} \cup \{ 0\} \) of \( \mathbb{R} \) is not triangulable.
Proposition 5.2.2. If \( K \) is a simplicial complex in \( {\mathbb{R}}^{N} \), the weak topology on \( \left| K\right| \) with respect to \( \left\{ {\left| \sigma \right|
|
|
|
Proposition 4.1. Let \( {\left( {A}_{i}\right) }_{i \in I} \) be universal algebras of type \( T \) . A universal algebra \( A \) of type \( T \) is isomorphic to a subdirect product of \( {\left( {A}_{i}\right) }_{i \in I} \) if and only if there exist surjective homomorphisms \( {\varphi }_{i} : A \rightarrow {A}_{i} \) such that \( \mathop{\bigcap }\limits_{{i \in I}}\ker {\varphi }_{i} \) is the equality on \( A \) .
Here, \( \mathop{\bigcap }\limits_{{i \in I}}\ker {\varphi }_{i} \) is the equality on \( A \) if and only if \( {\varphi }_{i}\left( x\right) = {\varphi }_{i}\left( y\right) \) for all \( i \in I \) implies \( x = y \), if and only if \( x \neq y \) in \( A \) implies \( {\varphi }_{i}\left( x\right) \neq {\varphi }_{i}\left( y\right) \) for some \( i \in I \) . Homomorphisms with this property are said to separate the elements of \( A \) .
Proof. Let \( P \) be a subdirect product of \( {\left( {A}_{i}\right) }_{i \in I} \) . The inclusion homomorphism \( \iota : P \rightarrow \mathop{\prod }\limits_{{i \in I}}{A}_{i} \) and projections \( {\pi }_{j} : \mathop{\prod }\limits_{{i \in I}}{A}_{i} \rightarrow {A}_{j} \) yield surjective homomorphisms \( {\rho }_{i} = {\pi }_{i} \circ \iota : P \rightarrow {A}_{i} \) that separate the elements of \( P \), since elements of the product that have the same components must be equal. If now \( \theta : A \rightarrow P \) is an isomorphism, then the homomorphisms \( {\varphi }_{i} = {\rho }_{i} \circ \theta \) are surjective and separate the elements of \( A \) .
Conversely, assume that there exist surjective homomorphisms \( {\varphi }_{i} : A \rightarrow {A}_{i} \) that separate the elements of \( A \) . Then \( \varphi : x \mapsto {\left( {\varphi }_{i}\left( x\right) \right) }_{i \in I} \) is an injective homomorphism of \( A \) into \( \mathop{\prod }\limits_{{i \in I}}{A}_{i} \) . Hence \( A \cong \operatorname{Im}\varphi \) ; moreover, \( \operatorname{Im}\varphi \) is a subdirect product of \( {\left( {A}_{i}\right) }_{i \in I} \), since \( {\pi }_{i}\left( {\operatorname{Im}\varphi }\right) = {\varphi }_{i}\left( A\right) = {A}_{i} \) for all \( i \) . ▱
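Proposition 4.1 can be illustrated on a toy example of our own: the reductions modulo 2 and 3 are surjective homomorphisms from \( {\mathbb{Z}}_{6} \) that separate elements, so \( {\mathbb{Z}}_{6} \) is isomorphic to a subdirect product of \( {\mathbb{Z}}_{2} \) and \( {\mathbb{Z}}_{3} \) (here in fact to the full direct product, by the Chinese remainder theorem):

```python
# Toy illustration (ours) of Proposition 4.1 with A = Z_6, A_1 = Z_2, A_2 = Z_3.

A = range(6)
phi = lambda x: (x % 2, x % 3)

# each component homomorphism is surjective
assert {p[0] for p in map(phi, A)} == {0, 1}
assert {p[1] for p in map(phi, A)} == {0, 1, 2}
# the phi_i separate the elements of A: phi is injective
assert len({phi(x) for x in A}) == 6
# phi is a homomorphism of additive groups
assert all(
    phi((x + y) % 6) == ((px + qx) % 2, (py + qy) % 3)
    for x in A for y in A
    for (px, py), (qx, qy) in [(phi(x), phi(y))]
)
```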
Direct products are associative: if \( I = \mathop{\bigcup }\limits_{{j \in J}}{I}_{j} \) is a partition of \( I \), then \( \mathop{\prod }\limits_{{i \in I}}{A}_{i} \cong \mathop{\prod }\limits_{{j \in J}}\left( {\mathop{\prod }\limits_{{i \in {I}_{j}}}{A}_{i}}\right) \) . So are subdirect products, as readers will deduce from Proposition 4.1:
Proposition 4.2. Let \( {\left( {A}_{i}\right) }_{i \in I} \) be universal algebras of type \( T \) and let \( I = \) \( \mathop{\bigcup }\limits_{{j \in J}}{I}_{j} \) be a partition of \( I \) . An algebra \( A \) of type \( T \) is isomorphic to a sub-direct product of \( {\left( {A}_{i}\right) }_{i \in I} \) if and only if \( A \) is isomorphic to a subdirect product of algebras \( {\left( {P}_{j}\right) }_{j \in J} \) in which each \( {P}_{j} \) is a subdirect product of \( {\left( {A}_{i}\right) }_{i \in {I}_{j}} \) .
Subdirect decompositions. A subdirect decomposition of \( A \) into algebras \( {\left( {A}_{i}\right) }_{i \in I} \) of the same type is an isomorphism of \( A \) onto a subdirect product of \( {\left( {A}_{i}\right) }_{i \in I} \) . By 4.1, subdirect decompositions of \( A \) can be set up from within \( A \) from suitable families of congruences on \( A \) . They are inherited by every variety \( \mathcal{V} \) : when \( A \) has a subdirect decomposition into algebras \( {\left( {A}_{i}\right) }_{i \in I} \), then \( A \in \mathcal{V} \) if and only if \( {A}_{i} \in \mathcal{V} \) for all \( i \), by 3.1 .
Subdirect decompositions of \( A \) give loose descriptions of \( A \) in terms of presumably simpler components \( {\left( {A}_{i}\right) }_{i \in I} \) . The simplest possible components are called subdirectly irreducible:
Definition. A universal algebra \( A \) is subdirectly irreducible when \( A \) has more than one element and, whenever \( A \) is isomorphic to a subdirect product of \( {\left( {A}_{i}\right) }_{i \in I} \) , at least one of the projections \( A \rightarrow {A}_{i} \) is an isomorphism.
Proposition 4.3. A universal algebra \( A \) is subdirectly irreducible if and only if \( A \) has more than one element and the equality on \( A \) is not the intersection of congruences on \( A \) that are different from the equality.
The proof is an exercise in further deduction from Proposition 4.1.
Theorem 4.4 (Birkhoff [1944]). Every nonempty universal algebra is isomorphic to a subdirect product of subdirectly irreducible universal algebras. In any variety \( \mathcal{V} \), every nonempty universal algebra \( A \in \mathcal{V} \) is isomorphic to a subdirect product of subdirectly irreducible universal algebras \( {A}_{i} \in \mathcal{V} \) .
Proof. Let \( A \) be a nonempty algebra of type \( T \) . By 1.5, the union of a chain of congruences on \( A \) is a congruence on \( A \) . Let \( a, b \in A \) with \( a \neq b \) . If \( {\left( {\mathcal{C}}_{i}\right) }_{i \in I} \) is a chain of congruences on \( A \), none of which contains the pair \( \left( {a, b}\right) \), then the union \( \mathcal{C} = \mathop{\bigcup }\limits_{{i \in I}}{\mathcal{C}}_{i} \) is a congruence on \( A \) that does not contain the pair \( \left( {a, b}\right) \) . By Zorn’s lemma, there is a congruence \( {\mathcal{M}}_{a, b} \) on \( A \) that is maximal such that \( \left( {a, b}\right) \notin {\mathcal{M}}_{a, b} \) . The intersection \( \mathop{\bigcap }\limits_{{a, b \in A, a \neq b}}{\mathcal{M}}_{a, b} \) cannot contain any pair \( \left( {a, b}\right) \) with \( a \neq b \) and is the equality on \( A \) . By 4.1, \( A \) is isomorphic to a subdirect product of the quotient algebras \( A/{\mathcal{M}}_{a, b} \) .
The algebra \( A/{\mathcal{M}}_{a, b} \) has at least two elements, since \( {\mathcal{M}}_{a, b} \) does not contain the pair \( \left( {a, b}\right) \) . Let \( {\left( {\mathrm{C}}_{i}\right) }_{i \in I} \) be congruences on \( A/{\mathcal{M}}_{a, b} \), none of which is the equality. Under the projection \( \pi : A \rightarrow A/{\mathcal{M}}_{a, b} \), the inverse image \( {\pi }^{-1}\left( {\mathcal{C}}_{i}\right) \) is, by 1.9, a congruence on \( A \), which properly contains \( \ker \pi = {\mathcal{M}}_{a, b} \), hence contains the pair \( \left( {a, b}\right) \), by the maximality of \( {\mathcal{M}}_{a, b} \) . Hence \( \left( {\pi \left( a\right) ,\pi \left( b\right) }\right) \in {\mathcal{C}}_{i} \) for every \( i \), and \( \mathop{\bigcap }\limits_{{i \in I}}{\mathcal{C}}_{i} \) is not the equality on \( A/{\mathcal{M}}_{a, b} \) . Thus \( A/{\mathcal{M}}_{a, b} \) is subdirectly irreducible, by 4.3. \( ▱ \)
Abelian groups. Abelian groups can be used to illustrate these results.
Congruences on an abelian group are induced by its subgroups. Hence an abelian group \( A \) (written additively) is isomorphic to a subdirect product of abelian groups \( {\left( {A}_{i}\right) }_{i \in I} \) if and only if there exist surjective homomorphisms \( {\varphi }_{i} : A \rightarrow {A}_{i} \) such that \( \mathop{\bigcap }\limits_{{i \in I}}\operatorname{Ker}{\varphi }_{i} = 0 \) ; an abelian group \( A \) is subdirectly irreducible if and only if \( A \) has more than one element and 0 is not the intersection of nonzero subgroups of \( A \) .
By Theorem 4.4, every abelian group is isomorphic to a subdirect product of subdirectly irreducible abelian groups. The latter are readily determined.
Proposition 4.5. An abelian group is subdirectly irreducible if and only if it is isomorphic to \( {\mathbb{Z}}_{{p}^{\infty }} \) or to \( {\mathbb{Z}}_{{p}^{n}} \) for some \( n > 0 \) .
Proof. Readers will verify that \( {\mathbb{Z}}_{{p}^{\infty }} \) and \( {\mathbb{Z}}_{{p}^{n}} \) (where \( n > 0 \) ) are subdirectly irreducible. Conversely, every abelian group \( A \) can, by X.4.9 and X.4.10, be embedded into a direct product of copies of \( \mathbb{Q} \) and \( {\mathbb{Z}}_{{p}^{\infty }} \) for various primes \( p \) . Hence \( A \) is isomorphic to a subdirect product of subgroups of \( \mathbb{Q} \) and \( {\mathbb{Z}}_{{p}^{\infty }} \) .
Now, \( \mathbb{Q} \) has subgroups \( \mathbb{Z},2\mathbb{Z},\ldots ,{2}^{k}\mathbb{Z},\ldots \), whose intersection is 0 ; since \( \mathbb{Q}/{2}^{k}\mathbb{Z} \cong \mathbb{Q}/\mathbb{Z} \), \( \mathbb{Q} \) is isomorphic to a subdirect product of subgroups of \( \mathbb{Q}/\mathbb{Z} \) . Readers will verify that \( \mathbb{Q}/\mathbb{Z} \) is isomorphic to a direct sum of \( {\mathbb{Z}}_{{p}^{\infty }} \) ’s (for various primes \( p \) ). By 4.2, \( \mathbb{Q} \) is isomorphic to a subdirect product of subgroups of \( {\mathbb{Z}}_{{p}^{\infty }} \) (for various primes \( p \) ). Then every abelian group \( A \) is isomorphic to a subdirect product of subgroups of \( {\mathbb{Z}}_{{p}^{\infty }} \) (for various primes \( p \) ). If \( A \) is subdirectly irreducible, then \( A \) is isomorphic to a subgroup of some \( {\mathbb{Z}}_{{p}^{\infty }} \) .
Distributive lattices. Birkhoff's earlier theorem, XIV.4.8, states that every distributive lattice is isomorphic to a sublattice of the lattice of all subsets \( {2}^{X} \) of some set \( X \) . We give another proof of this result, using subdirect products.
Since distributive lattices constitute a variety, every distributive lattice is isomorphic to a subdirect product of subdirectly irreducible distributive lattices, by 4.4. One such lattice is the two-element lattice \( {L}_{2} = \{ 0,1\} \), which has only two congruences and is subdirectly irreducible by 4.3 .
Proposition 4.6. Every distributive lattice is isomorphic to a subdirect product of two-element lattices. A distributive lattice is subdirectly irreducible if and only if it has just two elements.
Proof. To each prime ideal \( P \neq \varnothing, L \) of a distributive lattice \( L \) there corresponds a lattice homomorphism \( {\varphi }_{P} \) of \
|
|
|
Proposition 4.2 If \( w \in {\Lambda }^{k}\left( E\right) \) and \( z \in {\Lambda }^{\ell }\left( E\right) \), then
\[
z \land w = {\left( -1\right) }^{k\ell }w \land z.
\]
## 4.2.3.2 A Commutative Subalgebra
The sum
\[
{\Lambda }_{\text{even }}\left( E\right) = K \oplus {\Lambda }^{2}\left( E\right) \oplus {\Lambda }^{4}\left( E\right) \oplus \cdots
\]
is obviously a subalgebra of \( \Lambda \left( E\right) \), of dimension \( {2}^{n - 1} \) because of
\[
\mathop{\sum }\limits_{{0 \leq k \leq n/2}}\left( \begin{matrix} n \\ {2k} \end{matrix}\right) = {2}^{n - 1}
\]
Because of Proposition 4.2, it is actually commutative.
Corollary 4.2 If \( w, z \in {\Lambda }_{\text{even }}\left( E\right) \), then \( w \land z = z \land w \in {\Lambda }_{\text{even }}\left( E\right) \) .
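The dimension count above rests on the binomial identity \( \mathop{\sum }\limits_{{0 \leq k \leq n/2}}\left( \begin{matrix} n \\ {2k} \end{matrix}\right) = {2}^{n - 1} \) for \( n \geq 1 \), which is a one-line check:

```python
from math import comb

# Check (ours) of the dimension count: the even binomial coefficients
# of row n sum to 2^(n-1) for n >= 1.

for n in range(1, 20):
    assert sum(comb(n, k) for k in range(0, n + 1, 2)) == 2 ** (n - 1)
```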
## 4.3 Tensorization of Linear Maps
## 4.3.1 Tensor Product of Linear Maps
Let \( {E}_{0},{E}_{1},{F}_{0},{F}_{1} \) be vector spaces over \( K \) . If \( {u}_{j} \in \mathcal{L}\left( {{E}_{j};{F}_{j}}\right) \), Theorem 4.1 allows us to define a linear map \( {u}_{0} \otimes {u}_{1} \in \mathcal{L}\left( {{E}_{0} \otimes {E}_{1};{F}_{0} \otimes {F}_{1}}\right) \), satisfying
\[
\left( {{u}_{0} \otimes {u}_{1}}\right) \left( {x \otimes y}\right) = {u}_{0}\left( x\right) \otimes {u}_{1}\left( y\right)
\]
A similar construction is available with an arbitrary number of linear maps \( {u}_{j} : {E}_{j} \rightarrow \) \( {F}_{j} \) .
Let us choose bases \( \left\{ {{\mathbf{e}}^{01},\ldots ,{\mathbf{e}}^{0m}}\right\} \) of \( {E}_{0},\left\{ {{\mathbf{e}}^{11},\ldots ,{\mathbf{e}}^{1q}}\right\} \) of \( {E}_{1},\left\{ {{\mathbf{f}}^{01},\ldots ,{\mathbf{f}}^{0n}}\right\} \) of \( {F}_{0},\left\{ {{\mathbf{f}}^{11},\ldots ,{\mathbf{f}}^{1p}}\right\} \) of \( {F}_{1} \) . Let \( A \) and \( B \) be the respective matrices of \( {u}_{0} \) and \( {u}_{1} \) in these bases. Then
\[
\left( {{u}_{0} \otimes {u}_{1}}\right) \left( {{\mathbf{e}}^{0i} \otimes {\mathbf{e}}^{1j}}\right) = \left( {\mathop{\sum }\limits_{k}{a}_{ki}{\mathbf{f}}^{0k}}\right) \otimes \left( {\mathop{\sum }\limits_{\ell }{b}_{\ell j}{\mathbf{f}}^{1\ell }}\right)
\]
\[
= \mathop{\sum }\limits_{{k,\ell }}{a}_{ki}{b}_{\ell j}{\mathbf{f}}^{0k} \otimes {\mathbf{f}}^{1\ell }.
\]
This shows that the \( \left( {\left( {k,\ell }\right) ,\left( {i, j}\right) }\right) \) -entry of the matrix of \( {u}_{0} \otimes {u}_{1} \) in the tensor bases is the product \( {a}_{ki}{b}_{\ell j} \) . If we arrange the bases \( {\left( {\mathbf{e}}^{0i} \otimes {\mathbf{e}}^{1j}\right) }_{i, j} \) and \( {\left( {\mathbf{f}}^{0k} \otimes {\mathbf{f}}^{1\ell }\right) }_{k,\ell } \) in lexicographic order, then this matrix reads blockwise
\[
\left( \begin{matrix} {a}_{11}B & {a}_{12}B & \cdots & {a}_{1m}B \\ {a}_{21}B & {a}_{22}B & \cdots & {a}_{2m}B \\ \vdots & \vdots & \ddots & \vdots \\ {a}_{n1}B & {a}_{n2}B & \cdots & {a}_{nm}B \end{matrix}\right) .
\]
This matrix is called the tensor product of \( A \) and \( B \), denoted \( A \otimes B \) .
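In NumPy, `numpy.kron` builds exactly this block matrix; a minimal sanity check with small made-up matrices:

```python
import numpy as np

# Matrices of u0 and u1 in the chosen bases (arbitrary small examples).
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

# np.kron arranges the blocks a_ki * B in lexicographic order, as above.
T = np.kron(A, B)

# Entry ((k, l), (i, j)) of A (x) B equals a_ki * b_lj.
p, q = B.shape
for k in range(2):
    for i in range(2):
        for l in range(p):
            for j in range(q):
                assert T[k * p + l, i * q + j] == A[k, i] * B[l, j]
```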
## 4.3.2 Exterior Power of an Endomorphism
If \( u \in \mathbf{{End}}\left( E\right) \), then \( {u}^{\otimes k} \mathrel{\text{:=}} u \otimes u \otimes \cdots \otimes u \) has the property that \( {u}^{\otimes k}\left( {L}_{k}\right) \subset {L}_{k} \) . Therefore there exists a unique linear map \( {\mathbf{\Lambda }}^{k}\left( u\right) \) such that
\[
{\Lambda }^{k}\left( u\right) \left( {{x}^{1} \land \cdots \land {x}^{k}}\right) = u\left( {x}^{1}\right) \land \cdots \land u\left( {x}^{k}\right) .
\]
Proposition 4.3 If \( A \) is the matrix of \( u \in \mathbf{{End}}\left( E\right) \) in a basis \( \left\{ {{\mathbf{e}}^{1},\ldots ,{\mathbf{e}}^{n}}\right\} \), then the entries of the matrix \( {A}^{\left( k\right) } \) of \( {\Lambda }^{k}\left( u\right) \) in the basis of vectors \( {\mathbf{e}}^{{j}_{1}} \land \cdots \land {\mathbf{e}}^{{j}_{k}} \) \( \left( {{j}_{1} < \cdots < {j}_{k}}\right) \) are the \( k \times k \) minors of \( A \) .
Proof. This follows along the same lines as the proof of the Cauchy-Binet formula (Proposition 3.4).
Corollary 4.3 If \( \dim E = n \) and \( u \in \mathbf{{End}}\left( E\right) \), then \( {\Lambda }^{n}\left( u\right) \) is multiplication by \( \det u \) .
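Proposition 4.3 and Corollary 4.3 are easy to check numerically. The sketch below (plain NumPy; `compound` is our own helper name) builds the matrix \( {A}^{\left( k\right) } \) of \( k \times k \) minors and verifies the functoriality \( {\Lambda }^{k}\left( {uv}\right) = {\Lambda }^{k}\left( u\right) {\Lambda }^{k}\left( v\right) \), which is the content of Cauchy-Binet:

```python
import itertools
import numpy as np

def compound(A, k):
    """k-th compound matrix: entries are the k x k minors of A,
    rows/columns indexed by increasing k-subsets of {0,...,n-1}."""
    n = A.shape[0]
    subs = list(itertools.combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in subs]
                     for r in subs])

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
B = np.array([[1., 0., 2.],
              [0., 1., 1.],
              [1., 1., 0.]])

# Functoriality of the exterior power (Cauchy-Binet):
assert np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2))
# Corollary 4.3: Lambda^n(u) is the 1x1 matrix (det u).
assert np.allclose(compound(A, 3), np.linalg.det(A))
```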
## 4.4 A Polynomial Identity in \( {\mathbf{M}}_{\mathbf{n}}\left( \mathbf{K}\right) \)
We already know a polynomial identity in \( {\mathbf{M}}_{n}\left( K\right) \), namely the Cayley-Hamilton theorem: \( {P}_{A}\left( A\right) = {0}_{n} \) . However, it is a bit complicated because the matrix is involved both as the argument of the polynomial and in its coefficients. We prove here a remarkable result, where a multilinear application vanishes identically when the arguments are arbitrary \( n \times n \) matrices. To begin with, we introduce some special polynomials in noncommutative indeterminates.
## 4.4.1 The Standard Noncommutative Polynomial
Noncommutative polynomials in indeterminates \( {X}_{1},\ldots ,{X}_{\ell } \) are linear combinations of words written in the alphabet \( \left\{ {{X}_{1},\ldots ,{X}_{\ell }}\right\} \) . The important rule is that in a word, you are not allowed to permute two distinct letters: \( {X}_{i}{X}_{j} \neq {X}_{j}{X}_{i} \) if \( j \neq i \), contrary to what occurs in ordinary polynomials.
The standard polynomial \( {\mathcal{S}}_{\ell } \) in noncommutative indeterminates \( {X}_{1},\ldots ,{X}_{\ell } \) is defined by
\[
{\mathcal{S}}_{\ell }\left( {{X}_{1},\ldots ,{X}_{\ell }}\right) \mathrel{\text{:=}} \sum \varepsilon \left( \sigma \right) {X}_{\sigma \left( 1\right) }\cdots {X}_{\sigma \left( \ell \right) }.
\]
Hereabove, the sum runs over the permutations \( \sigma \) of \( \{ 1,\ldots ,\ell \} \), and \( \varepsilon \left( \sigma \right) \) denotes the signature of \( \sigma \) . For instance, \( {\mathcal{S}}_{2}\left( {X, Y}\right) = {XY} - {YX} = \left\lbrack {X, Y}\right\rbrack \) . The standard polynomial is thus a tool for measuring the defect of commutativity in a ring or an algebra: a ring \( R \) is commutative if and only if \( {\mathcal{S}}_{2} \) vanishes identically over \( R \times R \) .
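The definition can be transcribed directly with noncommuting symbols; a sketch using SymPy's `commutative=False` symbols (the helper name `standard_poly` is ours):

```python
from itertools import permutations
from sympy import symbols

X, Y, Z = symbols('X Y Z', commutative=False)

def standard_poly(*args):
    """S_l(X_1,...,X_l): signed sum of the products over all orderings."""
    total = 0
    for p in permutations(range(len(args))):
        # signature via the inversion count of the permutation p
        sign = (-1) ** sum(p[a] > p[b] for a in range(len(p))
                           for b in range(a + 1, len(p)))
        term = 1
        for i in p:
            term = term * args[i]
        total = total + sign * term
    return total

# S_2(X, Y) = XY - YX = [X, Y]
assert (standard_poly(X, Y) - (X*Y - Y*X)).expand() == 0
# Cyclic shift for l = 3: S_3(X_2, X_3, X_1) = (+1) S_3(X_1, X_2, X_3)
assert (standard_poly(Y, Z, X) - standard_poly(X, Y, Z)).expand() == 0
```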
The following formula is obvious.
Lemma 4. Let \( {A}_{1},\ldots ,{A}_{r} \in {\mathbf{M}}_{n}\left( K\right) \) be given. We form the matrix \( A \in {\mathbf{M}}_{n}\left( {\Lambda \left( {K}^{r}\right) }\right) \sim \) \( {\mathbf{M}}_{n}\left( K\right) { \otimes }_{K}\Lambda \left( {K}^{r}\right) \) by
\[
A = {A}_{1}{\mathbf{e}}^{1} + \cdots + {A}_{r}{\mathbf{e}}^{r}
\]
We emphasize that \( A \) has entries in a noncommutative ring.
Then we have
\[
{A}^{\ell } = \mathop{\sum }\limits_{{{i}_{1} < \cdots < {i}_{\ell }}}{\mathcal{S}}_{\ell }\left( {{A}_{{i}_{1}},\ldots ,{A}_{{i}_{\ell }}}\right) {\mathbf{e}}^{{i}_{1}} \land \cdots \land {\mathbf{e}}^{{i}_{\ell }}.
\]
In particular, we have
\[
{A}^{r} = {\mathcal{S}}_{r}\left( {{A}_{1},\ldots ,{A}_{r}}\right) {\mathbf{e}}^{1} \land \cdots \land {\mathbf{e}}^{r}.
\]
(4.4)
The other important formula generalizes the well-known identity \( \operatorname{Tr}\left\lbrack {A, B}\right\rbrack = 0 \) . To begin with, we easily have
\[
{\mathcal{S}}_{\ell }\left( {{X}_{\sigma \left( 1\right) },\ldots ,{X}_{\sigma \left( \ell \right) }}\right) = \varepsilon \left( \sigma \right) {\mathcal{S}}_{\ell }\left( {{X}_{1},\ldots ,{X}_{\ell }}\right) ,\;\forall \sigma \in {\mathbf{S}}_{\ell }.
\]
Applying this to a cycle, we deduce
\[
{\mathcal{S}}_{\ell }\left( {{X}_{2},\ldots ,{X}_{\ell },{X}_{1}}\right) = {\left( -1\right) }^{\ell - 1}{\mathcal{S}}_{\ell }\left( {{X}_{1},\ldots ,{X}_{\ell }}\right) .
\]
Because \( \operatorname{Tr}\left( {{A}_{2}\cdots {A}_{\ell }{A}_{1}}\right) = \operatorname{Tr}\left( {{A}_{1}\cdots {A}_{\ell }}\right) \), we infer the following.
Lemma 5. If \( \ell \) is even and \( {A}_{1},\ldots ,{A}_{\ell } \in {\mathbf{M}}_{n}\left( R\right) \) ( \( R \) a commutative ring), then
\[
\operatorname{Tr}{\mathcal{S}}_{\ell }\left( {{A}_{1},\ldots ,{A}_{\ell }}\right) = 0
\]
Proof. If \( \ell \) is even, we have
\[
\operatorname{Tr}{\mathcal{S}}_{\ell }\left( {{A}_{1},\ldots ,{A}_{\ell }}\right) = - \operatorname{Tr}{\mathcal{S}}_{\ell }\left( {{A}_{2},\ldots ,{A}_{\ell },{A}_{1}}\right) = - \operatorname{Tr}{\mathcal{S}}_{\ell }\left( {{A}_{1},\ldots ,{A}_{\ell }}\right) ,
\]
the first equality because this is true even before taking the trace, and the last equality because of \( \operatorname{Tr}\left( {AB}\right) = \operatorname{Tr}\left( {BA}\right) \) . If \( {2x} = 0 \) implies \( x = 0 \) in \( R \), we deduce
\[
\operatorname{Tr}{\mathcal{S}}_{\ell }\left( {{A}_{1},\ldots ,{A}_{\ell }}\right) = 0.
\]
For instance, this is true if \( R = \mathbb{C} \) . Now view the entries of \( {A}_{1},\ldots ,{A}_{\ell } \) as \( \ell {n}^{2} \) indeterminates \( {Y}_{1},\ldots ,{Y}_{\ell {n}^{2}} \) : then \( \operatorname{Tr}{\mathcal{S}}_{\ell }\left( \cdots \right) \) belongs to \( \mathbb{Z}\left\lbrack {{Y}_{1},\ldots ,{Y}_{\ell {n}^{2}}}\right\rbrack \), and since it vanishes over \( \mathbb{C} \), it must vanish as a polynomial. Thus the identity is valid in every commutative ring \( R \) . \( ▱ \)
## 4.4.2 The Theorem of Amitsur and Levitzki
A beautiful as well as surprising fact is that \( {\mathbf{M}}_{n}\left( K\right) \) does have some amount of commutativity, although it seems at first glance to be a paradigm for noncommutative algebras.
Theorem 4.4 (A. Amitsur and J. Levitzki) The standard polynomial in 2n noncommutative indeterminates vanishes over \( {\mathbf{M}}_{n}\left( K\right) \) : for every \( {A}_{1},\ldots ,{A}_{2n} \in {\mathbf{M}}_{n}\left( K\right) \) , we have
\[
{\mathcal{S}}_{2n}\left( {{A}_{1},\ldots ,{A}_{2n}}\right) = {0}_{n}
\]
(4.5)
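The theorem is easy to test numerically for small \( n \) ; the sketch below (with our own helper `standard_poly`) checks that \( {\mathcal{S}}_{4} \) vanishes on random integer \( 2 \times 2 \) matrices, and that \( {\mathcal{S}}_{3} \) does not vanish identically (cf. the first comment below):

```python
import itertools
import numpy as np

def standard_poly(mats):
    """S_l(A_1,...,A_l): signed sum of products over all orderings."""
    l, n = len(mats), mats[0].shape[0]
    total = np.zeros((n, n), dtype=int)
    for p in itertools.permutations(range(l)):
        # signature via the inversion count of p
        sign = (-1) ** sum(p[a] > p[b]
                           for a in range(l) for b in range(a + 1, l))
        prod = np.eye(n, dtype=int)
        for i in p:
            prod = prod @ mats[i]
        total += sign * prod
    return total

rng = np.random.default_rng(1)
# n = 2: S_4 vanishes on any four 2x2 matrices ...
mats4 = [rng.integers(-9, 10, size=(2, 2)) for _ in range(4)]
assert not standard_poly(mats4).any()
# ... while S_3 is nonzero at the triple E21, E11, E12.
e = np.eye(2, dtype=int)
mats3 = [np.outer(e[1], e[0]), np.outer(e[0], e[0]), np.outer(e[0], e[1])]
assert standard_poly(mats3).any()
```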
## Comments
- This result is sharp in the sense that \( {\mathcal{S}}_{{2n} - 1} \) does not vanish identically over \( n \times n \) matrices. For instance, \( {\mathcal{S}}_{{2n} - 1} \) does not vanish at the \( \left( {{2n} - 1}\right) \) -tuple of matrices \( {\mathbf{e}}^{n} \otimes {\mathbf{e}}^{1},{\mathbf{e}}^{n - 1} \otimes {\mathbf{e}}^{1},\ldots ,{\mathbf{e}}^{1} \otimes {\mathbf{e}}^{1},{\mathbf{e}}^{1} \otimes {\mathbf{e}}^{2},\ldots ,{\mathbf{e}}^{1} \otimes {\mathbf{e}}^{n}. \)
- Actually, it is known that \( {\mathcal{S}}_{2n} \equiv {0}_{n} \) is the only polynomial identity of degree less than
|
Proposition 4.2 If \( w \in {\Lambda }^{k}\left( E\right) \) and \( z \in {\Lambda }^{\ell }\left( E\right) \), then
\[
z \land w = {\left( -1\right) }^{k\ell }w \land z.
\]
|
null
|
Theorem 2.7 Gaussian elimination for the simultaneous solution of an \( n \times n \) system for \( r \) different right-hand sides requires a total of
\[
\frac{{n}^{3}}{3} + r{n}^{2} - \frac{n}{3}
\]
multiplications.
The computational cost, counting only the multiplications, in Gaussian elimination is \( {n}^{3}/3 + O\left( {n}^{2}\right) \) . It is left to the reader to show that the number of additions is also \( {n}^{3}/3 + O\left( {n}^{2}\right) \) (see Problem 2.7). Doubling the number of unknowns increases the computation time by a factor of eight. Assuming \( {1\mu }\sec = {10}^{-6}\sec \) per addition and multiplication, i.e., on a computer with one million floating point operations per second, the solution of a system with \( n = {10}^{3} \) requires approximately ten minutes, and with \( n = {10}^{4} \) it requires approximately six days. This illustrates dramatically that for the solution of large linear systems iterative methods, which we will study in Chapter 4, are better suited than direct methods. Row or column pivoting leads to an additional cost proportional to \( {n}^{2} \), whereas complete pivoting adds costs proportional to \( {n}^{3} \) . For the latter reason, complete pivoting is used only rarely in practical computations.
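The count of Theorem 2.7 and the quoted running times are easy to reproduce; the sketch below (our own helper names) doubles the multiplication count to account roughly for the additions, at the assumed rate of one microsecond per operation:

```python
def gauss_mults(n, r=1):
    """Multiplication count of Theorem 2.7: n^3/3 + r*n^2 - n/3."""
    return (n**3 - n) // 3 + r * n**2

SEC_PER_OP = 1e-6  # the assumed 1 microsecond per floating point operation

for n in (10**3, 10**4):
    ops = 2 * gauss_mults(n)      # multiplications plus (roughly) additions
    secs = ops * SEC_PER_OP
    print(f"n = {n:>6}: about {secs / 60:9.1f} minutes = {secs / 86400:6.2f} days")
```

This reproduces the orders of magnitude in the text: minutes for \( n = {10}^{3} \), days for \( n = {10}^{4} \).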
The Gaussian algorithm also allows the computation of the determinant and the inverse of a matrix \( A \) . The determinant \( \det A \) is simply given by the product of the diagonal elements in the triangular matrix obtained through the elimination procedure. If the determinant is computed using expansions by submatrices, then the operational count is \( n \) ! multiplications, as compared to \( {n}^{3}/3 \) for Gaussian elimination. This illustrates why Cramer’s rule for the solution of linear systems is only a theoretical mathematical tool and not a tool for practical computations.
The inverse of a matrix is obtained by solving the linear system simultaneously for the \( n \) right-hand sides given by the columns of the identity matrix, i.e., by solving the \( n \) systems
\[
A{x}_{i} = {e}_{i},\;i = 1,\ldots, n
\]
where \( {e}_{i} \) is the \( i \) th column of the identity matrix. Then the \( n \) solutions \( {x}_{1},\ldots ,{x}_{n} \) will provide the columns of the inverse matrix \( {A}^{-1} \) . We would like to stress that one does not want to solve a system \( {Ax} = y \) by first computing \( {A}^{-1} \) and then evaluating \( x = {A}^{-1}y \), since this generally leads to considerably higher computational costs.
The Gauss-Jordan method is an elimination algorithm that in each step eliminates the unknown both above and below the diagonal. The complete elimination procedure transforms the system equivalently into a diagonal system. The multiplication count shows a computational cost of order \( {n}^{3}/2 + O\left( {n}^{2}\right) \), i.e., an increase of 50 percent over Gaussian elimination. Hence, the Gauss-Jordan method is rarely used in applications. For details we refer to \( \left\lbrack {{26},{27}}\right\rbrack \) .
## 2.3 LR Decomposition
In the sequel we will indicate how Gaussian elimination provides an \( {LR} \) decomposition (or factorization) of a given matrix.
Definition 2.8 A factorization of a matrix \( A \) into a product
\[
A = {LR}
\]
of a lower (left) triangular matrix \( L \) and an upper (right) triangular matrix \( R \) is called an LR decomposition of \( A \) .
A matrix \( A = \left( {a}_{jk}\right) \) is called lower triangular or left triangular if \( {a}_{jk} = 0 \) for \( j < k \) ; it is called upper triangular or right triangular if \( {a}_{jk} = 0 \) for \( j > k \) . The product of two lower (upper) triangular matrices again is lower (upper) triangular, lower (upper) triangular matrices with nonvanishing diagonal elements are nonsingular, and the inverse matrix of a lower (upper) triangular matrix again is lower (upper) triangular (see Problem 2.14).
Theorem 2.9 For a nonsingular matrix \( A \), Gaussian elimination (without reordering rows and columns) yields an LR decomposition.
Proof. In the first elimination step we multiply the first equation by \( {a}_{j1}/{a}_{11} \) and subtract the result from the \( j \) th equation; i.e., the matrix \( {A}_{1} = A \) is multiplied from the left by the lower triangular matrix
\[
{L}_{1} = \left( \begin{matrix} 1 & & & \\ - \frac{{a}_{21}}{{a}_{11}} & 1 & & \\ \vdots & & \ddots & \\ - \frac{{a}_{n1}}{{a}_{11}} & & & 1 \end{matrix}\right) .
\]
The resulting matrix \( {A}_{2} = {L}_{1}{A}_{1} \) is of the form
\[
{A}_{2} = \left( \begin{array}{rr} {a}_{11} & * \\ 0 & {\widetilde{A}}_{n - 1} \end{array}\right)
\]
where \( {\widetilde{A}}_{n - 1} \) is an \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix. In the second step the same procedure is repeated for the \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix \( {\widetilde{A}}_{n - 1} \) . The corresponding \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) elimination matrix is completed as an \( n \times n \) triangular matrix \( {L}_{2} \) by setting the diagonal element in the first row equal to one. In this way, \( n - 1 \) elimination steps lead to
\[
{L}_{n - 1}\cdots {L}_{1}A = R
\]
with nonsingular lower triangular matrices \( {L}_{1},\ldots ,{L}_{n - 1} \) and an upper triangular matrix \( R \) . From this we find
\[
A = {LR}
\]
where \( L \) denotes the inverse of the product \( {L}_{n - 1}\cdots {L}_{1} \) .
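The proof is constructive; a minimal NumPy sketch (no row reordering, so all pivots are assumed nonzero; `lr_decomposition` is our own name):

```python
import numpy as np

def lr_decomposition(A):
    """LR (LU) factorization by Gaussian elimination without pivoting.
    Assumes all pivots are nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    R = A.astype(float).copy()
    for k in range(n - 1):
        for j in range(k + 1, n):
            L[j, k] = R[j, k] / R[k, k]      # multiplier a_jk / a_kk
            R[j, k:] -= L[j, k] * R[k, k:]   # eliminate below the pivot
    return L, R

A = np.array([[4., 3., 2.],
              [2., 4., 1.],
              [2., 1., 3.]])
L, R = lr_decomposition(A)
assert np.allclose(L @ R, A)
assert np.allclose(np.triu(R), R) and np.allclose(np.tril(L), L)
# det A is the product of the diagonal entries of R (see Section 2.2).
assert np.isclose(np.linalg.det(A), np.prod(np.diag(R)))
```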
We wish to point out that not every nonsingular matrix allows an LR decomposition. For example,
\[
\left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right)
\]
has no LR decomposition. However, since Gaussian elimination with row reordering always works, for each nonsingular matrix \( A \) there exists a permutation matrix \( P \) such that \( {PA} \) has an LR decomposition (see Problem 2.16). A permutation matrix is a matrix of the form \( P = \left( {{e}_{p\left( 1\right) },\ldots ,{e}_{p\left( n\right) }}\right) \) where \( {e}_{1},\ldots ,{e}_{n} \) are the columns of the identity matrix and \( p\left( 1\right) ,\ldots, p\left( n\right) \) is a permutation of \( 1,\ldots, n \) .
Recall that an \( n \times n \) matrix \( A \) is called symmetric if it has real coefficients and \( A = {A}^{T} \) . A symmetric matrix \( A \) is called positive definite if \( {x}^{T}{Ax} > 0 \) for all \( x \in {\mathbb{R}}^{n} \) with \( x \neq 0 \) . Positive definite matrices have positive diagonal elements (see Problem 2.10), and therefore a reordering of rows and columns is not necessary for Gaussian elimination (for pivoting, the largest diagonal element is chosen). It can be shown (see Problem 2.13) that symmetry and positive definiteness are preserved throughout the elimination if diagonal elements are taken as pivot elements. Therefore, for symmetric positive definite matrices the LR decomposition is always possible. If \( A = {LR} \), then we have also \( A = {A}^{T} = {R}^{T}{L}^{T} \), and from Problem 2.15 we can deduce that \( L \) can be normalized such that \( A = L{L}^{T} \) . Such a decomposition is used in the Cholesky method for the solution of linear systems with symmetric positive definite matrices. Because of symmetry, the computational cost for the Cholesky method is \( {n}^{3}/6 + O\left( {n}^{2}\right) \) multiplications and \( {n}^{3}/6 + O\left( {n}^{2}\right) \) additions. For details we refer to \( \left\lbrack {{26},{27}}\right\rbrack \) .
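For a symmetric positive definite matrix the normalized factorization \( A = L{L}^{T} \) can be computed directly; a minimal sketch of the Cholesky method (assuming \( A \) is SPD, so every square root below is taken of a positive number):

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = L L^T of a symmetric positive
    definite matrix, with lower triangular L and positive diagonal."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(d)                  # positive since A is SPD
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

A = np.array([[4., 2., 2.],
              [2., 5., 3.],
              [2., 3., 6.]])
L = cholesky(A)
assert np.allclose(L @ L.T, A)
assert np.allclose(L, np.linalg.cholesky(A))  # matches NumPy's routine
```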
## 2.4 QR Decomposition
We conclude this chapter by describing a second elimination method for linear systems, which leads to a \( {QR} \) decomposition.
Definition 2.10 A factorization of a matrix \( A \) into a product
\[
A = {QR}
\]
of a unitary matrix \( Q \) and an upper (right) triangular matrix \( R \) is called a QR decomposition of \( A \) .
We recall that a matrix \( Q \) is called unitary if
\[
Q{Q}^{ * } = {Q}^{ * }Q = I
\]
The product of two unitary matrices again is unitary.
In terms of the columns of the matrices \( A = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) and \( Q = \left( {{q}_{1},\ldots ,{q}_{n}}\right) \) and the coefficients of \( R = \left( {r}_{jk}\right) \), the QR decomposition \( A = {QR} \) means that
\[
{a}_{k} = \mathop{\sum }\limits_{{i = 1}}^{k}{r}_{ik}{q}_{i},\;k = 1,\ldots, n.
\]
(2.7)
Hence, the vectors \( {a}_{1},\ldots ,{a}_{n} \) of \( {\mathbb{C}}^{n} \) have to be orthonormalized from the left to the right into an orthonormal basis \( {q}_{1},\ldots ,{q}_{n} \) . This, for example, can be achieved by the Gram-Schmidt orthonormalization procedure (see Theorem 3.18). However, since the Gram-Schmidt orthonormalization tends to be numerically unstable, we describe the QR decomposition by Householder matrices.
Definition 2.11 A matrix \( H \) of the form
\[
H = I - {2v}{v}^{ * }
\]
where \( v \) is a column vector with \( {v}^{ * }v = 1 \), i.e., a unit vector, is called a Householder matrix.
Remark 2.12 Householder matrices are unitary and satisfy \( H = {H}^{ * } \) .
Proof. We compute
\[
{H}^{ * } = {I}^{ * } - 2{\left( v{v}^{ * }\right) }^{ * } = I - {2v}{v}^{ * } = H
\]
and
\[
H{H}^{ * } = {H}^{ * }H = \left( {I - {2v}{v}^{ * }}\right) \left( {I - {2v}{v}^{ * }}\right) = I - {4v}{v}^{ * } + {4v}{v}^{ * }v{v}^{ * } = I,
\]
where we use that \( {v}^{ * }v = 1 \) .
Geometrically a Householder matrix corresponds to reflection in the hyperplane through the origin orthogonal to \( v \) . To see this we write
\[
x = v{v}^{ * }x + y
\]
with the component \( v{v}^{ * }x \) of \( x \in {\mathbb{C}}^{n} \) in the \( v \) -direction and a component \( y \) orthogonal to \( v \) . Then we obtain
\[
{Hx} = x - {2v}{v}^{ * }x = - v{v}^{ * }x + y
\]
i.e., \( {Hx} \) has the opposite component \( - v{v}^{ * }x \) in the \( v \) -direction and the same component \( y \) orthogonal to \( v \) . Because of this property, Householder matrices are also called elementary reflection matrices.
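A short sketch illustrating Remark 2.12 and the reflection property; the choice of \( v \) along \( x + \parallel x\parallel {e}_{1} \) is an assumption anticipating the elimination step, since it sends \( x \) to \( -\parallel x\parallel {e}_{1} \) :

```python
import numpy as np

def householder(v):
    """Elementary reflector H = I - 2 v v^* for a (normalized) vector v."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v.conj())

x = np.array([3.0, 4.0, 0.0])
e1 = np.array([1.0, 0.0, 0.0])
# Reflect across the hyperplane orthogonal to v = x + ||x|| e1.
v = x + np.linalg.norm(x) * e1
H = householder(v)

assert np.allclose(H @ H.T, np.eye(3))        # unitary
assert np.allclose(H, H.T)                    # Hermitian (real case)
assert np.allclose(H @ x, [-5.0, 0.0, 0.0])   # x is mapped onto -||x|| e1
```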
We now describe the elimination of the unknown \( {x}_{
|
Theorem 2.9 For a nonsingular matrix \( A \), Gaussian elimination (without reordering rows and columns) yields an LR decomposition.
|
In the first elimination step we multiply the first equation by \( {a}_{j1}/{a}_{11} \) and subtract the result from the \( j \) th equation; i.e., the matrix \( {A}_{1} = A \) is multiplied from the left by the lower triangular matrix
\[
{L}_{1} = \left( \begin{matrix} 1 & & & \\ - \frac{{a}_{21}}{{a}_{11}} & 1 & & \\ \vdots & & \ddots & \\ - \frac{{a}_{n1}}{{a}_{11}} & & & 1 \end{matrix}\right) .
\]
The resulting matrix \( {A}_{2} = {L}_{1}{A}_{1} \) is of the form
\[
{A}_{2} = \left( \begin{array}{rr} {a}_{11} & * \\ 0 & {\widetilde{A}}_{n - 1} \end{array}\right)
\]
where \( {\widetilde{A}}_{n - 1} \) is an \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix. In the second step the same procedure is repeated for the \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix \( {\widetilde{A}}_{n - 1} \) . The corresponding \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) elimination matrix is completed as an \( n \times n \) triangular matrix \( {L}_{2} \) by setting the diagonal element in the first row equal to one. In this way, \( n - 1 \) elimination steps lead to
\[
{L}_{n - 1}\cdots {L}_{1}A = R
\]
with nonsingular lower triangular matrices \( {L}_{1},\ldots ,{L}_{n - 1} \) and an upper triangular matrix \( R \) . From this we find
\[
A = {LR}
\]
where \( L \) denotes the inverse of the product \( {L}_{n - 1}\cdots {L}_{1} \) .
|
Lemma 12.2. \( T : X \rightarrow Y \) is closed if and only if the following holds: When \( {\left( {x}_{n}\right) }_{n \in \mathbb{N}} \) is a sequence in \( D\left( T\right) \) with \( {x}_{n} \rightarrow x \) in \( X \) and \( T{x}_{n} \rightarrow y \) in \( Y \), then \( x \in D\left( T\right) \) with \( y = {Tx} \) .
The closed graph theorem (recalled in Appendix B, Theorem B.16) implies that if \( T : X \rightarrow Y \) is closed and has \( D\left( T\right) = X \), then \( T \) is bounded. Thus for closed, densely defined operators, \( D\left( T\right) \neq X \) is equivalent with unboundedness.
Note that a subspace \( G \) of \( X \times Y \) is the graph of a linear operator \( T \) : \( X \rightarrow Y \) if and only if the set \( {\operatorname{pr}}_{1}G \) ,
\[
{\operatorname{pr}}_{1}G = \{ x \in X \mid \exists y \in Y\text{ so that }\{ x, y\} \in G\} ,
\]
has the property that for any \( x \in {\operatorname{pr}}_{1}G \) there is at most one \( y \) so that \( \{ x, y\} \in G \) ; then \( y = {Tx} \) and \( D\left( T\right) = {\operatorname{pr}}_{1}G \) . In view of the linearity we can also formulate the criterion for \( G \) being a graph as follows:
Lemma 12.3. A subspace \( G \) of \( X \times Y \) is a graph if and only if \( \{ 0, y\} \in G \) implies \( y = 0 \) .
All operators in the following are assumed to be linear, this will not in general be repeated.
When \( S \) and \( T \) are operators from \( X \) to \( Y \), and \( D\left( S\right) \subset D\left( T\right) \) with \( {Sx} = \) \( {Tx} \) for \( x \in D\left( S\right) \), we say that \( T \) is an extension of \( S \) and \( S \) is a restriction of \( T \), and we write \( S \subset T \) (or \( T \supset S \) ). One often wants to know whether a given operator \( T \) has a closed extension. If \( T \) is bounded, this always holds, since we can simply take the operator \( \bar{T} \) with graph \( \overline{G\left( T\right) } \) ; here \( \overline{G\left( T\right) } \) is a graph since \( {x}_{n} \rightarrow 0 \) implies \( T{x}_{n} \rightarrow 0 \) . But when \( T \) is unbounded, one cannot be certain that it has a closed extension (cf. Exercise 12.1). But if \( T \) has a closed extension \( {T}_{1} \), then \( G\left( {T}_{1}\right) \) is a closed subspace of \( X \times Y \) containing \( G\left( T\right) \), hence also containing \( \overline{G\left( T\right) } \) . In that case \( \overline{G\left( T\right) } \) is a graph (cf. Lemma 12.3). It is in fact the graph of the smallest closed extension of \( T \) (the one with the smallest domain); we call it the closure of \( T \) and denote it \( \bar{T} \) . (Observe that when \( T \) is unbounded, then \( D\left( \bar{T}\right) \) is a proper subset of \( \overline{D\left( T\right) } \) .)
When \( S \) and \( T \) are operators from \( X \) to \( Y \), the sum \( S + T \) is defined by
\[
D\left( {S + T}\right) = D\left( S\right) \cap D\left( T\right)
\]
(12.3)
\[
\left( {S + T}\right) x = {Sx} + {Tx}\text{ for }x \in D\left( {S + T}\right) ;
\]
and when \( R \) is an operator from \( Y \) to \( Z \), the product (or composition) \( {RT} \) is defined by
\[
D\left( {RT}\right) = \{ x \in D\left( T\right) \mid {Tx} \in D\left( R\right) \} ,
\]
(12.4)
\[
\left( {RT}\right) x = R\left( {Tx}\right) \text{for}x \in D\left( {RT}\right) \text{.}
\]
As shown in Exercise 12.4, \( R\left( {S + T}\right) \) need not be the same as \( {RS} + {RT} \) . Concerning closures of products of operators, see Exercise 12.6. When \( S \) and \( T \) are invertible, one has \( {\left( ST\right) }^{-1} = {T}^{-1}{S}^{-1} \) .
Besides the norm topology we can provide \( D\left( T\right) \) with the so-called graph topology. For Banach spaces it is usually defined by the norm
\[
\parallel x{\parallel }_{D\left( T\right) }^{\prime } = \parallel x{\parallel }_{X} + \parallel {Tx}{\parallel }_{Y}
\]
(12.5)
called the graph norm, and for Hilbert spaces by the equivalent norm (also called the graph norm)
\[
\parallel x{\parallel }_{D\left( T\right) } = {\left( \parallel x{\parallel }_{X}^{2} + \parallel Tx{\parallel }_{Y}^{2}\right) }^{\frac{1}{2}},
\]
(12.6)
which has the associated scalar product
\[
{\left( x, y\right) }_{D\left( T\right) } = {\left( x, y\right) }_{X} + {\left( Tx, Ty\right) }_{Y}.
\]
(These conventions are consistent with (A.10)-(A.12).) The graph norm on \( D\left( T\right) \) is clearly stronger than the \( X \) -norm on \( D\left( T\right) \) ; the norms are equivalent if and only if \( T \) is a bounded operator. Observe that the operator \( T \) is closed if and only if \( D\left( T\right) \) is complete with respect to the graph norm (Exercise 12.3).
Recall that when \( X \) is a Banach space, the dual space \( {X}^{ * } = \mathbf{B}\left( {X,\mathbb{C}}\right) \) consists of the bounded linear functionals \( {x}^{ * } \) on \( X \) ; it is a Banach space with the norm
\[
{\begin{Vmatrix}{x}^{ * }\end{Vmatrix}}_{X * } = \sup \left\{ {\left| {{x}^{ * }\left( x\right) }\right| \mid x \in X,\parallel x\parallel = 1}\right\} .
\]
When \( T : X \rightarrow Y \) is densely defined, we can define the adjoint operator \( {T}^{ * } : {Y}^{ * } \rightarrow {X}^{ * } \) as follows: The domain \( D\left( {T}^{ * }\right) \) consists of the \( {y}^{ * } \in {Y}^{ * } \) for which the linear functional
\[
x \mapsto {y}^{ * }\left( {Tx}\right) ,\;x \in D\left( T\right) ,
\]
(12.7)
is continuous (from \( X \) to \( \mathbb{C} \) ). This means that there is a constant \( c \) (depending on \( \left. {y}^{ * }\right) \) such that
\[
\left| {{y}^{ * }\left( {Tx}\right) }\right| \leq c\parallel x{\parallel }_{X},\text{ for all }x \in D\left( T\right) .
\]
Since \( D\left( T\right) \) is dense in \( X \), the mapping extends by continuity to \( X \), so there is a uniquely determined \( {x}^{ * } \in {X}^{ * } \) so that
\[
{y}^{ * }\left( {Tx}\right) = {x}^{ * }\left( x\right) \;\text{ for }x \in D\left( T\right) .
\]
(12.8)
Since \( {x}^{ * } \) is determined from \( {y}^{ * } \), we can define the operator \( {T}^{ * } \) from \( {Y}^{ * } \) to \( {X}^{ * } \) by
\[
{T}^{ * }{y}^{ * } = {x}^{ * },\text{ for }{y}^{ * } \in D\left( {T}^{ * }\right) .
\]
(12.9)
Lemma 12.4. Let \( T \) be densely defined. Then there is an adjoint operator \( {T}^{ * } : {Y}^{ * } \rightarrow {X}^{ * } \), uniquely defined by (12.7)-(12.9). Moreover, \( {T}^{ * } \) is closed.
Proof. The definition of \( {T}^{ * } \) is accounted for above; it remains to show the closedness.
Let \( {y}_{n}^{ * } \in D\left( {T}^{ * }\right) \) for \( n \in \mathbb{N} \), with \( {y}_{n}^{ * } \rightarrow {y}^{ * } \) and \( {T}^{ * }{y}_{n}^{ * } \rightarrow {z}^{ * } \) for \( n \rightarrow \infty \) ; then we must show that \( {y}^{ * } \in D\left( {T}^{ * }\right) \) with \( {T}^{ * }{y}^{ * } = {z}^{ * } \) (cf. Lemma 12.2). Now we have for all \( x \in D\left( T\right) \) :
\[
{y}^{ * }\left( {Tx}\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{y}_{n}^{ * }\left( {Tx}\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}\left( {{T}^{ * }{y}_{n}^{ * }}\right) \left( x\right) = {z}^{ * }\left( x\right) .
\]
This shows that \( {y}^{ * } \in D\left( {T}^{ * }\right) \) with \( {T}^{ * }{y}^{ * } = {z}^{ * } \) .
Here is some more notation: We denote the range of \( T \) by \( R\left( T\right) \), and we denote the kernel of \( T \) (i.e., the nullspace) by \( Z\left( T\right) \) ,
\[
Z\left( T\right) = \{ x \in D\left( T\right) \mid {Tx} = 0\} .
\]
When \( X = Y \), it is of interest to consider the operators \( T - {\lambda I} \) where \( \lambda \in \mathbb{C} \) and \( I \) is the identity operator (here \( D\left( {T - {\lambda I}}\right) = D\left( T\right) \) ). The resolvent set \( \varrho \left( T\right) \) is defined as the set of \( \lambda \in \mathbb{C} \) for which \( T - {\lambda I} \) is a bijection of \( D\left( T\right) \) onto \( X \) with bounded inverse \( {\left( T - \lambda I\right) }^{-1} \) ; the spectrum \( \sigma \left( T\right) \) is defined as the complement \( \mathbb{C} \smallsetminus \varrho \left( T\right) \) . The operator \( T - {\lambda I} \) is also written \( T - \lambda \) .
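In finite dimensions every operator is bounded and the spectrum reduces to the set of eigenvalues (this is special to matrices; in general the spectrum can be larger). A small NumPy illustration of \( \varrho \left( T\right) \) and \( \sigma \left( T\right) \) :

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# For a matrix, sigma(A) is the set of eigenvalues; here {2, 3}.
eigs = np.linalg.eigvals(A)
assert np.allclose(sorted(eigs.real), [2.0, 3.0])

# Any other lambda lies in rho(A): A - lambda*I has a bounded inverse,
# the resolvent.
lam = 5.0
R = np.linalg.inv(A - lam * np.eye(2))
assert np.allclose((A - lam * np.eye(2)) @ R, np.eye(2))
```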
## 12.2 Unbounded operators in Hilbert spaces
We now consider the case where \( X \) and \( Y \) are complex Hilbert spaces. Here the norm on the dual space \( {X}^{ * } \) of \( X \) is a Hilbert space norm, and the Riesz representation theorem assures that for any element \( {x}^{ * } \in {X}^{ * } \) there is a unique element \( v \in X \) such that
\[
{x}^{ * }\left( x\right) = \left( {x, v}\right) \text{ for all }x \in X;
\]
and here \( {\begin{Vmatrix}{x}^{ * }\end{Vmatrix}}_{{X}^{ * }} = \parallel v{\parallel }_{X} \) . In fact, the mapping \( {x}^{ * } \mapsto v \) is a bijective isometry, and one usually identifies \( {X}^{ * } \) with \( X \) by this mapping.
With this identification, the adjoint operator \( {T}^{ * } \) of a densely defined operator \( T : X \rightarrow Y \) is defined as the operator from \( Y \) to \( X \) for which
\[
{\left( Tx, y\right) }_{Y} = {\left( x,{T}^{ * }y\right) }_{X}\text{ for all }x \in D\left( T\right) ,
\]
(12.10)
with \( D\left( {T}^{ * }\right) \) equal to the set of all \( y \in Y \) for which there exists a \( z \in X \) such that \( z \) can play the role of \( {T}^{ * }y \) in (12.10).
Observe in particular that \( y \in Z\left( {T}^{ * }\right) \) if and only if \( y \bot R\left( T\right) \), so we always have
\[
Y = \overline{R\left( T\right) } \oplus Z\left( {T}^{ * }\right)
\]
(12.11)
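For a matrix \( A \) (playing the role of \( T \) ), (12.11) is the familiar statement that \( Z\left( {A}^{ * }\right) \) is the orthogonal complement of \( R\left( A\right) \) ; a sketch via the SVD (real case, so \( {A}^{ * } = {A}^{T} \) ):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))      # T : R^2 -> R^4, so Y = R^4

# Full SVD: the first r left singular vectors span R(A),
# the remaining ones span Z(A^T).
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))           # numerical rank of A
ran = U[:, :r]                       # orthonormal basis of R(A)
ker = U[:, r:]                       # orthonormal basis of Z(A^T)

assert np.allclose(A.T @ ker, 0)     # ker really lies in Z(A^T)
assert r + ker.shape[1] == 4         # dim R(A) + dim Z(A^T) = dim Y
assert np.allclose(ran.T @ ker, 0)   # the two summands are orthogonal
```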
It is not hard to show that when \( S : X \rightarrow Y, T : X \rightarrow Y \) and \( R : Y \rightarrow Z \) are densely defined, with \( D\left( {S + T}\right) \) and \( D\left( {RT}\right) \) dense in \( X \), then
\[
{S}^{ * } + {T}^{ * } \subset {\left( S + T\right) }^{ * }\text{and}{T}^{ * }{R}^{ * } \subset {\left( RT\right) }^{ * }\text{;}
\]
(12.12)
these inclusions can be sharp (cf. Exercise 12.7). Note in particular that fo
|
Lemma 12.2. \( T : X \rightarrow Y \) is closed if and only if the following holds: When \( {\left( {x}_{n}\right) }_{n \in \mathbb{N}} \) is a sequence in \( D\left( T\right) \) with \( {x}_{n} \rightarrow x \) in \( X \) and \( T{x}_{n} \rightarrow y \) in \( Y \), then \( x \in D\left( T\right) \) with \( y = {Tx} \).
|
The closed graph theorem (recalled in Appendix B, Theorem B.16) implies that if \( T : X \rightarrow Y \) is closed and has \( D\left( T\right) = X \), then \( T \) is bounded. Thus for closed, densely defined operators, \( D\left( T\right) \neq X \) is equivalent with unboundedness.
|
Theorem 3.14 (Gluing distributions together). Let \( {\left( {\omega }_{\lambda }\right) }_{\lambda \in \Lambda } \) be an arbitrary system of open sets in \( {\mathbb{R}}^{n} \) and let \( \Omega = \mathop{\bigcup }\limits_{{\lambda \in \Lambda }}{\omega }_{\lambda } \) . Assume that there is given a system of distributions \( {u}_{\lambda } \in {\mathcal{D}}^{\prime }\left( {\omega }_{\lambda }\right) \) with the property that \( {u}_{\lambda } \) equals \( {u}_{\mu } \) on \( {\omega }_{\lambda } \cap {\omega }_{\mu } \), for each pair of indices \( \lambda ,\mu \in \Lambda \) . Then there exists one and only one distribution \( u \in {\mathcal{D}}^{\prime }\left( \Omega \right) \) such that \( {\left. u\right| }_{{\omega }_{\lambda }} = {u}_{\lambda } \) for all \( \lambda \in \Lambda \) .

Proof. Observe to begin with that there is at most one solution \( u \) . Namely, if \( u \) and \( v \) are solutions, then \( {\left. \left( u - v\right) \right| }_{{\omega }_{\lambda }} = 0 \) for all \( \lambda \) . This implies that \( u - v = 0 \), by Lemma 3.11.
We construct \( u \) as follows: Let \( {\left( {K}_{l}\right) }_{l \in \mathbb{N}} \) be a sequence of compact sets as in (2.4) and consider a fixed \( l \) . Since \( {K}_{l} \) is compact, it is covered by a finite subfamily \( {\left( {\Omega }_{j}\right) }_{j = 1,\ldots, N} \) of the sets \( {\left( {\omega }_{\lambda }\right) }_{\lambda \in \Lambda } \) ; we denote \( {u}_{j} \) the associated distributions given in \( {\mathcal{D}}^{\prime }\left( {\Omega }_{j}\right) \), respectively. By Theorem 2.17 there is a partition of unity \( {\psi }_{1},\ldots ,{\psi }_{N} \) consisting of functions \( {\psi }_{j} \in {C}_{0}^{\infty }\left( {\Omega }_{j}\right) \) satisfying \( {\psi }_{1} + \cdots + {\psi }_{N} = 1 \) on \( {K}_{l} \) . For \( \varphi \in {C}_{{K}_{l}}^{\infty }\left( \Omega \right) \) we set
\[
\langle u,\varphi {\rangle }_{\Omega } = {\left\langle u,\mathop{\sum }\limits_{{j = 1}}^{N}{\psi }_{j}\varphi \right\rangle }_{\Omega } = \mathop{\sum }\limits_{{j = 1}}^{N}{\left\langle {u}_{j},{\psi }_{j}\varphi \right\rangle }_{{\Omega }_{j}}.
\]
(3.39)
In this way, we have given \( \langle u,\varphi \rangle \) a value which apparently depends on a lot of choices (of \( l \), of the subfamily \( {\left( {\Omega }_{j}\right) }_{j = 1,\ldots, N} \) and of the partition of unity \( \left. \left\{ {\psi }_{j}\right\} \right) \) . But if \( {\left( {\Omega }_{k}^{\prime }\right) }_{k = 1,\ldots, M} \) is another subfamily covering \( {K}_{l} \), and \( {\psi }_{1}^{\prime },\ldots ,{\psi }_{M}^{\prime } \) is an associated partition of unity, we have, with \( {u}_{k}^{\prime } \) denoting the distribution given on \( {\Omega }_{k}^{\prime } \) :
\[
\mathop{\sum }\limits_{{j = 1}}^{N}{\left\langle {u}_{j},{\psi }_{j}\varphi \right\rangle }_{{\Omega }_{j}} = \mathop{\sum }\limits_{{j = 1}}^{N}\mathop{\sum }\limits_{{k = 1}}^{M}{\left\langle {u}_{j},{\psi }_{k}^{\prime }{\psi }_{j}\varphi \right\rangle }_{{\Omega }_{j}} = \mathop{\sum }\limits_{{j = 1}}^{N}\mathop{\sum }\limits_{{k = 1}}^{M}{\left\langle {u}_{j},{\psi }_{k}^{\prime }{\psi }_{j}\varphi \right\rangle }_{{\Omega }_{j} \cap {\Omega }_{k}^{\prime }}
\]
\[
= \mathop{\sum }\limits_{{j = 1}}^{N}\mathop{\sum }\limits_{{k = 1}}^{M}{\left\langle {u}_{k}^{\prime },{\psi }_{k}^{\prime }{\psi }_{j}\varphi \right\rangle }_{{\Omega }_{k}^{\prime }} = \mathop{\sum }\limits_{{k = 1}}^{M}{\left\langle {u}_{k}^{\prime },{\psi }_{k}^{\prime }\varphi \right\rangle }_{{\Omega }_{k}^{\prime }},
\]
since \( {u}_{j} = {u}_{k}^{\prime } \) on \( {\Omega }_{j} \cap {\Omega }_{k}^{\prime } \) . This shows that \( u \) has been defined for \( \varphi \in \) \( {C}_{{K}_{l}}^{\infty }\left( \Omega \right) \) independently of the choice of finite subcovering of \( {K}_{l} \) and associated partition of unity. If we use such a definition for each \( {K}_{l}, l = 1,2,\ldots \), we find moreover that these definitions are consistent with each other. Indeed, for both \( {K}_{l} \) and \( {K}_{l + 1} \) one can use one cover and partition of unity chosen for \( {K}_{l + 1} \) . (In a similar way one finds that \( u \) does not depend on the choice of the sequence \( {\left( {K}_{l}\right) }_{l \in \mathbb{N}} \) .) This defines \( u \) as an element of \( {\mathcal{D}}^{\prime }\left( \Omega \right) \) .
Now we check the consistency of \( u \) with each \( {u}_{\lambda } \) as follows: Let \( \lambda \in \Lambda \) . For each \( \varphi \in {C}_{0}^{\infty }\left( {\omega }_{\lambda }\right) \) there is an \( l \) such that \( \varphi \in {C}_{{K}_{l}}^{\infty }\left( \Omega \right) \) . Then \( \langle u,\varphi \rangle \) can be defined by (3.39). Here
\[
\langle u,\varphi {\rangle }_{\Omega } = {\left\langle u,\mathop{\sum }\limits_{{j = 1}}^{N}{\psi }_{j}\varphi \right\rangle }_{\Omega } = \mathop{\sum }\limits_{{j = 1}}^{N}{\left\langle {u}_{j},{\psi }_{j}\varphi \right\rangle }_{{\Omega }_{j}}
\]
\[
= \mathop{\sum }\limits_{{j = 1}}^{N}{\left\langle {u}_{j},{\psi }_{j}\varphi \right\rangle }_{{\Omega }_{j} \cap {\omega }_{\lambda }} = \mathop{\sum }\limits_{{j = 1}}^{N}{\left\langle {u}_{\lambda },{\psi }_{j}\varphi \right\rangle }_{{\Omega }_{j} \cap {\omega }_{\lambda }} = {\left\langle {u}_{\lambda },\varphi \right\rangle }_{{\omega }_{\lambda }},
\]
which shows that \( {\left. u\right| }_{{\omega }_{\lambda }} = {u}_{\lambda } \) .
In the French literature the procedure is called "recollement des morceaux" (gluing the pieces together).
A very simple example is the case where \( u \in {\mathcal{E}}^{\prime }\left( \Omega \right) \) is glued together with the 0 -distribution on a neighborhood of \( {\mathbb{R}}^{n} \smallsetminus \Omega \) . In other words, \( u \) is "extended by 0" to a distribution in \( {\mathcal{E}}^{\prime }\left( {\mathbb{R}}^{n}\right) \) . Such an extension is often tacitly understood.
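As a simple concrete illustration (ours, not from the text): take \( \Omega = \mathbb{R} \), \( {\omega }_{1} = \left( { - \infty ,1}\right) \), \( {\omega }_{2} = \left( {0,\infty }\right) \), and let \( {u}_{1},{u}_{2} \) be the restrictions to \( {\omega }_{1},{\omega }_{2} \) of the Heaviside function \( H \) . On \( {\omega }_{1} \cap {\omega }_{2} = \left( {0,1}\right) \) both equal the constant 1, so Theorem 3.14 produces the unique \( u \in {\mathcal{D}}^{\prime }\left( \mathbb{R}\right) \) with these restrictions, namely \( u = H \) :
\[
\langle u,\varphi \rangle = {\int }_{0}^{\infty }\varphi \left( x\right) {dx},\;\varphi \in {C}_{0}^{\infty }\left( \mathbb{R}\right) .
\]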
## 3.4 Convolutions and coordinate changes
We here give two other useful applications of Theorem 3.8, namely, an extension to \( {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{n}\right) \) of the definition of convolutions with \( \varphi \), and a generalization of coordinate changes. First we consider convolutions:
When \( \varphi \) and \( \psi \) are in \( {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) \), then \( \varphi * \psi \) (recall (2.26)) is in \( {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) \) and satisfies \( {\partial }^{\alpha }\left( {\varphi * \psi }\right) = \varphi * {\partial }^{\alpha }\psi \) for each \( \alpha \) . Note here that \( \varphi * \psi \left( x\right) \) is 0 except if \( x - y \in \operatorname{supp}\varphi \) for some \( y \in \operatorname{supp}\psi \) ; the latter means that \( x \in \operatorname{supp}\varphi + y \) for some \( y \in \operatorname{supp}\psi \), i.e., \( x \in \operatorname{supp}\varphi + \operatorname{supp}\psi \) . Thus
\[
\operatorname{supp}\varphi * \psi \subset \operatorname{supp}\varphi + \operatorname{supp}\psi
\]
(3.40)
The map \( \psi \mapsto \varphi * \psi \) is continuous, for if \( K \) is an arbitrary compact subset of \( {\mathbb{R}}^{n} \), then the application of \( \varphi * \) to \( {C}_{K}^{\infty }\left( {\mathbb{R}}^{n}\right) \) gives a continuous map into \( {C}_{K + \operatorname{supp}\varphi }^{\infty }\left( {\mathbb{R}}^{n}\right) \) , since one has for \( k \in {\mathbb{N}}_{0} \) :
\[
\sup \left\{ {\left| {{\partial }^{\alpha }\left( {\varphi * \psi }\right) \left( x\right) }\right| \mid x \in {\mathbb{R}}^{n},\left| \alpha \right| \leq k}\right\}
\]
\[
= \sup \left\{ {\left| {\varphi * {\partial }^{\alpha }\psi \left( x\right) }\right| \mid x \in {\mathbb{R}}^{n},\left| \alpha \right| \leq k}\right\}
\]
\[
\leq \parallel \varphi {\parallel }_{{L}_{1}} \cdot \sup \left\{ {\left| {{\partial }^{\alpha }\psi \left( x\right) }\right| \mid x \in K,\left| \alpha \right| \leq k}\right\} \;\text{ for }\psi \text{ in }{C}_{K}^{\infty }\left( {\mathbb{R}}^{n}\right) .
\]
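The support rule (3.40) and the \( {L}_{1} \) bound above have an easy discrete analogue that can be checked numerically. The following sketch (our illustration, using numpy; not from the text) convolves two finitely supported sequences and verifies that the support of the result lies in the sumset of the two supports.

```python
import numpy as np

# Discrete analogue of (3.40): for finitely supported sequences, the support
# of a convolution is contained in the sumset of the two supports.
phi = np.array([0.0, 1.0, 2.0, 0.0, 0.0])   # support {1, 2}
psi = np.array([0.0, 0.0, 3.0, 0.0])        # support {2}

conv = np.convolve(phi, psi)                # (phi*psi)[n] = sum_k phi[k] psi[n-k]

supp_phi = {i for i, v in enumerate(phi) if v != 0}
supp_psi = {i for i, v in enumerate(psi) if v != 0}
supp_conv = {i for i, v in enumerate(conv) if v != 0}
sumset = {a + b for a in supp_phi for b in supp_psi}

assert supp_conv <= sumset   # supp(phi*psi) contained in supp(phi) + supp(psi)
```

In this example the containment is an equality; with signed entries, cancellation can make it strict, just as for distributions.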
Denote \( \varphi \left( {-x}\right) \) by \( \check{\varphi }\left( x\right) \) (the operator \( S : \varphi \left( x\right) \mapsto \varphi \left( {-x}\right) \) can be called the antipodal operator). One has for \( \varphi \) and \( \chi \) in \( {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right), u \in {L}_{1,\operatorname{loc}}\left( {\mathbb{R}}^{n}\right) \) that
\[
\langle \varphi * u,\chi \rangle = {\int }_{{\mathbb{R}}^{n}}\left( {\varphi * u}\right) \left( y\right) \chi \left( y\right) {dy} = {\int }_{{\mathbb{R}}^{n}}{\int }_{{\mathbb{R}}^{n}}\varphi \left( x\right) u\left( {y - x}\right) \chi \left( y\right) {dxdy}
\]
\[
= {\int }_{{\mathbb{R}}^{n}}u\left( x\right) \left( {\check{\varphi } * \chi }\right) \left( x\right) {dx} = \langle u,\check{\varphi } * \chi \rangle
\]
by the Fubini theorem. So we see that the adjoint \( {T}^{ \times } \) of \( T = \check{\varphi } * : \mathcal{D}\left( {\mathbb{R}}^{n}\right) \rightarrow \) \( \mathcal{D}\left( {\mathbb{R}}^{n}\right) \) acts like \( \varphi * \) on functions in \( {L}_{1,\operatorname{loc}}\left( {\mathbb{R}}^{n}\right) \) . Therefore we define the operator \( \varphi * \) on distributions as the adjoint of the operator \( \check{\varphi } * \) on test functions:
\[
\langle \varphi * u,\chi \rangle = \langle u,\check{\varphi } * \chi \rangle, u \in {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{n}\right) ,\varphi ,\chi \in {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) ;
\]
(3.41)
this makes \( u \mapsto \varphi * u \) a continuous operator on \( {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{n}\right) \) by Theorem 3.8. The rule
\[
{\partial }^{\alpha }\left( {\varphi * u}\right) = \left( {{\partial }^{\alpha }\varphi }\right) * u = \varphi * \left( {{\partial }^{\alpha }u}\right) ,\text{ for }\varphi \in {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right), u \in {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{n}\right) ,
\]
(3.42)
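As a quick check of the definitions (a standard computation, consistent with (3.41) and (3.42)): take \( u = \delta \), the Dirac measure at 0 . Then for \( \chi \in {C}_{0}^{\infty }\left( {\mathbb{R}}^{n}\right) \)
\[
\langle \varphi * \delta ,\chi \rangle = \langle \delta ,\check{\varphi } * \chi \rangle = \left( {\check{\varphi } * \chi }\right) \left( 0\right) = {\int }_{{\mathbb{R}}^{n}}\check{\varphi }\left( { - y}\right) \chi \left( y\right) {dy} = {\int }_{{\mathbb{R}}^{n}}\varphi \left( y\right) \chi \left( y\right) {dy},
\]
so \( \varphi * \delta = \varphi \), and (3.42) then gives \( \varphi * {\partial }^{\alpha }\delta = {\partial }^{\alpha }\varphi \) .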
follows by use of the defining formulas and calculations on test functions.
Lemma 1.3. Let \( {S}_{1} \) and \( {S}_{2} \) be any two projective subspaces of \( {\mathbb{P}}^{n}\left( k\right) \) . Then
\[
\operatorname{cod}\left( {{S}_{1} \cap {S}_{2}}\right) \leq \operatorname{cod}{S}_{1} + \operatorname{cod}{S}_{2}.
\]
Proof. Any subspace \( {k}^{r + 1} \) has codimension \( n - r \) in \( {k}^{n + 1} \) ; therefore the associated subspace \( {\mathbb{P}}^{r}\left( k\right) \) has the same codimension \( n - r \) in \( {\mathbb{P}}^{n}\left( k\right) \) . Then apply the corresponding vector space theorem.
Hence any two projective 2-spaces in \( {\mathbb{P}}^{3}\left( \mathbb{R}\right) \) intersect in at least a line, and so on. We show later that this dimension relation holds for any two algebraic varieties in \( {\mathbb{P}}^{n}\left( \mathbb{C}\right) \) (Section IV,3); we prove it for projective curves in \( {\mathbb{P}}^{2}\left( \mathbb{C}\right) \) in Section II,6.
Another advantage of Definition 1.1 is that it allows us to define coordinates on \( {\mathbb{P}}^{n}\left( k\right) \) . This will be extremely useful later on.
Definition 1.4. Let \( P \) be a point of \( {\mathbb{P}}^{n}\left( k\right) \), and \( {L}_{P} \), the corresponding 1-subspace of \( {k}^{n + 1} \) . The \( \left( {n + 1}\right) \) -tuple of coordinates \( \left( {{a}_{1},\ldots ,{a}_{n + 1}}\right) \) of any nonzero point in \( {L}_{P} \) is called a coordinate set of \( P \) . More informally, we say that \( \left( {{a}_{1},\ldots ,{a}_{n + 1}}\right) \) are coordinates of \( P \) .
Remark 1.5. Coordinate sets of \( P \) are never uniquely determined, unless \( k \) is the two-element field; however, any two coordinate sets of \( P \) differ by a scalar multiple.
Definition 1.6. Two nonzero \( \left( {n + 1}\right) \) -tuples \( \left( {{a}_{1},\ldots ,{a}_{n + 1}}\right) \) and \( \left( {{b}_{1},\ldots ,{b}_{n + 1}}\right) \) of \( {k}^{n + 1} \) are equivalent if
\[
\left( {{b}_{1},\ldots ,{b}_{n + 1}}\right) = \left( {c{a}_{1},\ldots, c{a}_{n + 1}}\right)
\]
for some nonzero \( c \in k \) . (Hence all coordinate sets of any \( P \in {\mathbb{P}}^{n}\left( k\right) \) are equivalent.)
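Definition 1.6 is easy to mechanize. The following sketch (our illustration, working over \( k = \mathbb{Q} \) for exact arithmetic) tests whether two nonzero tuples differ by a nonzero scalar.

```python
from fractions import Fraction

def equivalent(a, b):
    """Test Definition 1.6 over k = Q: is b = (c*a_1, ..., c*a_{n+1}) for
    some nonzero scalar c?  Inputs are tuples of ints or Fractions."""
    if len(a) != len(b) or not any(a) or not any(b):
        return False
    c = None
    for ai, bi in zip(a, b):
        if (ai == 0) != (bi == 0):
            return False                  # the zero pattern must agree
        if ai != 0:
            ratio = Fraction(bi, ai)
            if c is None:
                c = ratio                 # candidate scalar
            elif ratio != c:
                return False
    return True

# (1, 2, 3) and (2, 4, 6) are coordinate sets of one point of P^2(Q):
assert equivalent((1, 2, 3), (2, 4, 6))
assert not equivalent((1, 2, 3), (2, 4, 5))
```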
Now let us relate, in a more direct way, Definition 1.1 to our definition of \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) in Chapter I. The same kind of arguments we use now will enable us to see how the general definition reduces to those in Chapter I for the special cases considered there.
First, using Definition 1.1 and an \( \left( {{X}_{1},{X}_{2},{X}_{3}}\right) \) -coordinate system of \( {\mathbb{R}}^{3} \) , we see the points of \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) fall into two classes-those having zero as last coordinate, and those with nonzero last coordinate. If a triple \( \left( {{a}_{1},{a}_{2},{a}_{3}}\right) \) satisfies \( {a}_{3} \neq 0 \), then dividing by \( {a}_{3} \) yields a triple \( \left( {{b}_{1},{b}_{2},1}\right) \) . All such points of \( {\mathbb{R}}^{3} \) constitute the 2-plane \( \mathbf{V}\left( {{X}_{3} - 1}\right) \), and this establishes a 1:1 correspondence between a part of \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) and the plane \( \mathbf{V}\left( {{X}_{3} - 1}\right) \) (since two triples \( \left( {{b}_{1},{b}_{2},1}\right) \) and \( \left( {{b}_{1}^{\prime },{b}_{2}^{\prime },1}\right) \) are equivalent iff \( {b}_{1} = {b}_{1}^{\prime } \) and \( {b}_{2} = {b}_{2}^{\prime } \) ).
Now if a point of \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) has last coordinate zero, say \( \left( {{b}_{1},{b}_{2},0}\right) \), then all scalar multiples of it form a line \( L \) in \( {\mathbb{R}}_{{X}_{1}{X}_{2}} \) . Then \( L + \left( {0,0,1}\right) \) is a line in our hyperplane through \( \left( {0,0,1}\right) \) . Hence the points with zero last coordinate may be identified in a natural way with the set of all lines through \( \left( {0,0,1}\right) \) within the plane \( \mathrm{V}\left( {{X}_{3} - 1}\right) \), while the points with nonzero last coordinates correspond to the points of the plane \( \mathrm{V}\left( {{X}_{3} - 1}\right) \) . Hence the set \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) , according to Definition 1.1, is in 1:1 onto correspondence with the points of \( {\mathbb{R}}^{2} \), together with all 1 -subspaces of \( {\mathbb{R}}^{2} \) . But this is precisely the way \( {\mathbb{R}}^{2} \) was completed in Chapter I-to \( {\mathbb{R}}^{2} \) we added one new element for each different 1-subspace of \( {\mathbb{R}}^{2} \) . It is straightforward to check that either definition yields the same open sets on \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) ; hence both definitions yield the same topological space. We can now apply precisely the same kind of reasoning to show that
Figure 2
Definition 1.1 reduces to the ones in Chapter I for the special cases there.
There is yet another advantage of Definition 1.1-it gives a very nice way of passing between the "affine" and the "projective." Let \( {\mathbb{P}}^{n}\left( k\right) \) be as above; any \( n \) -dimensional subspace \( W \) of \( {k}^{n + 1} \) then defines an \( \left( {n - 1}\right) \) -dimensional projective subspace \( {\mathbb{P}}^{n - 1}\left( k\right) \) of \( {\mathbb{P}}^{n}\left( k\right) \) . We may choose this subspace to play the role of "projective hyperplane at infinity," \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \subset {\mathbb{P}}^{n}\left( k\right) \) . What does the set of remaining points of \( {\mathbb{P}}^{n}\left( k\right) \) look like? If we parallel-translate the subspace \( W \) through a fixed vector \( {v}_{0} \) in \( {k}^{n + 1} \smallsetminus W \), obtaining
\[
{v}_{0} + W = \left\{ {{v}_{0} + w \mid w \in W}\right\}
\]
then each 1-subspace in \( {k}^{n + 1} \smallsetminus W \) meets \( {v}_{0} + W \) in exactly one point. This sets up a 1:1 onto correspondence between the points of \( {\mathbb{P}}^{n}\left( k\right) \smallsetminus {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) and the points of \( {k}^{n} \) ; Figure 2 indicates a typical situation for \( k = \mathbb{R}, n = 2 \) .
Definition 1.7. We call the set \( {\mathbb{P}}^{n}\left( k\right) \smallsetminus {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) the affine part of \( {\mathbb{P}}^{n}\left( k\right) \) relative to the hyperplane at infinity \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) .
Any affine \( n \) -space may be regarded as the affine part of a \( {\mathbb{P}}^{n}\left( k\right) \) relative to some \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \) by taking a parallel translate of an \( n \) -dimensional subspace \( W \) of \( {k}^{n + 1} \), and identifying each point \( P \) of this parallel translate with the 1- subspace of \( {k}^{n + 1} \) through \( P \) .
There are \( n + 1 \) particularly simple choices of \( {\mathbb{P}}_{\infty }{}^{n - 1}\left( k\right) \), namely the projective hyperplanes defined by each of the \( n + 1 \) hyperplanes \( {X}_{i} = 0 \) where \( i = 1,\ldots, n + 1 \) . The following important observation will be particularly useful in the sequel:
The corresponding \( n + 1 \) affine parts of \( {\mathbb{P}}^{n}\left( k\right) \) completely cover \( {\mathbb{P}}^{n}\left( k\right) \) .
This is true since the affine part corresponding to \( {X}_{1} = 0 \) covers all of \( {\mathbb{P}}^{n}\left( k\right) \) except those points represented by the 1 -subspaces contained in \( {X}_{1} = 0 \) ; the union of the affine parts corresponding to \( {X}_{1} = 0 \) and \( {X}_{2} = 0 \) then covers all of \( {\mathbb{P}}^{n}\left( k\right) \) except those points represented by the 1-subspaces in the intersection of \( {X}_{1} = 0 \) and \( {X}_{2} = 0 \) ; and so on. Clearly there are no 1 -subspaces in \( {k}^{n + 1} \) common to \( {X}_{1} = {X}_{2} = \ldots = {X}_{n + 1} = 0 \), so the union of all these \( n + 1 \) affine parts covers all of \( {\mathbb{P}}^{n}\left( k\right) \) .
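The covering argument can be phrased algorithmically: given any nonzero coordinate set, pick the first index \( i \) with \( {X}_{i} \neq 0 \) and dehomogenize by dividing through. A small sketch (our illustration, not from the text):

```python
def affine_chart(coords):
    """Given a nonzero coordinate set of a point of P^n(k), return (i, pt):
    the least i (1-based) with X_i != 0, and the dehomogenized coordinates
    of the point in the affine part relative to the hyperplane X_i = 0."""
    if not any(coords):
        raise ValueError("(0, ..., 0) represents no point of projective space")
    i = next(j for j, a in enumerate(coords) if a != 0)
    pivot = coords[i]
    affine = tuple(a / pivot for j, a in enumerate(coords) if j != i)
    return i + 1, affine

# Every nonzero tuple lands in some chart, so the n+1 affine parts cover P^n:
assert affine_chart((2, 4, 6)) == (1, (2.0, 3.0))
assert affine_chart((0, 0, 5)) == (3, (0.0, 0.0))
```

Note that equivalent coordinate sets land in the same chart and dehomogenize to the same affine point, as they must.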
We have seen how an arbitrary \( \left( {n - 1}\right) \) -dimensional subspace can be the hyperplane at infinity of \( {\mathbb{P}}^{n}\left( k\right) \) . It is fair to ask why one would want to do this in the first place. Why not just stick to one standard affine \( n \) -space, letting its points be the finite points, and the added points always be the points at infinity? One answer is this : There is often much important geometry going on "at infinity," and many times one needs to know more precisely what is happening there. It is helpful in this to be able to "move" the line at infinity so that the infinite points become finite; these points then become points in an ordinary affine variety, where methods developed for affine varieties can be applied.
Example 1.8. Consider the cubic curve \( \mathbf{V}\left( {Z - {X}^{3}}\right) \) in \( {\mathbb{P}}^{2}\left( \mathbb{C}\right) \) . It is of degree 3, so by Bézout's theorem any projective line must intersect this projective curve in 3 points, counted with multiplicity. Now it happens that if the projective line contains a line in the real part of \( {\mathbb{C}}_{XZ} \), then all three of these points are "real"-that is, they are all in the projective completion \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) of the real part of \( {\mathbb{C}}_{XZ} \) . Figure 8e shows the part of this curve in \( {\mathbb{P}}^{2}\left( \mathbb{R}\right) \) . Since \( Z \) increases much faster than \( X \) for large \( X \), the branches approach the \( Z \) -axis, and both meet at the infinite point of the \( Z \) -axis. The completed \( Z \) -axis should intersect the cubic in 3 points. The origin is clearly one point, the point at infinity, another. So far we have two points. But let's look more closely at what happens at infinity-in fact, let us try to get an explicit equation describing the curve near this infinite point.
First, represent the points of \( \mathbf{V}\left( {Z - {X}^{3}}\right) \) in homogeneous coordinates.
Theorem 2.1.4 (Greene-Krantz [GRK2]). Let \( B \subseteq {\mathbb{C}}^{n} \) be the unit ball. Let \( {\rho }_{0}\left( z\right) = {\left| z\right| }^{2} - 1 \) be the usual defining function for \( B \) . If \( \epsilon > 0 \) is sufficiently small, \( k = k\left( n\right) \) is sufficiently large, and \( \Omega \in {\mathcal{U}}_{\epsilon }^{k}\left( B\right) \) then either
\[
\Omega \sim B
\]
(2.1.4.1)
or
\( \Omega \) is not biholomorphic to the ball and
(2.1.4.2)
(a) Aut \( \left( \Omega \right) \) is compact.
(b) Aut \( \left( \Omega \right) \) has a fixed point. Moreover,
If \( K \subset \subset B,\epsilon > 0 \) is sufficiently small (depending on \( K \) ), and \( \Omega \in {\mathcal{U}}_{\epsilon }^{k}\left( B\right) \) has the property that its fixed point set lies in \( K \), then there is a biholomorphic mapping \( \Phi : \Omega \rightarrow \Phi \left( \Omega \right) \equiv {\Omega }^{\prime } \subseteq {\mathbb{C}}^{n} \) such that \( \operatorname{Aut}\left( {\Omega }^{\prime }\right) \) is the restriction to \( {\Omega }^{\prime } \) of a subgroup of the group of unitary matrices.
The collection of domains to which (2.1.4.2) applies is both dense and open.
Theorem 2.1.4 shows, in a weak sense, that domains near the ball that have any automorphisms other than the identity are (biholomorphic to) domains with only Euclidean automorphisms. It should be noted that (2.1.4.2a) is already contained in the theorem of Bun Wong and Rosay [WON, ROS] and that the denseness of the domains to which (2.1.4.2) applies is contained in the work of Burns-Shnider-Wells. The proof of Theorem 2.1.4 involves a detailed analysis of Fefferman's asymptotic expansion for the Bergman kernel and of the \( \bar{\partial } \) -Neumann problem and would double the length of this book if we were to treat it in any detail.
The purpose of this lengthy introduction has been to establish the importance of Theorem 2.1.4 and to set the stage for what follows. It may be noted that the result analogous to Fefferman’s in \( {\mathbb{C}}^{1} \), that a biholomorphic mapping of smooth domains extends smoothly to the boundary, was proved in the nineteenth century by Painlevé [PAI]. The result in one complex dimension has been highly refined, beginning with work of Kellogg [KEL] and more recently by Warschawski [WAR1, WAR2, WAR3], Rodin and Warschawski [ROW], and others. This classical work uses harmonic estimation, potential theory, and the Jordan curve theorem, devices which have no direct analogue in higher dimensions. A short, self-contained, proof of the one-variable result-using ideas closely related to those presented here-appears in [BEK].
We conclude this section by presenting a short and elegant proof of Fefferman’s Theorem 2.1.3. The techniques are due to Bell [BEL1] and Bell and Ligocka [BELL]. The proof uses an important and nontrivial fact (known as "Condition \( R \) " of Bell and Ligocka) about the \( \bar{\partial } \) -Neumann problem. We will actually prove Condition \( R \) for a strictly pseudoconvex domain in Theorem 4.4.5. (Condition \( R \) , and more generally the solution of the \( \bar{\partial } \) -Neumann problem, is considered in detail in the book Krantz [KRA4].)
Let \( \Omega \subset \subset {\mathbb{C}}^{n} \) be a domain with \( {C}^{\infty } \) boundary. We define
Condition \( R \) (Bell [BEL1]) Define an operator on \( {L}^{2}\left( \Omega \right) \) by
\[
{Pf}\left( z\right) = {\int }_{\Omega }K\left( {z,\zeta }\right) f\left( \zeta \right) \mathrm{d}V\left( \zeta \right)
\]
where \( K\left( {z,\zeta }\right) \) is the Bergman kernel for \( \Omega \) . This is the Bergman projection. Then, for each \( j > 0 \), there is an \( m = m\left( j\right) > 0 \) such that \( P \) satisfies the estimates
\[
\parallel {Pf}{\parallel }_{{W}^{j}\left( \Omega \right) } \leq {C}_{j}\parallel f{\parallel }_{{W}^{m}\left( \Omega \right) }
\]
for all testing functions \( f \) .
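For orientation we recall the classical model case (a standard fact, not derived in the text): for the unit disc \( D \subset \mathbb{C} \) one has
\[
K\left( {z,\zeta }\right) = \frac{1}{\pi }\frac{1}{{\left( 1 - z\bar{\zeta }\right) }^{2}},
\]
and \( P \) is the orthogonal projection of \( {L}^{2}\left( D\right) \) onto the closed subspace of square-integrable holomorphic functions. For instance \( P\bar{z} = 0 \), since \( \bar{z} \) is orthogonal in \( {L}^{2}\left( D\right) \) to every monomial \( {z}^{n} \) .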
Using a little Sobolev theory (see [KRA4]), one can easily see that this formulation of Condition \( R \) is equivalent to the condition that the Bergman kernel map \( {C}^{\infty }\left( \bar{\Omega }\right) \) to \( {C}^{\infty }\left( \bar{\Omega }\right) \) .
The deep fact, which we shall prove in Sect. 4.4, is that Condition R holds on any strictly pseudoconvex domain.
In fact we can and should say what is the key idea in establishing this last assertion. Let \( P : {L}^{2}\left( \Omega \right) \rightarrow {L}^{2}\left( \Omega \right) \) be the Bergman projection. The operator
\[
\bar{\partial } : {\bigwedge }^{0, j} \rightarrow {\bigwedge }^{0, j + 1}
\]
(2.1.5)
is the usual exterior differential operator of complex analysis. One may show that the second-order, elliptic partial differential operator \( \square = \bar{\partial }{\bar{\partial }}^{ * } + {\bar{\partial }}^{ * }\bar{\partial } \) has a canonical right inverse called \( N \) . This is the \( \bar{\partial } \) -Neumann operator. These operators are treated in detail in [FOK] and [KRA4]. Then it is a straightforward exercise in Hilbert space theory to verify that
\[
P = I - {\bar{\partial }}^{ * }N\bar{\partial }
\]
where \( P \) is the Bergman projection. Now the references [FOK] and [KRA4] prove in detail that \( N \) maps \( {W}^{s} \) (the Sobolev space of order \( s \) ) to \( {W}^{s + 1} \) for every \( s \) . It follows from this and formula (2.1.5) that \( P \) maps \( {W}^{s} \) to \( {W}^{s - 1} \) . That is enough to verify Condition \( R \) .
We remark in passing that, in general, it does not matter whether \( m\left( j\right) \) is much larger than \( j \) or whether the \( m\left( j\right) \) in Condition \( R \) depends polynomially on \( j \) or exponentially on \( j \) . It so happens that, for a strictly pseudoconvex domain, we may take \( m\left( j\right) = j \) . This assertion is proved in [KRA4] in detail. On the other hand, Barrett [BAR1] has shown that, on the Diederich-Fornaess worm domain [DIF1], we must take \( m\left( j\right) > j \) . Later on, Christ [CHR1] showed that Condition \( R \) fails altogether on the worm.
Now we build a sequence of lemmas leading to Fefferman’s theorem. First we record some notation.
We let \( {W}^{j}\left( \Omega \right) \) be the usual Sobolev space. See [KRA4] for this idea.
If \( \Omega \subset \subset {\mathbb{C}}^{n} \) is any smoothly bounded domain and if \( j \in \mathbb{N} \), we let
\( W{H}^{j}\left( \Omega \right) = {W}^{j}\left( \Omega \right) \cap \{ \) holomorphic functions on \( \Omega \} \) ,
\[
W{H}^{\infty }\left( \Omega \right) = \mathop{\bigcap }\limits_{{j = 1}}^{\infty }W{H}^{j}\left( \Omega \right) = {C}^{\infty }\left( \bar{\Omega }\right) \cap \{ \text{ holomorphic functions on }\Omega \} .
\]
Here \( {W}^{j} \) is the standard Sobolev space on a domain (for which see [KRA4, ADA]). Let \( {W}_{0}^{j}\left( \Omega \right) \) be the \( {W}^{j} \) closure of \( {C}_{c}^{\infty }\left( \Omega \right) \) . [Exercise: if \( j \) is sufficiently large, then the Sobolev embedding theorem implies trivially that \( {W}_{0}^{j}\left( \Omega \right) \) is a proper subset of \( {W}^{j}\left( \Omega \right) \) . \( {}^{1} \) ]
Let us say that \( u, v \in {C}^{\infty }\left( \bar{\Omega }\right) \) agree up to order \( k \) on \( \partial \Omega \) if
\[
{\left. {\left( \frac{\partial }{\partial z}\right) }^{\alpha }{\left( \frac{\partial }{\partial \bar{z}}\right) }^{\beta }\left( u - v\right) \right| }_{\partial \Omega } = 0\;\forall \alpha ,\beta \;\text{ with }\;\left| \alpha \right| + \left| \beta \right| \leq k.
\]
Lemma 2.1.6. Let \( \Omega \subset {\mathbb{C}}^{n} \) be smoothly bounded and strictly pseudoconvex. Let \( w \in \Omega \) be fixed. Let \( K \) denote the Bergman kernel. There is a constant \( {C}_{w} > 0 \) such that
\[
\parallel K\left( {w, \cdot }\right) {\parallel }_{\text{sup }} \leq {C}_{w}
\]
Proof. The function \( K\left( {z, \cdot }\right) \) is harmonic. Let \( \phi : \Omega \rightarrow \mathbb{R} \) be a radial, \( {C}_{c}^{\infty } \) function centered at \( w \) . Assume that \( \phi \geq 0 \) and \( \int \phi \left( \zeta \right) \mathrm{d}V\left( \zeta \right) = 1 \) . Then the mean value property implies that
\[
K\left( {z, w}\right) = {\int }_{\Omega }K\left( {z,\zeta }\right) \phi \left( \zeta \right) \mathrm{d}V\left( \zeta \right)
\]
But the last expression equals \( {P\phi }\left( z\right) \) . Therefore
\[
\parallel K\left( {w, \cdot }\right) {\parallel }_{\text{sup }} = \mathop{\sup }\limits_{{z \in \Omega }}\left| {K\left( {w, z}\right) }\right|
\]
\[
= \mathop{\sup }\limits_{{z \in \Omega }}\left| {K\left( {z, w}\right) }\right|
\]
\[
= \mathop{\sup }\limits_{{z \in \Omega }}\left| {{P\phi }\left( z\right) }\right|
\]
By Sobolev's Theorem, this is
\[
\leq C\left( \Omega \right) \cdot \parallel {P\phi }{\parallel }_{W{H}^{{2n} + 1}}.
\]
By Condition \( R \), this is
\[
\leq C\left( \Omega \right) \cdot \parallel \phi {\parallel }_{{W}^{m\left( {{2n} + 1}\right) }} \equiv {C}_{w}.
\]
\( {}^{1} \) For the reader’s convenience, we recall here that the Sobolev embedding theorem says that if a function on \( {\mathbb{R}}^{N} \) has more than \( N/2 \) derivatives in \( {L}^{2} \), then in fact it agrees almost everywhere with a continuous function. See [STE1], for instance, for the details.
Lemma 2.1.7. Let \( u \in {C}^{\infty }\left( \bar{\Omega }\right) \) be arbitrary. Let \( s \in \{ 0,1,2,\ldots \} \) . Then there is a \( v \in {C}^{\infty }\left( \bar{\Omega }\right) \) such that \( {Pv} = 0 \) and the functions \( u \) and \( v \) agree to order \( s \) on \( \partial \Omega \) .
Proof. After a partition of unity, it suffices to prove the assertion in a small neighborhood \( U \) of \( {z}_{0} \in \partial \Omega \) . After a rotation, we may suppose that \( \partial \rho /\partial {z}_{1} \neq 0 \) on \( U \cap \bar{\Omega } \), where \( \rho \) is a defining function for \( \Omega \) . Define the differential operator
Theorem 5.3. Let \( \mathrm{S} \) be an extension ring of \( \mathrm{R} \) and \( \mathrm{s} \in \mathrm{S} \) . Then the following conditions are equivalent.
(i) \( \mathrm{s} \) is integral over \( \mathrm{R} \) ;
(ii) \( \mathrm{R}\left\lbrack \mathrm{s}\right\rbrack \) is a finitely generated \( \mathrm{R} \) -module;
(iii) there is a subring \( \mathrm{T} \) of \( \mathrm{S} \) containing \( {1}_{\mathrm{S}} \) and \( \mathrm{R}\left\lbrack \mathrm{s}\right\rbrack \) which is finitely generated as an R-module;
(iv) there is an \( \mathrm{R}\left\lbrack \mathrm{s}\right\rbrack \) -submodule \( \mathrm{B} \) of \( \mathrm{S} \) which is finitely generated as an \( \mathrm{R} \) -module and whose annihilator in \( \mathbf{R}\left\lbrack \mathrm{s}\right\rbrack \) is zero.
SKETCH OF PROOF. (i) \( \Rightarrow \) (ii) Suppose \( s \) is a root of the monic polynomial \( f \in R\left\lbrack x\right\rbrack \) of degree \( n \) . We claim that \( {1}_{R} = {s}^{0}, s,{s}^{2},\ldots ,{s}^{n - 1} \) generate \( R\left\lbrack s\right\rbrack \) as an \( R \) -module. As observed above, every element of \( R\left\lbrack s\right\rbrack \) is of the form \( g\left( s\right) \) for some \( g \in R\left\lbrack x\right\rbrack \) . By the Division Algorithm III.6.2, \( g\left( x\right) = f\left( x\right) q\left( x\right) + r\left( x\right) \) with \( \deg r < \deg f \) . Therefore in \( S \), \( g\left( s\right) = f\left( s\right) q\left( s\right) + r\left( s\right) = 0 + r\left( s\right) = r\left( s\right) \) . Hence \( g\left( s\right) \) is an \( R \) -linear combination of \( {1}_{R}, s,{s}^{2},\ldots ,{s}^{m} \) with \( m = \deg r < \deg f = n \) .
(ii) \( \Rightarrow \) (iii) Let \( T = R\left\lbrack s\right\rbrack \) .
(iii) \( \Rightarrow \) (iv) Let \( B \) be the subring \( T \) . Since \( R \subset R\left\lbrack s\right\rbrack \subset T \), \( B \) is an \( R\left\lbrack s\right\rbrack \) -module that is finitely generated as an \( R \) -module by (iii). Since \( {1}_{S} \in B \), \( {uB} = 0 \) for any \( u \in S \) implies \( u = u{1}_{S} = 0 \) ; that is, the annihilator of \( B \) in \( R\left\lbrack s\right\rbrack \) is 0 .
(iv) \( \Rightarrow \) (i) Let \( B \) be generated over \( R \) by \( {b}_{1},\ldots ,{b}_{n} \) . Since \( B \) is an \( R\left\lbrack s\right\rbrack \) -module \( s{b}_{i} \in B \) for each \( i \) . Therefore there exist \( {r}_{ij} \in R \) such that
\[
s{b}_{1} = {r}_{11}{b}_{1} + {r}_{12}{b}_{2} + \cdots + {r}_{1n}{b}_{n}
\]
\[
s{b}_{2} = {r}_{21}{b}_{1} + {r}_{22}{b}_{2} + \cdots + {r}_{2n}{b}_{n}
\]
\[
\vdots
\]
\[
s{b}_{n} = {r}_{n1}{b}_{1} + {r}_{n2}{b}_{2} + \cdots + {r}_{nn}{b}_{n}.
\]
Consequently,
\[
\left( {{r}_{11} - s}\right) {b}_{1} + {r}_{12}{b}_{2} + \cdots + {r}_{1n}{b}_{n} = 0
\]
\[
{r}_{21}{b}_{1} + \left( {{r}_{22} - s}\right) {b}_{2} + \cdots + {r}_{2n}{b}_{n} = 0
\]
\[
\vdots
\]
\[
{r}_{n1}{b}_{1} + {r}_{n2}{b}_{2} + \cdots + \left( {{r}_{nn} - s}\right) {b}_{n} = 0.
\]
Let \( M \) be the \( n \times n \) matrix \( \left( {r}_{ij}\right) \) and let \( d \in R\left\lbrack s\right\rbrack \) be the determinant of the matrix \( M - s{I}_{n} \) . Then \( d{b}_{i} = 0 \) for all \( i \) by Exercise VII.3.8. Since \( B \) is generated by the \( {b}_{i} \), \( {dB} = 0 \) . Since the annihilator of \( B \) in \( R\left\lbrack s\right\rbrack \) is zero by (iv), we must have \( d = 0 \) . If \( f \) is the polynomial \( \left| {M - x{I}_{n}}\right| \) in \( R\left\lbrack x\right\rbrack \), then one of \( f, - f \) is monic and
\[
\pm f\left( s\right) = \pm \left| {M - s{I}_{n}}\right| = \pm d = 0.
\]
Therefore \( s \) is integral over \( R \) .
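The determinant trick in (iv) \( \Rightarrow \) (i) can be watched in a concrete case. A minimal numerical sketch (the choices \( R = \mathbf{Z} \), \( s = \sqrt{2} \), and generators \( b_1 = 1 \), \( b_2 = \sqrt{2} \) of \( B = \mathbf{Z}\lbrack\sqrt{2}\rbrack \) are illustrative assumptions, not from the text):

```python
import math

# Illustrative assumption: R = Z, S = R (the reals), s = sqrt(2),
# and B = Z[sqrt(2)] with R-module generators b1 = 1, b2 = sqrt(2).
s = math.sqrt(2.0)
b = [1.0, s]

# Multiplication by s acts on the generators with integer coefficients r_ij:
#   s*b1 = 0*b1 + 1*b2,   s*b2 = 2*b1 + 0*b2
M = [[0, 1],
     [2, 0]]

# The linear relations s*b_i = sum_j r_ij * b_j hold:
for i in range(2):
    assert abs(s * b[i] - sum(M[i][j] * b[j] for j in range(2))) < 1e-12

# d = det(M - s*I) annihilates B, so d = 0 (here: numerically zero):
d = (M[0][0] - s) * (M[1][1] - s) - M[0][1] * M[1][0]
assert abs(d) < 1e-12

# The monic polynomial f(x) = det(M - x*I) = x^2 - 2 witnesses that s is
# integral over Z:
def f(x):
    return x * x - 2

assert abs(f(s)) < 1e-12
```

The sign convention does not matter here, exactly as in the proof: one of \( f, -f \) is monic, and either kills \( s \).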
Corollary 5.4. If \( \mathrm{S} \) is a ring extension of \( \mathrm{R} \) and \( \mathrm{S} \) is finitely generated as an \( \mathrm{R} \) -module, then \( \mathrm{S} \) is an integral extension of \( \mathrm{R} \) .
PROOF. For any \( s \in S \) let \( S = T \) in part (iii) of Theorem 5.3. Then \( s \) is integral over \( R \) by the implication (iii) \( \Rightarrow \) (i) of Theorem 5.3.
The proofs of the next propositions depend on the following fact. If \( R \subset S \subset T \) are rings (with \( {1}_{T} \in R \) ) such that \( T \) is a finitely generated \( S \) -module and \( S \) is a finitely generated \( R \) -module, then \( T \) is a finitely generated \( R \) -module. The second paragraph of the proof of Theorem IV.2.16 contains a proof of this fact, mutatis mutandis.
Theorem 5.5. If \( \mathrm{S} \) is an extension ring of \( \mathrm{R} \) and \( {\mathrm{s}}_{1},\ldots ,{\mathrm{s}}_{\mathrm{t}} \in \mathrm{S} \) are integral over \( \mathrm{R} \) , then \( \mathrm{R}\left\lbrack {{s}_{1},\ldots ,{\mathrm{s}}_{\mathrm{t}}}\right\rbrack \) is a finitely generated \( \mathrm{R} \) -module and an integral extension ring of \( \mathrm{R} \) .
PROOF. We have a tower of extension rings:
\[
R \subset R\left\lbrack {s}_{1}\right\rbrack \subset R\left\lbrack {{s}_{1},{s}_{2}}\right\rbrack \subset \cdots \subset R\left\lbrack {{s}_{1},\ldots ,{s}_{t}}\right\rbrack .
\]
For each \( i \), \( {s}_{i} \) is integral over \( R \) and hence integral over \( R\left\lbrack {{s}_{1},\ldots ,{s}_{i - 1}}\right\rbrack \) . Since \( R\left\lbrack {{s}_{1},\ldots ,{s}_{i}}\right\rbrack = R\left\lbrack {{s}_{1},\ldots ,{s}_{i - 1}}\right\rbrack \left\lbrack {s}_{i}\right\rbrack \), \( R\left\lbrack {{s}_{1},\ldots ,{s}_{i}}\right\rbrack \) is a finitely generated module over \( R\left\lbrack {{s}_{1},\ldots ,{s}_{i - 1}}\right\rbrack \) by Theorem 5.3 (i),(ii). Repeated application of the remarks preceding the theorem shows that \( R\left\lbrack {{s}_{1},\ldots ,{s}_{t}}\right\rbrack \) is a finitely generated \( R \) -module. Therefore, \( R\left\lbrack {{s}_{1},\ldots ,{s}_{t}}\right\rbrack \) is an integral extension ring of \( R \) by Corollary 5.4.
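A consequence of Theorem 5.5 is that sums and products of integral elements are again integral. A numerical sketch (the specific elements \( s_1 = \sqrt{2} \), \( s_2 = \sqrt{3} \) are an illustrative assumption, not from the text): their sum satisfies a monic quartic over \( \mathbf{Z} \), found by eliminating radicals.

```python
import math

# Illustrative assumption: s1 = sqrt(2) and s2 = sqrt(3) are each integral
# over Z (roots of x^2 - 2 and x^2 - 3).  By Theorem 5.5, Z[s1, s2] is a
# finitely generated Z-module, so t = s1 + s2 is integral over Z as well.
s1, s2 = math.sqrt(2.0), math.sqrt(3.0)
t = s1 + s2

# Eliminate radicals: t^2 = 5 + 2*sqrt(6), so (t^2 - 5)^2 = 24,
# i.e. t satisfies the monic integer polynomial x^4 - 10*x^2 + 1.
def f(x):
    return x**4 - 10 * x**2 + 1

assert abs(f(t)) < 1e-9

# Z[sqrt(2), sqrt(3)] is spanned over Z by {1, sqrt(2), sqrt(3), sqrt(6)};
# e.g. t^3 = 11*sqrt(2) + 9*sqrt(3), checked numerically:
assert abs(t**3 - (11 * s1 + 9 * s2)) < 1e-9
```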
Theorem 5.6. If \( \mathrm{T} \) is an integral extension ring of \( \mathrm{S} \) and \( \mathrm{S} \) is an integral extension ring of \( \mathrm{R} \), then \( \mathrm{T} \) is an integral extension ring of \( \mathrm{R} \) .
PROOF. \( T \) is obviously an extension ring of \( R \) . If \( t \in T \), then \( t \) is integral over \( S \) and therefore the root of some monic polynomial \( f \in S\left\lbrack x\right\rbrack \), say \( f = \mathop{\sum }\limits_{{i = 0}}^{n}{s}_{i}{x}^{i} \) . Since \( f \) is also a polynomial over the ring \( R\left\lbrack {{s}_{0},{s}_{1},\ldots ,{s}_{n - 1}}\right\rbrack \), \( t \) is integral over \( R\left\lbrack {{s}_{0},\ldots ,{s}_{n - 1}}\right\rbrack \) . By Theorem 5.3, \( R\left\lbrack {{s}_{0},\ldots ,{s}_{n - 1}}\right\rbrack \left\lbrack t\right\rbrack \) is a finitely generated \( R\left\lbrack {{s}_{0},\ldots ,{s}_{n - 1}}\right\rbrack \) -module. But since \( S \) is integral over \( R \), \( R\left\lbrack {{s}_{0},\ldots ,{s}_{n - 1}}\right\rbrack \) is a finitely generated \( R \) -module by Theorem 5.5. The remarks preceding Theorem 5.5 show that
\[
R\left\lbrack {{s}_{0},\ldots ,{s}_{n - 1}}\right\rbrack \left\lbrack t\right\rbrack = R\left\lbrack {{s}_{0},\ldots ,{s}_{n - 1}, t}\right\rbrack
\]
is a finitely generated \( R \) -module. Since \( R\left\lbrack t\right\rbrack \subset R\left\lbrack {{s}_{0},\ldots ,{s}_{n - 1}, t}\right\rbrack, t \) is integral over \( R \) by Theorem 5.3(iii).
Theorem 5.7. Let \( \mathrm{S} \) be an extension ring of \( \mathrm{R} \) and let \( \widehat{\mathrm{R}} \) be the set of all elements of \( \mathrm{S} \) that are integral over \( \mathrm{R} \) . Then \( \widehat{\mathrm{R}} \) is an integral extension ring of \( \mathrm{R} \) which contains every subring of \( \mathrm{S} \) that is integral over \( \mathrm{R} \) .
PROOF. If \( s, t \in \widehat{R} \), then \( s, t \in R\left\lbrack {s, t}\right\rbrack \), whence \( t - s \in R\left\lbrack {s, t}\right\rbrack \) and \( ts \in R\left\lbrack {s, t}\right\rbrack \) . Since \( s \) and \( t \) are integral over \( R \), so is the ring \( R\left\lbrack {s, t}\right\rbrack \) (Theorem 5.5). Therefore \( t - s \in \widehat{R} \) and \( ts \in \widehat{R} \) . Consequently, \( \widehat{R} \) is a subring of \( S \) (see Theorem I.2.5). \( \widehat{R} \) contains \( R \) since every element of \( R \) is trivially integral over \( R \) . The definition of \( \widehat{R} \) ensures that \( \widehat{R} \) is integral over \( R \) and contains all subrings of \( S \) that are integral over \( R \) .
If \( S \) is an extension ring of \( R \), then the ring \( \widehat{R} \) of Theorem 5.7 is called the integral closure of \( R \) in \( S \) . If \( \widehat{R} = R \), then \( R \) is said to be integrally closed in \( \mathbf{S} \) .
REMARKS. (i) Since \( {1}_{R} \in R \subset \widehat{R} \), \( S \) is an extension ring of \( \widehat{R} \) . Theorems 5.6 and 5.7 imply that \( \widehat{R} \) is itself integrally closed in \( S \) . (ii) The concepts of integral closure and integrally closed rings are relative notions and refer to a given ring \( R \) and a particular extension ring \( S \) . Thus the phrase " \( R \) is integrally closed" is ambiguous unless an extension ring \( S \) is specified. There is one case, however, in which the ring \( S \) is understood without specific mention. An integral domain \( R \) is said to be integrally closed provided \( R \) is integrally closed in its quotient field (see p. 144).
EXAMPLE. The integral domain \( \mathbf{Z} \) is integrally closed (in the rational field \( \mathbf{Q} \) ; Exercise 8). However, \( \mathbf{Z} \) is not integrally closed in the field \( \mathbf{C} \) of complex numbers since \( i \in \mathbf{C} \) is integral over \( \mathbf{Z} \) .
EXAMPLE. More generally, every unique factorization domain is integrally closed (Exercise 8). In particular, the polynomial ring \( F\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \left( {F\text{a field}}\right) \) is integrally closed in its quotient field \( F\left( {{x}_{1},\ldots ,{x}_{n}}\right) \) .
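The first example can be checked mechanically: a rational root \( p/q \) (in lowest terms) of a monic integer polynomial must have \( q = 1 \), so every rational root is an integer dividing the constant term. A small sketch (the helper function below is hypothetical, written only to illustrate this):

```python
from fractions import Fraction

def rational_roots_monic(coeffs):
    """Rational roots of x^n + coeffs[0]*x^(n-1) + ... + coeffs[-1], with
    integer coefficients.  Monicity forces every rational root to be an
    integer dividing the constant term -- the reason Z is integrally closed
    in Q."""
    c0 = coeffs[-1]
    if c0 == 0:
        raise ValueError("constant term is 0: factor out x first")
    candidates = {d for d in range(1, abs(c0) + 1) if c0 % d == 0}
    candidates |= {-d for d in candidates}

    def value(x):                      # Horner evaluation over Q
        v = Fraction(1)
        for c in coeffs:
            v = v * x + c
        return v

    return sorted(x for x in candidates if value(Fraction(x)) == 0)

# x^2 - 2 is monic with no rational root, so sqrt(2) is outside Q; yet
# sqrt(2) is integral over Z, so Z is not integrally closed in R or C.
assert rational_roots_monic([0, -2]) == []
# x^2 - x - 6 = (x - 3)(x + 2): every rational root found is an integer.
assert rational_roots_monic([-1, -6]) == [-2, 3]
```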
The follo
|
Theorem 5.3. Let \( \mathrm{S} \) be an extension ring of \( \mathrm{R} \) and \( \mathrm{s} \in \mathrm{S} \) . Then the following conditions are equivalent.
(i) \( \mathrm{s} \) is integral over \( \mathrm{R} \) ;
(ii) \( \mathrm{R}\left\lbrack \mathrm{s}\right\rbrack \) is a finitely generated \( \mathrm{R} \) -module;
(iii) there is a subring \( \mathrm{T} \) of \( \mathrm{S} \) containing \( {1}_{\mathrm{S}} \) and \( \mathrm{R}\left\lbrack \mathrm{s}\right\rbrack \) which is finitely generated as an R-module;
(iv) there is an \( \mathrm{R}\left\lbrack \mathrm{s}\right\rbrack \) -submodule \( \mathrm{B} \) of \( \mathrm{S} \) which is finitely generated as an \( \mathrm{R} \) -module and whose annihilator in \( \mathbf{R}\left\lbrack \mathrm{s}\right\rbrack \) is zero.
|
(i) \( \Rightarrow \) (ii) Suppose \( s \) is a root of the monic polynomial \( f \in R\left\lbrack x\right\rbrack \) of degree \( n \) . We claim that \( {1}_{R} = {s}^{0}, s,{s}^{2},\ldots ,{s}^{n - 1} \) generate \( R\left\lbrack s\right\rbrack \) as an \( R \) -module. As observed above, every element of \( R\left\lbrack s\right\rbrack \) is of the form \( g\left( s\right) \) for some \( g \in R\left\lbrack x\right\rbrack \) . By the Division Algorithm III.6.2, \( g\left( x\right) = f\left( x\right) q\left( x\right) + r\left( x\right) \) with \( \deg r < \deg f \) . Therefore in \( S \), \( g\left( s\right) = f\left( s\right) q\left( s\right) + r\left( s\right) = 0 + r\left( s\right) = r\left( s\right) \) . Hence \( g\left( s\right) \) is an \( R \) -linear combination of \( {1}_{R}, s,{s}^{2},\ldots ,{s}^{m} \) with \( m = \deg r < \deg f = n \) .
(ii) \( \Rightarrow \) (iii) Let \( T = R[s] \). Since every element in T can be written as a polynomial in s with coefficients in R, and since T contains both 1 and R[s], it follows that T satisfies the conditions of (iii).
(iii) \(\Rightarrow\) (iv): Let B be the subring T. Since R \(\subset\) R[s] \(\subset\) T, B is an R[s]-module that is finitely generated as an R-module by (iii). Since 1\(_{S}\) \(\in\) B, uB = 0 for any u \(\in\) S implies u = u1\(_{S}\) = 0; that is, the annihilator of B in R[s] is 0.
(iv)\(\Rightarrow\) (i): Let B be generated over R by b\(_{1}\),...,b\(_{n}\). Since B is an R[s]-module, sb\(_{i}\) \(\in\) B for each i. Therefore there exist r\(_{ij}\) \(\in\) R such that:
\[ s{b}_{1} = {r}_{11}{b}_{1} + {r}_{12}{b}_{2} + \cdots + {r}_{1n}{b}_{n} \]
\[ s{b}_{2} = {r}_{21}{b}_{1} + {r}_{22}{b}_{2} + \cdots + {r}_{2n}{b}_{n} \]
\[ \vdots \]
\[ s{b}_{n} = {r}_{n1}{b}_{1} + {r}_{n2}{b}_{2} + \cdots + {r}_{nn}{b}_{n}. \]
Consequently, we have:
\[ \left( {{r}_{11} - s}\right) {b}_{1} + {r}_{12}{b}_{2} + \cdots + {r}_{1n}{b}_{n} = 0 \]
\[ {r}_{21}{b}_{1} + \left( {{r}_{22} - s}\right) {b}_{2} + \cdots + {r}_{2n}{b}_{n} = 0 \]
\[ \vdots \]
\[ {r}_{n1}{b}_{1} + {r}_{n2}{b}_{2} + \cdots + \left( {{r}_{nn} - s}\right) {b}_{n} = 0. \]
|
Lemma 11.3.11. For all \( k,{\dim }_{K}{\widetilde{H}}_{k}\left( {\Delta ;K}\right) \leq {\dim }_{K}{\widetilde{H}}_{k}\left( {\Gamma ;K}\right) \) .
Proof. By considering an extension field of \( K \) if necessary, we may assume that \( K \) is infinite. Let \( {\Delta }^{e} \) denote the exterior algebraic shifted complex of \( \Delta \) . By Proposition 11.4.7 we have \( {\widetilde{H}}_{k}\left( {\Delta ;K}\right) \cong {\widetilde{H}}_{k}\left( {{\Delta }^{e};K}\right) \) . Thus we need to show that \( {\dim }_{K}{\widetilde{H}}_{k}\left( {{\Delta }^{e};K}\right) \leq {\dim }_{K}{\widetilde{H}}_{k}\left( {{\Gamma }^{e};K}\right) \) for all \( k \) . By using (11.6) one has \( {\beta }_{in}\left( {I}_{\Delta }\right) = {\dim }_{K}{\widetilde{H}}_{n - i - 2}\left( {\Delta ;K}\right) \) . Hence it remains to show that \( {\beta }_{in}\left( {I}_{{\Delta }^{e}}\right) \leq \) \( {\beta }_{in}\left( {I}_{{\Gamma }^{e}}\right) \) for all \( i \) . Inequality (11.5) says that \( {m}_{ \leq i}\left( {{J}_{{\Delta }^{e}}, j}\right) \geq {m}_{ \leq i}\left( {{J}_{{\Gamma }^{e}}, j}\right) \) for all \( i \) and \( j \) . It then follows from Corollary 11.3.9 that \( {\beta }_{{ii} + j}\left( {I}_{{\Delta }^{e}}\right) \leq {\beta }_{{ii} + j}\left( {I}_{{\Gamma }^{e}}\right) \) for all \( i \) and \( j \) . Thus in particular \( {\beta }_{\text{in }}\left( {I}_{{\Delta }^{e}}\right) \leq {\beta }_{\text{in }}\left( {I}_{{\Gamma }^{e}}\right) \) for all \( i \) .
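The quantities compared in the lemma can be computed directly from boundary matrices: \( {\dim }_{K}{\widetilde{H}}_{k} = \dim \ker {\partial }_{k} - \operatorname{rank}{\partial }_{k + 1} \) in the augmented chain complex. A toy sketch over \( K = \mathbb{Q} \) (the small complexes below are illustrative assumptions, not the \( \Delta \), \( \Gamma \) of the text):

```python
from fractions import Fraction
from itertools import combinations

def rank(mat):
    """Rank of a rational matrix via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def reduced_betti(facets, k):
    """dim_Q of the k-th reduced homology of the complex generated by facets."""
    faces = {frozenset(s) for F in facets
             for r in range(len(F) + 1) for s in combinations(F, r)}
    def chains(d):              # d-faces as sorted tuples; includes the empty face
        return sorted(tuple(sorted(f)) for f in faces if len(f) == d + 1)
    def bmat(d):                # matrix of the boundary map C_d -> C_{d-1}
        rows, cols = chains(d - 1), chains(d)
        return [[(-1) ** c.index(next(iter(set(c) - set(r))))
                 if set(r) < set(c) else 0
                 for c in cols] for r in rows]
    # dim ker(boundary_k) - rank(boundary_{k+1}) by rank-nullity:
    return len(chains(k)) - rank(bmat(k)) - rank(bmat(k + 1))

# Hollow triangle (boundary of a 2-simplex): one 1-dimensional hole.
assert reduced_betti([(1, 2), (1, 3), (2, 3)], 0) == 0
assert reduced_betti([(1, 2), (1, 3), (2, 3)], 1) == 1
# Filled triangle: contractible, so its reduced homology vanishes.
assert reduced_betti([(1, 2, 3)], 1) == 0
```

Including the empty face makes the complex augmented, so the dimension returned for \( k = 0 \) is the reduced one (number of components minus one).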
Let \( W \subset \left\lbrack n\right\rbrack \smallsetminus \{ i, j\} \), and let
\[
{\Delta }_{1} = {\Delta }_{W\cup \{ i\} },\;{\Delta }_{2} = {\Delta }_{W\cup \{ j\} },\;{\Gamma }_{1} = {\Gamma }_{W\cup \{ i\} }\;\text{ and }\;{\Gamma }_{2} = {\Gamma }_{W\cup \{ j\} }.
\]
Then
\[
{\Delta }_{1} \cap {\Delta }_{2} = {\Gamma }_{1} \cap {\Gamma }_{2} = {\Delta }_{W} = {\Gamma }_{W},\;\text{ and }\;{\Gamma }_{1} \cup {\Gamma }_{2} = {\operatorname{Shift}}_{ij}\left( {{\Delta }_{1} \cup {\Delta }_{2}}\right) .
\]
(11.7)
The reduced Mayer-Vietoris exact sequence of \( {\Delta }_{1} \) and \( {\Delta }_{2} \) and that of \( {\Gamma }_{1} \) and \( {\Gamma }_{2} \) (see Proposition 5.1.8) is given by
\[
\cdots \rightarrow \;{\widetilde{H}}_{k}\left( {{\Delta }_{W};K}\right) \;\xrightarrow[]{{\partial }_{1, k}}{\widetilde{H}}_{k}\left( {{\Delta }_{1};K}\right) \oplus {\widetilde{H}}_{k}\left( {{\Delta }_{2};K}\right)
\]
\[
\overset{{\partial }_{2, k}}{ \rightarrow }{\widetilde{H}}_{k}\left( {{\Delta }_{1} \cup {\Delta }_{2};K}\right) \overset{{\partial }_{3, k}}{ \rightarrow }\;{\widetilde{H}}_{k - 1}\left( {{\Delta }_{W};K}\right) \;\overset{{\partial }_{1, k - 1}}{ \rightarrow }\cdots .
\]
and
\[
\cdots \rightarrow \;{\widetilde{H}}_{k}\left( {{\Gamma }_{W};K}\right) \;\xrightarrow[]{{\partial }_{1, k}^{\prime }}{\widetilde{H}}_{k}\left( {{\Gamma }_{1};K}\right) \oplus {\widetilde{H}}_{k}\left( {{\Gamma }_{2};K}\right)
\]
\[
\overset{{\partial }_{2, k}^{\prime }}{ \rightarrow }{\widetilde{H}}_{k}\left( {{\Gamma }_{1} \cup {\Gamma }_{2};K}\right) \overset{{\partial }_{3, k}^{\prime }}{ \rightarrow }\;{\widetilde{H}}_{k - 1}\left( {{\Gamma }_{W};K}\right) \;\overset{{\partial }_{1, k - 1}^{\prime }}{ \rightarrow }\cdots .
\]
Since \( {\Delta }_{W} = {\Gamma }_{W} \) we can compare \( \operatorname{Ker}\left( {\partial }_{1, k}^{\prime }\right) \) and \( \operatorname{Ker}\left( {\partial }_{1, k}\right) \) .
Lemma 11.3.12. Suppose that \( j = i + 1 \) . Then one has
\[
\operatorname{Ker}\left( {\partial }_{1, k}^{\prime }\right) \subset \operatorname{Ker}\left( {\partial }_{1, k}\right)
\]
for all \( k \) .
Proof. Let \( \left\lbrack a\right\rbrack \in \operatorname{Ker}\left( {\partial }_{1, k}^{\prime }\right) \), where \( a \in {\widetilde{\mathcal{C}}}_{k}\left( {\Gamma }_{W}\right) \) . Since \( \left( {\left\lbrack a\right\rbrack ,\left\lbrack a\right\rbrack }\right) \in {\widetilde{H}}_{k}\left( {{\Gamma }_{1};K}\right) \oplus \) \( {\widetilde{H}}_{k}\left( {{\Gamma }_{2};K}\right) \) vanishes (in particular, \( \left\lbrack a\right\rbrack \in {\widetilde{H}}_{k}\left( {{\Gamma }_{1};K}\right) \) vanishes), there exists \( u \in {\widetilde{\mathcal{C}}}_{k + 1}\left( {\Gamma }_{1}\right) \) with \( \partial \left( u\right) = a \) . Say,
\[
u = \mathop{\sum }\limits_{{\left| F\right| = k + 1, i \notin F, F\cup \{ i\} \in {\Gamma }_{1}}}{a}_{F\cup \{ i\} }{\mathbf{e}}_{F\cup \{ i\} } + \mathop{\sum }\limits_{{\left| G\right| = k + 2, G \in {\Delta }_{W}}}{b}_{G}{\mathbf{e}}_{G},
\]
(11.8)
where \( {a}_{F\cup \{ i\} },{b}_{G} \in K \) .
Let \( F \subset W \) with \( F \cup \{ i\} \in {\Gamma }_{1} \) . By the definition of \( {\operatorname{Shift}}_{ij} \) it follows immediately that \( F \cup \{ i\} \in {\Delta }_{1} \) and \( F \cup \{ j\} \in {\Delta }_{2} \) . Thus \( F \cup \{ j\} \in {\Gamma }_{2} \) . In particular, \( u \in {\widetilde{\mathcal{C}}}_{k + 1}\left( {\Delta }_{1}\right) \) with \( \partial \left( u\right) = a \) . Hence \( \left\lbrack a\right\rbrack \in {\widetilde{H}}_{k}\left( {{\Delta }_{1};K}\right) \) vanishes.
Since \( a \in {\widetilde{\mathcal{C}}}_{k}\left( {\Gamma }_{W}\right) \) is a linear combination of those basis elements \( {\mathbf{e}}_{F} \) with \( F \in \Gamma, F \subset W \) and \( \left| F\right| = k + 1 \) and since \( j = i + 1 \), it follows that \( \partial \left( v\right) = a \) , where \( v \in {\widetilde{\mathcal{C}}}_{k + 1}\left( {\Delta }_{2}\right) \) is the element
\[
v = \mathop{\sum }\limits_{{\left| F\right| = k + 1, i \notin F, F\cup \{ i\} \in {\Gamma }_{1}}}{a}_{F\cup \{ i\} }{\mathbf{e}}_{F\cup \{ j\} } + \mathop{\sum }\limits_{{\left| G\right| = k + 2, G \in {\Delta }_{W}}}{b}_{G}{\mathbf{e}}_{G}.
\]
Thus \( \left\lbrack a\right\rbrack \in {\widetilde{H}}_{k}\left( {{\Delta }_{2};K}\right) \) vanishes.
These calculations now show that \( \left( {\left\lbrack a\right\rbrack ,\left\lbrack a\right\rbrack }\right) \in {\widetilde{H}}_{k}\left( {{\Delta }_{1};K}\right) \bigoplus {\widetilde{H}}_{k}\left( {{\Delta }_{2};K}\right) \) vanishes, as required.
Suppose again that \( j = i + 1 \) . It then follows that
\[
{\dim }_{K}\left( {\operatorname{Ker}\left( {\partial }_{1, k}\right) }\right) \geq {\dim }_{K}\left( {\operatorname{Ker}\left( {\partial }_{1, k}^{\prime }\right) }\right)
\]
\[
{\dim }_{K}\left( {\operatorname{Im}\left( {\partial }_{1, k}\right) }\right) \leq {\dim }_{K}\left( {\operatorname{Im}\left( {\partial }_{1, k}^{\prime }\right) }\right)
\]
\[
{\dim }_{K}\left( {\operatorname{Ker}\left( {\partial }_{2, k}\right) }\right) \leq {\dim }_{K}\left( {\operatorname{Ker}\left( {\partial }_{2, k}^{\prime }\right) }\right)
\]
(11.9)
On the other hand,
\[
{\dim }_{K}\left( {{\widetilde{H}}_{k}\left( {{\Delta }_{1} \cup {\Delta }_{2};K}\right) }\right) = {\dim }_{K}\left( {\operatorname{Ker}\left( {\partial }_{3, k}\right) }\right) + {\dim }_{K}\left( {\operatorname{Im}\left( {\partial }_{3, k}\right) }\right)
\]
(11.10)
\[
{\dim }_{K}\left( {{\widetilde{H}}_{k}\left( {{\Gamma }_{1} \cup {\Gamma }_{2};K}\right) }\right) = {\dim }_{K}\left( {\operatorname{Ker}\left( {\partial }_{3, k}^{\prime }\right) }\right) + {\dim }_{K}\left( {\operatorname{Im}\left( {\partial }_{3, k}^{\prime }\right) }\right) .
\]
(11.11)
Lemma 11.3.11 together with (11.7) guarantees that
\[
{\dim }_{K}\left( {{\widetilde{H}}_{k}\left( {{\Delta }_{1} \cup {\Delta }_{2};K}\right) }\right) \leq {\dim }_{K}\left( {{\widetilde{H}}_{k}\left( {{\Gamma }_{1} \cup {\Gamma }_{2};K}\right) }\right) .
\]
(11.12)
Since \( \operatorname{Im}\left( {\partial }_{3, k}\right) = \operatorname{Ker}\left( {\partial }_{1, k - 1}\right) \) and \( \operatorname{Im}\left( {\partial }_{3, k}^{\prime }\right) = \operatorname{Ker}\left( {\partial }_{1, k - 1}^{\prime }\right) \), Lemma 11.3.12
yields
\[
{\dim }_{K}\left( {\operatorname{Im}\left( {\partial }_{3, k}\right) }\right) \geq {\dim }_{K}\left( {\operatorname{Im}\left( {\partial }_{3, k}^{\prime }\right) }\right)
\]
(11.13)
Since \( \operatorname{Im}\left( {\partial }_{2, k}\right) = \operatorname{Ker}\left( {\partial }_{3, k}\right) \) and \( \operatorname{Im}\left( {\partial }_{2, k}^{\prime }\right) = \operatorname{Ker}\left( {\partial }_{3, k}^{\prime }\right) \), it follows from formula (11.10) and (11.11) together with (11.12) and (11.13) that
\[
{\dim }_{K}\left( {\operatorname{Im}\left( {\partial }_{2, k}\right) }\right) \leq {\dim }_{K}\left( {\operatorname{Im}\left( {\partial }_{2, k}^{\prime }\right) }\right)
\]
(11.14)
Finally, it follows from the reduced Mayer-Vietoris exact sequence of \( {\Delta }_{1} \) and \( {\Delta }_{2} \) and that of \( {\Gamma }_{1} \) and \( {\Gamma }_{2} \) together with (11.9) and (11.10) that
\[
{\dim }_{K}\left( {{\widetilde{H}}_{k}\left( {{\Delta }_{1};K}\right) \oplus {\widetilde{H}}_{k}\left( {{\Delta }_{2};K}\right) }\right) \leq {\dim }_{K}\left( {{\widetilde{H}}_{k}\left( {{\Gamma }_{1};K}\right) \oplus {\widetilde{H}}_{k}\left( {{\Gamma }_{2};K}\right) }\right) .
\]
(11.15)
Now we are ready to prove the crucial
Lemma 11.3.13. Fix \( 1 \leq p < q \leq n \) . Let \( \Delta \) be a simplicial complex on \( \left\lbrack n\right\rbrack \) and \( \Gamma = {\operatorname{Shift}}_{pq}\left( \Delta \right) \) . Then
\[
{\beta }_{{ii} + j}\left( {I}_{\Delta }\right) \leq {\beta }_{{ii} + j}\left( {I}_{\Gamma }\right)
\]
for all \( i \) and \( j \) .
Proof. Let \( \pi \) be a permutation on \( \left\lbrack n\right\rbrack \) with \( \pi \left( p\right) < \pi \left( q\right) \) . Then \( \pi \) naturally induces an automorphism of \( S = K\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) by setting \( {x}_{i} \mapsto {x}_{\pi \left( i\right) } \) . Write \( \pi \left( \Delta \right) \) for the simplicial complex \( \{ \pi \left( F\right) : F \in \Delta \} \) on \( \left\lbrack n\right\rbrack \) . Then
\[
\pi \left( {I}_{{\text{Shift }}_{pq}\left( \Delta \right) }\right
|
Lemma 11.3.11. For all \( k,{\dim }_{K}{\widetilde{H}}_{k}\left( {\Delta ;K}\right) \leq {\dim }_{K}{\widetilde{H}}_{k}\left( {\Gamma ;K}\right) \) .
|
By considering an extension field of \( K \) if necessary, we may assume that \( K \) is infinite. Let \( {\Delta }^{e} \) denote the exterior algebraic shifted complex of \( \Delta \) . By Proposition 11.4.7 we have \( {\widetilde{H}}_{k}\left( {\Delta ;K}\right) \cong {\widetilde{H}}_{k}\left( {{\Delta }^{e};K}\right) \) . Thus we need to show that \( {\dim }_{K}{\widetilde{H}}_{k}\left( {{\Delta }^{e};K}\right) \leq {\dim }_{K}{\widetilde{H}}_{k}\left( {{\Gamma }^{e};K}\right) \) for all \( k \) . By using (11.6) one has \( {\beta }_{in}\left( {I}_{\Delta }\right) = {\dim }_{K}{\widetilde{H}}_{n - i - 2}\left( {\Delta ;K}\right) \) . Hence it remains to show that \( {\beta }_{in}\left( {I}_{{\Delta }^{e}}\right) \leq \) \( {\beta }_{in}\left( {I}_{{\Gamma }^{e}}\right) \) for all \( i \) . Inequality (11.5) says that \( {m}_{ \leq i}\left( {{J}_{{\Delta }^{e}}, j}\right) \geq {m}_{ \leq i}\left( {{J}_{{\Gamma }^{e}}, j}\right) \) for all \( i \) and \( j \) . It then follows from Corollary 11.3.9 that \( {\beta }_{{ii} + j}\left( {I}_{{\Delta }^{e}}\right) \leq {\beta }_{{ii} + j}\left( {I}_{{\Gamma }^{e}}\right) \) for all \( i \) and \( j \) . Thus in particular \( {\beta }_{\text{in }}\left( {I}_{{\Delta }^{e}}\right) \leq {\beta }_{\text{in }}\left( {I}_{{\Gamma }^{e}}\right) \) for all \( i \).
|
Theorem 7. For spaces, connectivity is preserved by surjective mappings. That is, if \( \left\lbrack {X,\mathcal{O}}\right\rbrack \) is connected, and \( f : X \rightarrow Y \) is a surjective mapping, then \( \left\lbrack {Y,{\mathcal{O}}^{\prime }}\right\rbrack \) is connected.
Proof. Suppose not. Then \( Y = U \cup V \), where \( U \) and \( V \) are disjoint, open, and nonempty. Therefore \( X = {f}^{-1}\left( U\right) \cup {f}^{-1}\left( V\right) \), and the latter sets are disjoint, open, and nonempty, which is impossible.
Theorem 8. For sets, connectivity is preserved by surjective mappings.
Proof. By the preceding two theorems.
Theorem 9. Every closed interval in \( \mathbf{R} \) is connected.
Proof. This turns out to be the \( n \) th formulation of the continuity of \( \mathbf{R} \) . Suppose that \( \left\lbrack {a, b}\right\rbrack = H \cup K \) (separated), with \( a \in H \) . Let
\[
M = \{ x \mid x = a\text{ or }\left\lbrack {a, x}\right\rbrack \subset H\} .
\]
Then \( M \) is bounded above. Let \( c \) be the least upper bound of \( M \) . Then \( c \in \left\lbrack {a, b}\right\rbrack, c \) is a limit point of \( H, c \notin K \), and so \( c \in H \) . If \( c < b \), then \( c \) is a limit point of \( K \), which contradicts the hypothesis for \( H \) and \( K \) . Therefore \( c = b, H = \left\lbrack {a, b}\right\rbrack \), and \( K = \varnothing \) . Thus \( \left\lbrack {a, b}\right\rbrack \) is not the union of any two nonempty separated sets.
Theorem 10. If \( H \) and \( K \) are separated, then every connected subset \( M \) of \( H \cup K \) lies either in \( H \) or in \( K \) .
Proof. If not, \( M = \left( {M \cap H}\right) \cup \left( {M \cap K}\right) \), where the two sets on the right are separated and nonempty. (Evidently, if \( H \) and \( K \) are separated, and \( {H}^{\prime } \subset H \) and \( {K}^{\prime } \subset K \), then \( {H}^{\prime } \) and \( {K}^{\prime } \) are separated.)
Theorem 11. Every pathwise connected set is connected.
Proof. Suppose that \( M \) is pathwise connected but not connected, so that \( M = H \cup K \) (separated and nonempty). Take \( P \in H, Q \in K \) ; and let \( p \) be a path from \( P \) to \( Q \) in \( M \) . By Theorems 8 and 9, the image \( \left| p\right| = p\left( \left\lbrack {a, b}\right\rbrack \right) \) \( \subset M \) is connected. By Theorem \( {10},\left| p\right| \) lies either in \( H \) or in \( K \), which is false.
Theorem 12. Let \( K \) be a complex. Then the following conditions are equivalent:
(1) \( K \) is connected.
(2) \( \left| K\right| \) is pathwise connected.
(3) \( \left| K\right| \) is connected.
Proof. (1) \( \Rightarrow \) (2), by Theorem 4. (2) \( \Rightarrow \) (3), by Theorem 11. Suppose, finally, that (1) is false, so that \( K = {K}_{1} \cup {K}_{2} \), where \( {K}_{1} \) and \( {K}_{2} \) are disjoint nonempty complexes. From Condition K. 3 of the definition of a complex,
it follows that no point \( v \) of \( \left| K\right| \) is a limit point of the union of the simplexes of \( K \) that do not contain \( v \) . Therefore \( \left| {K}_{1}\right| \) and \( \left| {K}_{2}\right| \) are separated, and \( \left| K\right| \) is not connected. Thus \( \left( 3\right) \Rightarrow \left( 1\right) \) .
An \( {arc} \) is a 1-cell, that is, a set homeomorphic to a closed linear interval. A broken line is a polyhedral arc.
Theorem 13. In \( {\mathbf{R}}^{n} \), every connected open set \( U \) is broken-line-wise connected.
Proof. Let \( P \in U \), and let \( V \) be the union of \( \{ P\} \) and the set of all points of \( U \) that can be joined to \( P \) by broken lines lying in \( U \) . It is then easy to show that both \( V \) and \( U - V \) are open. If \( U - V \neq \varnothing \), then \( U \) is the union of two disjoint nonempty open sets, which is false.
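The open-and-closed argument in Theorem 13 has a discrete analogue that can be sketched in code (the grid model is an assumption for illustration, not the theorem's proof): approximate \( U \) by unit grid cells, and let breadth-first search play the role of the set of points joinable to \( P \) by axis-parallel broken lines.

```python
from collections import deque

def reachable_from(cells, start):
    """All cells joinable to `start` by axis-parallel steps through cells
    of the set -- a discrete stand-in for the set V in the proof."""
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) in cells and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return seen

# An L-shaped "open set": every cell is reachable, so V = U and U - V is empty.
U = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}
assert reachable_from(U, (0, 0)) == U

# Two separated cells: BFS finds only one piece, so U - V is nonempty and the
# set is disconnected.
W = {(0, 0), (5, 5)}
assert reachable_from(W, (0, 0)) == {(0, 0)}
```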
We now resume the discussion of connectivity in topological spaces.
Theorem 14. Let \( G \) be a collection of connected sets, with a point \( P \) in common. Then the union \( {G}^{ * } \) of the elements of \( G \) is connected.
Proof. Suppose that \( {G}^{ * } = H \cup K \) (separated and nonempty), with \( P \in H \) . Since each \( g \in G \) is connected, each \( g \) lies in \( H \) or in \( K \) . Therefore \( g \subset H \) , \( {G}^{ * } \subset H \), and \( K = \varnothing \), which contradicts the hypothesis for \( K \) .
Theorem 15. If \( M \) is connected, and \( M \subset L \subset \bar{M} \), then \( L \) is connected.
Proof. Suppose that \( L = H \cup K \) (separated and nonempty). Let \( {H}^{\prime } = M \) \( \cap H \) and \( {K}^{\prime } = M \cap K \), so that \( M = {H}^{\prime } \cup {K}^{\prime } \) . Then \( {H}^{\prime } \) and \( {K}^{\prime } \) are separated. Now \( H \) contains a point \( P \) of \( L \), and \( P \) is a point or a limit point of \( M \) . Therefore \( P \) is a point or a limit point either of \( {H}^{\prime } \) or of \( {K}^{\prime } \) . But \( P \) is neither a point nor a limit point of \( {K}^{\prime } \subset K \) . Therefore \( P \) is a point or a limit point of \( {H}^{\prime } \) . Therefore \( {H}^{\prime } \neq \varnothing \) . Similarly, \( {K}^{\prime } \neq \varnothing \) . Therefore \( M \) is not connected, which is false.
Let \( M \) be a set, and let \( P \in M \) . The component \( C\left( {M, P}\right) \) of \( M \) that contains \( P \) is the union of all connected subsets of \( M \) that contain \( P \) . (By Theorem 14, every set \( C\left( {M, P}\right) \) is connected.)
Theorem 16. Every two (different) components of the same set are disjoint.
Theorem 17. If \( M \subset N \), then every component of \( M \) lies in a component of \( N \) .
There is a gross difference between connectivity and pathwise connectivity. We have shown (Theorem 11) that the latter implies the former, but the converse is false. For example, let \( M \) be the graph of \( f\left( x\right) = \sin \left( {1/x}\right) \) \( \left( {0 < x \leq 1/\pi }\right) \), in \( {\mathbf{R}}^{2} \), together with the points \( \left( {0,1}\right) \) and \( \left( {0, - 1}\right) \) . It can be shown, with the aid of Theorems 9,14,8, and 15, that \( M \) is connected. But it can also be shown that there is no path in \( M \) from \( \left( {0,1}\right) \) (or \( \left( {0, - 1}\right) \) ) to any other point of \( M \) . There are worse examples. E.g., there is a compact connected set in \( {\mathbf{R}}^{2} \) in which all paths are constant. See B. Knaster [K] or the author [M]. From the viewpoint of pathwise connectivity, such a set is indistinguishable from a Cantor set.
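The connectivity of \( M \) rests on \( \left( {0,1}\right) \) and \( \left( {0, - 1}\right) \) being limit points of the graph of \( \sin \left( {1/x}\right) \); a quick numerical check (the particular sample points are an illustrative assumption):

```python
import math

# Points x_k = 1/(asin(y) + 2*pi*k) lie in (0, 1/pi], satisfy
# sin(1/x_k) = y, and shrink to 0 -- so (0, y) is a limit point of the graph.
y = 1.0  # the adjoined point (0, 1)
xs = [1.0 / (math.asin(y) + 2 * math.pi * k) for k in range(1, 6)]

for x in xs:
    assert 0 < x <= 1 / math.pi               # on the graph's domain
    assert abs(math.sin(1.0 / x) - y) < 1e-12  # on the graph itself

# The x-coordinates decrease toward 0, which (with Theorems 9, 14, 8, and 15)
# yields the connectivity of M:
assert xs[-1] < xs[0] and xs[-1] < 0.04
```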
Problem set 1
## Prove or disprove:
1. A closed set is connected if and only if it is not the union of any two disjoint nonempty closed sets.
2. An open set is connected if and only if it is not the union of any two disjoint nonempty open sets.
3. Every open interval \( \left( {a, b}\right) = \{ x \mid a < x < b\} \) in \( \mathbf{R} \) is connected. Similarly for half-open intervals \( (a, b\rbrack = \{ x \mid a < x \leq b\} \) .
4. Let \( f \) be a continuous function \( (a, b\rbrack \rightarrow \mathbf{R} \) . Then the graph of \( f \) is connected.
5. The set \( M \) described at the end of Section 1 is connected.
6. No nonconstant path in \( M \) contains the point \( \left( {0,1}\right) \) .
7. Let \( M \) be a pathwise connected set in \( {\mathbf{R}}^{2} \), let \( P \in M \), and suppose that \( M - P \) is connected. Then \( M - P \) is pathwise connected.
8. Let \( U \) be a connected open set in \( {\mathbf{R}}^{2} \) . Then \( \bar{U} \) is pathwise connected.
9. Let \( U \) be as in Problem 8. Then there is at least one point \( P \) of \( \mathrm{{Fr}}U \) such that \( U \cup \{ P\} \) is pathwise connected. In fact, the set of all such points \( P \) is dense in Fr \( U \) .
10. Let \( \left\{ {{P}_{1},{P}_{2},\ldots }\right\} \) be a countable set which is dense in the unit circle \( C \) in \( {\mathbf{R}}^{2} \) . For each \( i \), let the polar coordinates of \( {P}_{i} \) be \( \left( {1,{\theta }_{i}}\right) \) ; and let \( {I}_{i} \) be the linear interval from \( {P}_{i} \) to \( \left( {1/i,{\theta }_{i}}\right) \) . Let
\[
M = \{ \left( {0,0}\right) \} \cup \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{I}_{i}
\]
Then the components of \( M \) are \( \{ \left( {0,0}\right) \} \) and the sets \( {I}_{i} \) .
11. In a metric space \( \left\lbrack {X, d}\right\rbrack \), for every two separated sets \( H, K \) there is an \( \varepsilon > 0 \) such that if \( P \in H \) and \( Q \in K \), then \( d\left( {P, Q}\right) \geq \varepsilon \) .
12. Reconsider Problem 11, for the case in which \( H \) is compact.
13. In a metric space, every two separated sets lie in disjoint open sets. (Note that this is not a corollary of Theorem 5.)
14. In a metric space, let \( {M}_{1},{M}_{2},\ldots \) be a sequence of nonempty connected sets; and suppose that the sequence is nested, in the sense that \( {M}_{i + 1} \subset {M}_{i} \) for each \( i \) . Then \( \mathop{\bigcap }\limits_{{i = 1}}^{\infty }{M}_{i} \) is connected.
15. Let \( M \) be a compact set, in a metric space. Let \( P \) and \( Q \) be points of \( M \) . Suppose that \( M \) is not the union of any two disjoint closed sets \( H \) and \( K \) , containing \( P \) and \( Q \) respectively. Then \( M \) contains a compact connected set which contains \( P \) and \( Q \) .
16. In a metric space, let \( P \) and \( Q \) be points, and let \( {M}_{1},{M}_{2},\ldots \) be a nested sequence of compact sets, such that (1) \( P, Q \in {M}_{i} \) for each \( i \), and (2) no set \( {M}_{i} \) is the union of two disjoint closed sets \( H \) and \( K \), containing \( P \) and \( Q \) respectively. Then \( \cap {M}_{i} \) has Properties (1) and (2).
17. Let \( K \) be a complex such that \( \left| K\right| \) is an \( n \) -manifold. Then \( K \) is called a triangulation of \( \left| K\right| \), and \( \left| K\right| \) is said to be triangulated.
Theorem 3.1. A vector \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1}}\right) \) satisfies the system (3.2) if and only if there exists \( {\bar{x}}_{n} \) such that \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1},{\bar{x}}_{n}}\right) \) satisfies \( {Ax} \leq b \) .
Proof. We have already remarked the "if" direction. For the converse, assume there is a vector \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1}}\right) \) satisfying (3.2). Note that the first set of inequalities in (3.2) can be rewritten as
\[
\mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{kj}^{\prime }{x}_{j} - {b}_{k}^{\prime } \leq {b}_{i}^{\prime } - \mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{ij}^{\prime }{x}_{j},\;i \in {I}^{ + }, k \in {I}^{ - }.
\]
(3.3)
Let \( l : = \mathop{\max }\limits_{{k \in {I}^{ - }}}\{ \mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{kj}^{\prime }{\bar{x}}_{j} - {b}_{k}^{\prime }\} \) and \( u : = \mathop{\min }\limits_{{i \in {I}^{ + }}}\{ {b}_{i}^{\prime } - \mathop{\sum }\limits_{{j = 1}}^{{n - 1}}{a}_{ij}^{\prime }{\bar{x}}_{j}\} , \) where we define \( l \mathrel{\text{:=}} - \infty \) if \( {I}^{ - } = \varnothing \) and \( u \mathrel{\text{:=}} + \infty \) if \( {I}^{ + } = \varnothing \) . Since \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n - 1}}\right) \) satisfies (3.3), we have that \( l \leq u \) . Therefore, for any \( {\bar{x}}_{n} \) such that \( l \leq {\bar{x}}_{n} \leq u \), the vector \( \left( {{\bar{x}}_{1},\ldots ,{\bar{x}}_{n}}\right) \) satisfies the system (3.1), which is equivalent to \( {Ax} \leq b \) .
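To see the bounds \( l \) and \( u \) in action, here is a two-variable toy instance (our illustration, not from the text). Eliminate \( {x}_{2} \) from \( {x}_{1} + {x}_{2} \leq 4 \) (so \( i \in {I}^{ + } \), i.e., \( {x}_{2} \leq 4 - {x}_{1} \)) and \( -{x}_{2} \leq -1 \) (so \( k \in {I}^{ - } \), i.e., \( {x}_{2} \geq 1 \)):
\[
l = 1 \leq {x}_{2} \leq 4 - {x}_{1} = u,
\]
so system (3.2) here reads \( 1 \leq 4 - {x}_{1} \), i.e., \( {x}_{1} \leq 3 \); any \( {\bar{x}}_{1} \leq 3 \) extends to a solution of the original system by picking \( {\bar{x}}_{2} \in \left\lbrack {1,4 - {\bar{x}}_{1}}\right\rbrack \).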
Therefore, the problem of finding a solution to \( {Ax} \leq b \) is reduced to finding a solution to (3.2), which is a system of linear inequalities in \( n - 1 \) variables. Fourier's elimination method is:
Given a system of linear inequalities \( {Ax} \leq b \), let \( {A}^{n} \mathrel{\text{:=}} A,{b}^{n} \mathrel{\text{:=}} b \) ;
For \( i = n,\ldots ,1 \), eliminate variable \( {x}_{i} \) from \( {A}^{i}x \leq {b}^{i} \) with the above procedure to obtain system \( {A}^{i - 1}x \leq {b}^{i - 1} \) .
System \( {A}^{1}x \leq {b}^{1} \), which involves variable \( {x}_{1} \) only, consists of inequalities of the type \( {x}_{1} \leq {b}_{p}^{1} \), \( p \in P \), \( -{x}_{1} \leq {b}_{q}^{1} \), \( q \in N \), and \( 0 \leq {b}_{i}^{1} \), \( i \in Z \).
System \( {A}^{0}x \leq {b}^{0} \) has the following inequalities: \( 0 \leq {b}_{pq}^{0} \mathrel{\text{:=}} {b}_{p}^{1} + {b}_{q}^{1} \) , \( p \in P, q \in N,0 \leq {b}_{i}^{0} \mathrel{\text{:=}} {b}_{i}^{1}, i \in Z. \)
Applying Theorem 3.1, we obtain that \( {Ax} \leq b \) is feasible if and only if \( {A}^{0}x \leq {b}^{0} \) is feasible, and this happens when the \( {b}_{pq}^{0} \) and \( {b}_{i}^{0} \) are all nonnegative.
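The elimination step and the final feasibility test can be sketched in a few lines of code. This is our illustration, not the book's pseudocode; it uses exact rational arithmetic and the normalization from the derivation of (3.2).

```python
from fractions import Fraction

def eliminate(rows, j):
    """One Fourier step: remove variable j from a system given as a list
    of pairs (a, b) meaning a[0]*x0 + ... + a[n-1]*x(n-1) <= b."""
    pos = [r for r in rows if r[0][j] > 0]    # I+
    neg = [r for r in rows if r[0][j] < 0]    # I-
    new = [r for r in rows if r[0][j] == 0]   # carried over unchanged
    for ap, bp in pos:
        for an, bn in neg:
            # scale so x_j has coefficients +1 and -1, then sum the pair
            cp = Fraction(1) / ap[j]
            cn = Fraction(1) / -an[j]
            a = [cp * ap[k] + cn * an[k] for k in range(len(ap))]
            new.append((a, cp * bp + cn * bn))
    return new

def fourier_feasible(rows, n):
    """Ax <= b is feasible iff, after all n variables are eliminated,
    every residual inequality 0 <= b_i has a nonnegative b_i."""
    for j in reversed(range(n)):
        rows = eliminate(rows, j)
    return all(b >= 0 for _, b in rows)

# x1 >= 1, x2 >= 1, x1 + x2 <= 4 is feasible.
print(fourier_feasible([([-1, 0], -1), ([0, -1], -1), ([1, 1], 4)], 2))  # True
```

Replacing the right-hand side 4 by 1 makes the system infeasible, and the routine indeed ends with a residual inequality \( 0 \leq -1 \).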
## Remark 3.2.
(i) At each iteration, Fourier’s method removes \( \left| {I}^{ + }\right| + \left| {I}^{ - }\right| \) inequalities and adds \( \left| {I}^{ + }\right| \times \left| {I}^{ - }\right| \) inequalities, so the number of inequalities may roughly be squared at each iteration. Thus, after eliminating \( p \) variables, the number of inequalities may be exponential in \( p \).
(ii) If matrix \( A \) and vector \( b \) have only rational entries, then all coefficients in (3.2) are rational.
(iii) Every inequality of \( {A}^{i}x \leq {b}^{i} \) is a nonnegative combination of inequalities of \( {Ax} \leq b \) .
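A quick back-of-envelope iteration (ours) makes the blow-up in Remark 3.2(i) concrete: in the worst case the current \( m \) inequalities split evenly between \( {I}^{ + } \) and \( {I}^{ - } \), so all \( m \) are removed and roughly \( {m}^{2}/4 \) sums are added.

```python
# Worst case for one elimination: m inequalities split evenly between
# I+ and I-, so all m are removed and (m/2)*(m - m/2) sums are added.
m = 16
for _ in range(3):
    m = (m // 2) * (m - m // 2)
print(m)  # 262144: three eliminations turn 16 inequalities into ~2.6e5
```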
Example 3.3. Consider the system \( {A}^{3}x \leq {b}^{3} \) of linear inequalities in three variables
\[
\begin{aligned}
-{x}_{2} &\leq -1 \\
-{x}_{1}-{x}_{2} &\leq -3 \\
-{x}_{1}-{x}_{3} &\leq -4 \\
-{x}_{1} &\leq -1 \\
-{x}_{3} &\leq -1 \\
-{x}_{2}-{x}_{3} &\leq -3 \\
{x}_{1}+{x}_{2}+{x}_{3} &\leq 6
\end{aligned}
\]
Applying Fourier’s procedure to eliminate variable \( {x}_{3} \), we obtain the system \( {A}^{2}x \leq {b}^{2} \):
\[
\begin{aligned}
-{x}_{1} &\leq -1 \\
-{x}_{2} &\leq -1 \\
-{x}_{1}-{x}_{2} &\leq -3 \\
{x}_{2} &\leq 2 \\
{x}_{1}+{x}_{2} &\leq 5 \\
{x}_{1} &\leq 3
\end{aligned}
\]
where the last three inequalities are obtained from \( {A}^{3}x \leq {b}^{3} \) by summing the third, fifth, and sixth inequality, respectively, with the last inequality. Eliminating variable \( {x}_{2} \), we obtain \( {A}^{1}x \leq {b}^{1} \), and eliminating \( {x}_{1} \) we finally obtain \( {A}^{0}x \leq {b}^{0} \), all of whose right-hand sides are nonnegative.
Therefore \( {A}^{0}x \leq {b}^{0} \) is feasible. A solution can now be found by backward substitution. System \( {A}^{1}x \leq {b}^{1} \) is equivalent to \( 1 \leq {x}_{1} \leq 3 \) . Since \( {x}_{1} \) can take any value in this interval, choose \( {\bar{x}}_{1} = 3 \) . Substituting \( {x}_{1} = 3 \) in \( {A}^{2}x \leq {b}^{2} \), we obtain \( 1 \leq {x}_{2} \leq 2 \) . If we choose \( {\bar{x}}_{2} = 1 \) and substitute \( {x}_{2} = 1 \) and \( {x}_{1} = 3 \) in \( {A}^{3}x \leq {b}^{3} \), we finally obtain \( {x}_{3} = 2 \) . This gives the solution \( \bar{x} = \left( {3,1,2}\right) \) . ∎
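As a quick sanity check (our addition), one can verify the solution \( \bar{x} = \left( {3,1,2}\right) \) against a few rows of \( {A}^{3}x \leq {b}^{3} \); note that each of these rows holds with equality, i.e., they are tight at \( \bar{x} \).

```python
x1, x2, x3 = 3, 1, 2  # the solution found by backward substitution

# a few rows of A^3 x <= b^3 as (lhs evaluated at x̄, rhs)
checks = [
    (-x2, -1),               # -x2 <= -1
    (-x2 - x3, -3),          # -x2 - x3 <= -3
    (x1 + x2 + x3, 6),       # x1 + x2 + x3 <= 6
]
assert all(lhs <= rhs for lhs, rhs in checks)
assert all(lhs == rhs for lhs, rhs in checks)  # all three are tight
```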
## 3.2 Farkas' Lemma
Next we present Farkas' lemma, which gives a simple necessary and sufficient condition for the existence of a solution to a system of linear inequalities. Farkas' lemma is the analogue of the Fredholm alternative for a system of linear equalities (Theorem 1.19).
Theorem 3.4 (Farkas’ Lemma). A system of linear inequalities \( {Ax} \leq b \) is infeasible if and only if the system \( {uA} = 0,{ub} < 0, u \geq 0 \) is feasible.
Proof. Assume \( {uA} = 0,{ub} < 0, u \geq 0 \) is feasible. Then \( 0 = {uAx} \leq {ub} < 0 \) for any \( x \) satisfying \( {Ax} \leq b \) . It follows that \( {Ax} \leq b \) is infeasible and this proves the "if" part.
We now prove the "only if" part. Assume that \( {Ax} \leq b \) has no solution. Apply the Fourier elimination method to \( {Ax} \leq b \) to eliminate all variables \( {x}_{n},\ldots ,{x}_{1} \) . System \( {A}^{0}x \leq {b}^{0} \) is of the form \( 0 \leq {b}^{0} \), and the system \( {Ax} \leq b \) has a solution if and only if all the entries of \( {b}^{0} \) are nonnegative. Since \( {Ax} \leq b \) has no solution, it follows that \( {b}^{0} \) has a negative entry, say \( {b}_{i}^{0} < 0 \) .
By Remark 3.2(iii), every inequality of the system \( 0 \leq {b}^{0} \) is a nonnegative combination of inequalities of \( {Ax} \leq b \) . In particular, there exists some vector \( u \geq 0 \) such that the inequality \( 0 \leq {b}_{i}^{0} \) is identical to \( {uAx} \leq {ub} \) . That is, \( u \geq 0,{uA} = 0,{ub} = {b}_{i}^{0} < 0 \) is feasible.
Farkas' lemma is sometimes referred to as a theorem of the alternative because it can be restated as follows.
Exactly one among the system \( {Ax} \leq b \) and the system \( {uA} = 0,{ub} < 0, u \geq 0 \) is feasible.
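To make the alternative concrete, here is a toy certificate (our example, not from the text): the system \( {x} \leq 0 \), \( -{x} \leq -1 \) is infeasible, and \( u = \left( {1,1}\right) \) witnesses this.

```python
A = [[1], [-1]]   # encodes x <= 0 and -x <= -1 (i.e., x >= 1)
b = [0, -1]
u = [1, 1]        # candidate Farkas certificate

uA = [sum(u[i] * A[i][j] for i in range(2)) for j in range(1)]
ub = sum(u[i] * b[i] for i in range(2))

assert uA == [0] and ub < 0 and min(u) >= 0  # uA = 0, ub = -1 < 0, u >= 0
```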
The following is Farkas' lemma for systems of equations in nonnegative variables.
Theorem 3.5. The system \( {Ax} = b, x \geq 0 \) is feasible if and only if \( {ub} \leq 0 \) for every \( u \) satisfying \( {uA} \leq 0 \) .
Proof. If \( {Ax} = b, x \geq 0 \) is feasible, then \( {ub} \leq 0 \) for every \( u \) satisfying \( {uA} \leq 0 \) . For the converse, suppose that \( {Ax} = b, x \geq 0 \) is infeasible. Then the system \( {Ax} \leq b, - {Ax} \leq - b, - x \leq 0 \) is infeasible. By Theorem 3.4 there exists \( \left( {v,{v}^{\prime }, w}\right) \geq 0 \) such that \( {vA} - {v}^{\prime }A - w = 0 \) and \( {vb} - {v}^{\prime }b < 0 \) . The vector \( u \mathrel{\text{:=}} {v}^{\prime } - v \) satisfies \( {ub} > 0 \) and since \( w \geq 0, u \) satisfies \( {uA} \leq 0 \) .
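The reduction used in the proof — rewriting \( {Ax} = b, x \geq 0 \) as \( {Ax} \leq b, -{Ax} \leq -b, -x \leq 0 \) — can be spot-checked numerically. This is our sketch, with the sample data \( A = \left( {1\;1}\right) \), \( b = 2 \).

```python
A_row, b = [1, 1], 2

def solves_eq(x):       # Ax = b, x >= 0
    return sum(a * xi for a, xi in zip(A_row, x)) == b and min(x) >= 0

def solves_ineq(x):     # Ax <= b, -Ax <= -b, -x <= 0
    s = sum(a * xi for a, xi in zip(A_row, x))
    return s <= b and -s <= -b and all(-xi <= 0 for xi in x)

# the two encodings agree on every sample point
for x in [(1, 1), (2, 0), (3, -1), (1, 2)]:
    assert solves_eq(x) == solves_ineq(x)
```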
We finally present a more general, yet still equivalent, form of Farkas' lemma.
Theorem 3.6. The system \( {Ax} + {By} \leq f,{Cx} + {Dy} = g, x \geq 0 \) is feasible if and only if \( {uf} + {vg} \geq 0 \) for every \( \left( {u, v}\right) \) satisfying \( {uA} + {vC} \geq 0 \) , \( {uB} + {vD} = 0, u \geq 0 \) .
Theorem 3.6 can be derived from
Corollary 4.34. Suppose \( X \) and \( Y \) are Hausdorff locally convex spaces, and suppose \( Y \) is barreled. Then any linear map \( T \) from \( X \) onto \( Y \) is nearly open.
Proof. Use \( {\mathcal{B}}_{0} = \) all convex, balanced neighborhoods of 0 in \( X \) . If \( B \in {\mathcal{B}}_{0} \), and \( x \in X \), then \( x \in {cB} \Rightarrow T\left( x\right) \in T\left( {cB}\right) = {cT}\left( B\right) \), so \( T\left( B\right) \) is convex, balanced, and absorbent (since \( T \) is onto). Hence \( T{\left( B\right) }^{ - } \) is closed, convex (Proposition 2.13), balanced (Proposition 2.5), and absorbent, that is \( T{\left( B\right) }^{ - } \) is a barrel in \( Y \) . Since \( Y \) is assumed to be barreled, \( T{\left( B\right) }^{ - } \) is a neighborhood of 0 . Hence \( T \) is nearly open by Corollary 4.33.
(There are certain points where using the barreled condition seems almost like cheating. This is one of them.)
We can now prove the open mapping theorem in our Hausdorff, locally convex space context.
Theorem 4.35 (Open Mapping Theorem). Suppose \( X \) and \( Y \) are Hausdorff locally convex spaces, and \( T : X \rightarrow Y \) is a continuous linear map.
(a) If \( T \) is onto and \( Y \) is barreled, then \( T \) is nearly open.
(b) If \( T \) is nearly open and \( X \) is a Fréchet space, then \( T \) is open (and onto).
Proof. Part (a) is immediate from Corollary 4.34. For part (b), assume \( T \) is nearly open and \( X \) is a Fréchet space. Let \( U \) be a neighborhood of 0 in \( X \) . Choose a base \( {\mathcal{B}}_{0} = \left\{ {{B}_{1},{B}_{2},\ldots }\right\} \) in accordance with Theorem 1.13 for which \( {B}_{j} = - {B}_{j} \) and \( {B}_{j + 1} + {B}_{j + 1} \subset {B}_{j} \), all \( {B}_{j} \) being closed, and \( {B}_{1} \subset U \) .
Suppose \( y \in T{\left( {B}_{2}\right) }^{ - } \) . Then since \( T{\left( {B}_{3}\right) }^{ - } \) is a neighborhood of 0,
\[
y \in T{\left( {B}_{2}\right) }^{ - } \subset T\left( {B}_{2}\right) + T{\left( {B}_{3}\right) }^{ - },
\]
so \( y = T\left( {x}_{2}\right) + {y}_{3},{x}_{2} \in {B}_{2} \) and \( {y}_{3} \in T{\left( {B}_{3}\right) }^{ - } \) . Since \( T{\left( {B}_{4}\right) }^{ - } \) is a neighborhood
of 0,
\[
{y}_{3} \in T{\left( {B}_{3}\right) }^{ - } \subset T\left( {B}_{3}\right) + T{\left( {B}_{4}\right) }^{ - }\text{, so }
\]
\[
{y}_{3} = T\left( {x}_{3}\right) + {y}_{4};{x}_{3} \in {B}_{3}\text{ and }{y}_{4} \in T{\left( {B}_{4}\right) }^{ - }.
\]
Recursively:
\[
{y}_{n} \in T{\left( {B}_{n}\right) }^{ - } \subset T\left( {B}_{n}\right) + T{\left( {B}_{n + 1}\right) }^{ - }\text{, so }
\]
\[
{y}_{n} = T\left( {x}_{n}\right) + {y}_{n + 1},{x}_{n} \in {B}_{n}\text{ and }{y}_{n + 1} \in T{\left( {B}_{n + 1}\right) }^{ - }.
\]
We now have that \( x = \sum {x}_{j} \) converges, thanks to Theorem 1.35. Now, as a result of the observation in the paragraph preceding Theorem 1.35, \( {B}_{2} + {B}_{3} + \cdots + {B}_{n} \subset {B}_{1} \), that is, all partial sums of \( \sum {x}_{j} \) lie in \( {B}_{1} \), so \( x \in {B}_{1} \) since \( {B}_{1} \) is closed. There is more:
\[
y = T\left( {x}_{2}\right) + {y}_{3} = T\left( {x}_{2}\right) + T\left( {x}_{3}\right) + {y}_{4}
\]
\[
= T\left( {x}_{2}\right) + T\left( {x}_{3}\right) + T\left( {x}_{4}\right) + {y}_{5}
\]
\[
= \cdots = \left( {\mathop{\sum }\limits_{{j = 2}}^{n}T\left( {x}_{j}\right) }\right) + {y}_{n + 1} = T\left( {\mathop{\sum }\limits_{{j = 2}}^{n}{x}_{j}}\right) + {y}_{n + 1}
\]
So:
\[
{y}_{n + 1} = y - T\left( {\mathop{\sum }\limits_{{j = 2}}^{n}{x}_{j}}\right) \rightarrow y - T\left( x\right)
\]
since \( T \) is continuous. Note that \( {y}_{n} \in T{\left( {B}_{n}\right) }^{ - } \), and
\[
k \geq n \Rightarrow {B}_{k} \subset {B}_{n} \Rightarrow {y}_{k} \in T{\left( {B}_{k}\right) }^{ - } \subset T{\left( {B}_{n}\right) }^{ - },
\]
so \( y - T\left( x\right) \in T{\left( {B}_{n}\right) }^{ - } \) for all \( n \) . Now if \( V \) is any closed neighborhood of 0 in \( Y \), then \( {B}_{n} \subset {T}^{-1}\left( V\right) \) for some \( n \) since \( T \) is continuous, so \( T\left( {B}_{n}\right) \subset V \) and \( y - T\left( x\right) \in T{\left( {B}_{n}\right) }^{ - } \subset V \) since \( V \) is closed. Hence \( y - T\left( x\right) = 0 \) since \( Y \) is Hausdorff. We now have that \( y = T\left( x\right) \in T\left( {B}_{1}\right) \subset T\left( U\right) \) . Since \( y \) was arbitrary in \( T{\left( {B}_{2}\right) }^{ - } : T{\left( {B}_{2}\right) }^{ - } \subset T\left( U\right) \), so \( T\left( U\right) \) is a neighborhood of 0 in \( Y \) . Hence \( T \) is open by Proposition 1.26(b). Finally, \( T \) is now onto, since \( T\left( X\right) \) must be an open subspace of \( Y \), that is \( T\left( X\right) = Y \) .
Remark. The above proof that "nearly open \( \Rightarrow \) open" actually works for topological groups. The hypotheses required are that \( X \) and \( Y \) be Hausdorff topological groups, with \( X \) being first countable and complete; and \( T : X \rightarrow Y \) be a continuous, nearly open homomorphism. The conclusion is that \( T \) is an open map, and that \( T\left( X\right) \) is then an open subgroup of \( Y \) . The proof is almost identical: all you have to do is replace each plus sign with a multiplication symbol (e.g., " \( y = T\left( {x}_{2}\right) + {y}_{3} \) " becomes " \( y = T\left( {x}_{2}\right) \cdot {y}_{3} \) ", and " \( x = \sum {x}_{j} \) " becomes " \( x = \prod {x}_{j} \) "), after reversing the order of \( y - T\left( x\right) \) [and \( y - T\left( {\sum {x}_{j}}\right) \) ], which, for example, becomes \( - T\left( x\right) + y \) and then \( T{\left( x\right) }^{-1} \cdot y \) . That’s it. Something similar happens with the closed graph theorem.
By the way, a typical application of the "nearly open \( \Rightarrow \) onto" part will appear in Chap. 5.
Corollary 4.36. Suppose \( X \) is a Fréchet space, and \( Y \) is a barreled, Hausdorff, locally convex space. Suppose \( T : X \rightarrow Y \) is a continuous linear map from \( X \) onto \( Y \) . Then \( T \) induces an isomorphism of the Fréchet space \( X/\ker \left( T\right) \) with \( Y \) .
Proof. The induced map is continuous and open by Theorem 1.23(c) and (e), and is an algebraic isomorphism for the usual algebraic reasons. \( X/\ker \left( T\right) \) is Hausdorff since \( \ker T = {T}^{-1}\left( {\{ 0\} }\right) \) is closed [Theorem \( {1.23}\left( \mathrm{\;g}\right) \) ], while \( X/\ker \left( T\right) \) is first countable by Theorem 1.23(f) and complete by Corollary 1.36. Hence \( X/\ker \left( T\right) \) is a Fréchet space (Corollary 3.36).
Now for the closed graph theorem.
## 4.5 The Closed Graph Theorem
For Banach spaces, it is traditional to prove the open mapping theorem, and then derive the closed graph theorem as a corollary. There is a reason for that, even though the open mapping theorem can be just as easily derived from the closed graph theorem. A direct proof of the open mapping theorem involves two steps; these are parts (a) and (b) of Theorem 4.35 in the last section. [In our context, part (a) was trivial only because Theorem 4.5 was available.] A direct proof of the closed graph theorem, however, involves three steps. Here, we have to go through that because the two results are largely independent. True, one can derive Corollary 4.36 from the closed graph theorem as it appears here, but Theorem 4.35(b) does not directly follow from this. Also, the closed graph theorem cannot be derived directly from the open mapping theorem due to the asymmetry in the conditions on the spaces: the graph does not inherit any nice properties.
There is a version of the closed graph theorem that can be used to directly prove the open mapping theorem; it is rather messy, and is given in Appendix B. (It applies to topological groups.)
Theorem 4.37 (Closed Graph Theorem). Suppose \( X \) and \( Y \) are Hausdorff locally convex spaces, and suppose \( X \) is barreled and \( Y \) is a Fréchet space. Suppose \( T : X \rightarrow Y \) is a linear transformation with a graph, \( \Gamma \left( T\right) \), that is closed in \( X \times Y \) . Then \( T \) is continuous.
Proof. Let \( U \) be a convex, balanced neighborhood of 0 in \( Y \) . In view of Proposition 1.26(a), it suffices to prove that for any such \( U,{T}^{-1}\left( U\right) \) is a neighborhood of 0 . As in the proof of the open mapping theorem, choose a base at 0 for \( Y : {\mathcal{B}}_{0} = \left\{ {{B}_{1},{B}_{2},\ldots }\right\} \), with \( {B}_{j} = - {B}_{j},{B}_{j + 1} + {B}_{j + 1} \subset {B}_{j} \), all \( {B}_{j} \) closed, and \( {B}_{1} \subset U \) . Given any \( {B}_{j} \), there exists a convex, balanced neighborhood \( W \) of 0 such that \( W \subset {B}_{j} \) . Now given any \( x, W \) absorbs \( T\left( x\right) \), so \( {T}^{-1}\left( W\right) \) absorbs \( x \) . That is, \( {T}^{-1}\left( W\right) \) is convex, balanced, and absorbent, so \( {T}^{-1}{\left( W\right) }^{ - } \) is a barrel, and so is a neighborhood of 0 in \( X \) . Since \( W \subset {B}_{j} : {T}^{-1}\left( W\right) \subset {T}^{-1}\left( {B}_{j}\right) \), so \( {T}^{-1}{\left( W\right) }^{ - } \subset {T}^{-1}{\left( {B}_{j}\right) }^{ - } \) . That is, every \( {T}^{-1}{\left( {B}_{j}\right) }^{ - } \) is a neighborhood of 0 in \( X \) . This is the first step in the proof.
For the second step, suppose \( x \in {T}^{-1}{\left( {B}_{2}\right) }^{ - } \) . We will eventually show that \( T\left( x\right) \in U \), so that \( x \in {T}^{-1}\left( U\right) \), giving \( {T}^{-1}{\left( {B}_{2}\right) }^{ - } \subset {T}^{-1}\left( U\right) \) by letting \( x \) vary. This will complete the proof, but there are two distinct parts to this. The current one produces a candidate \( y \in {B}_{1} \subset U \) for which (eventually) \( T\left( x\right) \) will equal \( y \), and the last step will establish that \( T\left( x\right) = y \) .
Lemma 12.17. If \( \left( {K,\mu ;\varphi }\right) \) is a faithful topological measure-preserving system, then \( \varphi \left( K\right) = K \), i.e., \( \left( {K;\varphi }\right) \) is a surjective topological system.
Theorem 10.2 of Krylov and Bogoljubov tells us that every topological system \( \left( {K;\varphi }\right) \) has at least one invariant probability measure, and hence gives rise to at least one topological measure-preserving system. (By the lemma above, this topological measure-preserving system cannot be faithful if \( \left( {K;\varphi }\right) \) is not a surjective system. But even if the topological system is surjective and uniquely ergodic, the arising measure-preserving system need not be faithful, as Exercise 9 shows.) Conversely, one may ask:
Is every measure-preserving system (algebra) isomorphic to a topological one?
Before we answer this question in the affirmative, it is convenient to pass to a larger category (see also the discussion at the end of this section).
Definition 12.18. An abstract measure-preserving system is a pair \( \left( {\mathrm{X};T}\right) \), where \( \mathrm{X} \) is a probability space and \( T : {\mathrm{L}}^{1}\left( \mathrm{X}\right) \rightarrow {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) is a Markov embedding.
For simplicity, an abstract measure-preserving system is also called just an abstract system. A homomorphism
\[
S : \left( {{\mathrm{X}}_{1};{T}_{1}}\right) \rightarrow \left( {{\mathrm{X}}_{2};{T}_{2}}\right)
\]
of abstract systems \( \left( {{\mathrm{X}}_{1};{T}_{1}}\right) ,\left( {{\mathrm{X}}_{2};{T}_{2}}\right) \) is a Markov embedding \( S : {\mathrm{L}}^{1}\left( {\mathrm{X}}_{1}\right) \rightarrow \) \( {\mathrm{L}}^{1}\left( {\mathrm{X}}_{2}\right) \) that intertwines the operators \( {T}_{1},{T}_{2} \), i.e., such that \( {T}_{2}S = S{T}_{1} \) . In this case \( \left( {{\mathrm{X}}_{2};{T}_{2}}\right) \) is called an extension of \( \left( {{\mathrm{X}}_{1};{T}_{1}}\right) \) and \( \left( {{\mathrm{X}}_{1};{T}_{1}}\right) \) is called a factor of \( \left( {{\mathrm{X}}_{2};{T}_{2}}\right) \) . (This is coherent with the terminology on page 233.) A surjective (= bijective) homomorphism \( S \) is an isomorphism. In this case its inverse \( {S}^{-1} \) is also a homomorphism. Finally, an abstract system \( \left( {\mathrm{X};T}\right) \) is invertible if \( T \) is invertible, and it is called ergodic if \( \operatorname{fix}\left( T\right) = \mathbb{C}\mathbf{1} \) .
Given two abstract systems \( \left( {{\mathrm{X}}_{1};{T}_{1}}\right) \) and \( \left( {{\mathrm{X}}_{2};{T}_{2}}\right) \) one can form their product system \( \left( {{\mathrm{X}}_{1} \otimes {\mathrm{X}}_{2};{T}_{1} \otimes {T}_{2}}\right) \), see Exercise 16. An abstract system \( \left( {\mathrm{X};T}\right) \) is called weakly mixing if the product system \( \left( {\mathrm{X} \times \mathrm{X};T \otimes T}\right) \) is ergodic.
Example 12.19. Each measure-preserving system \( \left( {\mathrm{X};\varphi }\right) \) gives rise to an abstract system \( \left( {\mathrm{X};T}\right) \) where \( T \mathrel{\text{:=}} {T}_{\varphi } \) is the Koopman operator. According to Proposition 7.12, the system \( \left( {\mathrm{X};\varphi }\right) \) is invertible if and only if its abstract counterpart \( \left( {\mathrm{X};{T}_{\varphi }}\right) \) is invertible. Moreover, by Corollary 12.12 above, two measure-preserving systems \( \left( {\mathrm{X};\varphi }\right) \) and \( \left( {\mathrm{Y};\psi }\right) \) are algebra isomorphic if and only if the associated abstract systems \( \left( {\mathrm{X};{T}_{\varphi }}\right) \) and \( \left( {\mathrm{Y};{T}_{\psi }}\right) \) are isomorphic in the sense noted above. The Koopman operator \( {T}_{\theta } \) of a point factor map \( \theta : \left( {\mathrm{X};\varphi }\right) \rightarrow \left( {\mathrm{Y};\psi }\right) \) (Definition 12.1) is a homomorphism \( {T}_{\theta } : \left( {\mathrm{Y};{T}_{\psi }}\right) \rightarrow \left( {\mathrm{X};{T}_{\varphi }}\right) \) of abstract systems, hence yields a factor.
A (faithful) topological model of an abstract measure-preserving system \( \left( {\mathrm{X};T}\right) \) is any (faithful) topological measure-preserving system \( \left( {K,\mu ;\psi }\right) \) together with an isomorphism
\[
\Phi : \left( {K,\mu ;{T}_{\psi }}\right) \rightarrow \left( {\mathrm{X};T}\right)
\]
of abstract measure-preserving systems. In the following we shall show that every abstract system has (usually many) faithful topological models.
Suppose that \( \left( {\mathrm{X};T}\right) \) is an abstract measure-preserving system and let \( A \subseteq {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) be a \( {C}^{ * } \) -subalgebra. (Recall that this means that \( A \) is a norm-closed and conjugation invariant subalgebra with \( \mathbf{1} \in A \) .) By the Gelfand-Naimark Theorem 4.23, there is a compact space \( K \) and a (unital) \( {C}^{ * } \) -algebra isomorphism \( \Phi : \mathrm{C}\left( K\right) \rightarrow A \) . The Riesz representation theorem yields a unique probability measure \( \mu \in {\mathrm{M}}^{1}\left( K\right) \) such that
\[
{\int }_{K}f\mathrm{\;d}\mu = {\int }_{\mathrm{X}}{\Phi f}\;\left( {f \in \mathrm{C}\left( K\right) }\right) .
\]
(12.3)
(Note that the measure \( \mu \) has full support.) By Theorem 7.23 one has in addition
\[
\left| {\Phi f}\right| = \Phi \left| f\right| \;\left( {f \in \mathrm{C}\left( K\right) }\right) ,
\]
(12.4)
and this yields
\[
\parallel {\Phi f}{\parallel }_{{\mathrm{L}}^{1}\left( \mathrm{X}\right) } = {\int }_{\mathrm{X}}\Phi \left| f\right| = {\int }_{K}\left| f\right| \mathrm{d}\mu = \parallel f{\parallel }_{{\mathrm{L}}^{1}\left( {K,\mu }\right) }
\]
for every \( f \in \mathrm{C}\left( K\right) \), i.e., \( \Phi \) is an \( {\mathrm{L}}^{1} \) -isometry. Consequently, \( \Phi \) extends uniquely to an isometric embedding
\[
\Phi : {\mathrm{L}}^{1}\left( {K,\mu }\right) \rightarrow {\mathrm{L}}^{1}\left( \mathrm{X}\right)
\]
with range \( \operatorname{ran}\left( \Phi \right) = {\operatorname{cl}}_{{\mathrm{L}}^{1}}\left( A\right) \), the \( {\mathrm{L}}^{1} \) -closure of \( A \) . Moreover, it follows from (12.3) and (12.4) by approximation that \( \Phi \) is a Markov embedding.
Now, suppose in addition that \( A \) is \( T \) -invariant, i.e., \( T\left( A\right) \subseteq A \) . Then
\[
{\Phi }^{-1}{T\Phi } : \mathrm{C}\left( K\right) \rightarrow \mathrm{C}\left( K\right)
\]
is an algebra homomorphism, again by Theorem 7.23. Hence, by Theorem 4.13 there is a unique continuous map \( \psi : K \rightarrow K \) such that \( {\Phi }^{-1}{T\Phi } = {T}_{\psi } \) . Moreover, the measure \( \mu \) is \( \psi \) -invariant since
\[
{\int }_{K}f \circ \psi \mathrm{d}\mu = {\int }_{K}{\Phi }^{-1}{T\Phi f}\mathrm{\;d}\mu = {\int }_{\mathrm{X}}{T\Phi f} = {\int }_{\mathrm{X}}{\Phi f} = {\int }_{K}f\mathrm{\;d}\mu
\]
for every \( f \in \mathrm{C}\left( K\right) \) . It follows that \( \left( {K,\mu ;\psi }\right) \) is a faithful topological measure-preserving system, and that \( \Phi : {\mathrm{L}}^{1}\left( {K,\mu }\right) \rightarrow {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) is a Markov embedding that intertwines \( {T}_{\psi } \) and \( T \), i.e., a homomorphism of the dynamical systems.

We have proved the nontrivial part of the following theorem. (The remaining part is left as Exercise 10.)
Theorem 12.20. Let \( \left( {\mathrm{X};T}\right) \) be an abstract measure-preserving system. Then \( A \subseteq \) \( {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) is a \( T \) -invariant \( {C}^{ * } \) -subalgebra if and only if there exists a faithful topological measure-preserving system \( \left( {K,\mu ;\psi }\right) \) and a Markov embedding \( \Phi \) : \( {\mathrm{L}}^{1}\left( {K,\mu }\right) \rightarrow {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) with \( {T\Phi } = \Phi {T}_{\psi } \) and such that \( A = \Phi \left( {\mathrm{C}\left( K\right) }\right) \) .
Let us call a subalgebra \( A \) of \( {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) full if \( {\mathrm{{cl}}}_{{\mathrm{L}}^{1}}A = {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) . If \( A \) is full, then the Markov embedding \( \Phi \) in Theorem 12.20 is surjective, hence
\[
\Phi : \left( {K,\mu ;{T}_{\psi }}\right) \rightarrow \left( {\mathrm{X};T}\right)
\]
is an isomorphism of abstract dynamical systems, i.e., a (faithful) topological model of \( \left( {\mathrm{X};T}\right) \) .
Corollary 12.21. Every abstract measure-preserving system has a faithful topological model. In particular, every measure-preserving system is (algebra) isomorphic to a topological measure-preserving system.
Note that in the construction above we can choose an arbitrary full subalgebra, hence uniqueness of a model cannot be expected. For the choice \( A \mathrel{\text{:=}} {\mathrm{L}}^{\infty }\left( \mathrm{X}\right) \) we obtain a distinguished model, to be studied in more detail in Section 12.4 below. However, other models may be of interest, as in the following result.
Theorem 12.22 (Metric Models). An abstract measure-preserving system (X; T) has a metric model if and only if \( {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) is a separable Banach space.
Proof. Let \( \left( {K,\mu ;\psi }\right) \) be a metric model for \( \left( {\mathrm{X};T}\right) \) . By Theorem 4.7, \( \mathrm{C}\left( K\right) \) is a separable Banach space, and as any dense subset of \( \mathrm{C}\left( K\right) \) is also dense in \( {\mathrm{L}}^{1}\left( {K,\mu }\right) \) , the latter space must be separable as well.
Conversely, suppose that \( {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) is separable, and let \( M \subseteq {\mathrm{L}}^{1}\left( \mathrm{X}\right) \) be a countable dense set. Since \( {\mathrm{L}}^{\infty } \) is dense in \( {\mathrm{L}}^{1} \), we can approx
|
Lemma 12.17. If \( \left( {K,\mu ;\varphi }\right) \) is a faithful topological measure-preserving system, then \( \varphi \left( K\right) = K \), i.e., \( \left( {K;\varphi }\right) \) is a surjective topological system.
|
Proposition 16.45 (The Riemannian Density). Let \( \left( {M, g}\right) \) be a Riemannian manifold with or without boundary. There is a unique smooth positive density \( {\mu }_{g} \) on \( M \) , called the Riemannian density, with the property that
\[
{\mu }_{g}\left( {{E}_{1},\ldots ,{E}_{n}}\right) = 1
\]
(16.20)
for any local orthonormal frame \( \left( {E}_{i}\right) \) .
Proof. Uniqueness is immediate, because any two densities that agree on a basis must be equal. Given any point \( p \in M \), let \( U \) be a connected smooth coordinate neighborhood of \( p \) . Since \( U \) is diffeomorphic to an open subset of Euclidean space, it is orientable. Any choice of orientation of \( U \) uniquely determines a Riemannian volume form \( {\omega }_{g} \) on \( U \), with the property that \( {\omega }_{g}\left( {{E}_{1},\ldots ,{E}_{n}}\right) = 1 \) for any oriented orthonormal frame. If we put \( {\mu }_{g} = \left| {\omega }_{g}\right| \), it follows easily that \( {\mu }_{g} \) is a smooth positive density on \( U \) satisfying (16.20). If \( U \) and \( V \) are two overlapping smooth coordinate neighborhoods, the two definitions of \( {\mu }_{g} \) agree where they overlap by uniqueness, so this defines \( {\mu }_{g} \) globally.
- Exercise 16.46. Let \( \left( {M, g}\right) \) be an oriented Riemannian manifold with or without boundary and let \( {\omega }_{g} \) be its Riemannian volume form.
(a) Show that the Riemannian density of \( M \) is given by \( {\mu }_{g} = \left| {\omega }_{g}\right| \) .
(b) For any compactly supported continuous function \( f : M \rightarrow \mathbb{R} \), show that
\[
{\int }_{M}f{\mu }_{g} = {\int }_{M}f{\omega }_{g}
\]
- Exercise 16.47. Suppose \( \left( {M, g}\right) \) and \( \left( {\widetilde{M},\widetilde{g}}\right) \) are Riemannian manifolds with or without boundary, and \( F : M \rightarrow \widetilde{M} \) is a local isometry. Show that \( {F}^{ * }{\mu }_{\widetilde{g}} = {\mu }_{g} \) .
Because of Exercise 16.46(b), it is customary to denote the Riemannian density simply by \( d{V}_{g} \), and to specify when necessary whether the notation refers to a density or a form. If \( f : M \rightarrow \mathbb{R} \) is a compactly supported continuous function, the integral of \( f \) over \( M \) is defined to be \( {\int }_{M}{fd}{V}_{g} \) . Exercise 16.46 shows that when \( M \) is oriented, it does not matter whether we interpret \( d{V}_{g} \) as the Riemannian volume form or the Riemannian density. (If the orientation of \( M \) is changed, then both the integral and \( d{V}_{g} \) change signs, so the result is the same.) When \( M \) is not orientable, however, we have no choice but to interpret it as a density.
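For concrete computations it helps to record the coordinate expression of the Riemannian density: in any smooth chart, writing \( {g}_{ij} = g\left( {{\partial }_{i},{\partial }_{j}}\right) \), one has

\[
d{V}_{g} = \sqrt{\det \left( {g}_{ij}\right) }\left| {d{x}^{1} \land \cdots \land d{x}^{n}}\right| ,
\]

as can be checked by evaluating both sides on an orthonormal frame obtained from the coordinate frame by the Gram-Schmidt process.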
One of the most useful applications of densities is that they enable us to generalize the divergence theorem to nonorientable manifolds. If \( X \) is a smooth vector field on \( M \), Exercise 16.31 shows that the divergence of \( X \) can be defined even when \( M \) is not orientable. The next theorem shows that the divergence theorem holds in that case as well.
Theorem 16.48 (The Divergence Theorem in the Nonorientable Case). Suppose \( \left( {M, g}\right) \) is a nonorientable Riemannian manifold with boundary. For any compactly supported smooth vector field \( X \) on \( M \) ,
\[
{\int }_{M}\left( {\operatorname{div}X}\right) {\mu }_{g} = {\int }_{\partial M}\langle X, N{\rangle }_{g}{\mu }_{\widetilde{g}}
\]
(16.21)
where \( N \) is the outward-pointing unit normal vector field along \( \partial M \), \( \widetilde{g} \) is the induced Riemannian metric on \( \partial M \), and \( {\mu }_{g},{\mu }_{\widetilde{g}} \) are the Riemannian densities of \( g \) and \( \widetilde{g} \), respectively.
Proof. Let \( \widehat{\pi } : \widehat{M} \rightarrow M \) be the orientation covering of \( M \) . Problem 5-12 shows that \( \widehat{\pi } \) restricts to a smooth covering map from each component of \( \partial \widehat{M} \) to a component of \( \partial M \), so in the terminology of Chapter 15, \( \widehat{\pi } : \partial \widehat{M} \rightarrow \partial M \) is a generalized covering map.
Define metrics \( \widehat{g} = {\widehat{\pi }}^{ * }g \) on \( \widehat{M} \) and \( \bar{g} = {\widehat{\pi }}^{ * }\widetilde{g} \) on \( \partial \widehat{M} \) . Denote the Riemannian volume forms of \( \widehat{g} \) and \( \bar{g} \) by \( {\omega }_{\widehat{g}} \) and \( {\omega }_{\bar{g}} \), respectively, and their Riemannian densities by \( {\mu }_{\widehat{g}} \) and \( {\mu }_{\bar{g}} \) . Because \( \widehat{\pi } \) is a local isometry, it is easy to check that the outward unit normal \( \widehat{N} \) along \( \partial \widehat{M} \) is \( \widehat{\pi } \) -related to \( N \) . Moreover, it follows from Problem 8-18(a) that there is a unique smooth vector field \( \widehat{X} \) on \( \widehat{M} \) that is \( \widehat{\pi } \) -related to \( X \) .
Since \( \widehat{M} \) is an oriented smooth Riemannian manifold with boundary, we can apply the usual divergence theorem to it to obtain
\[
2{\int }_{M}\left( {\operatorname{div}X}\right) {\mu }_{g} = {\int }_{\widehat{M}}{\widehat{\pi }}^{ * }\left( {\left( {\operatorname{div}X}\right) {\mu }_{g}}\right) \;\text{ (by Problem 16-3) }
\]
\[
= {\int }_{\widehat{M}}\left( {\operatorname{div}\widehat{X}}\right) {\mu }_{\widehat{g}}\;\left( {\widehat{\pi }\text{ is a local isometry }}\right)
\]
\[
= {\int }_{\widehat{M}}\left( {\operatorname{div}\widehat{X}}\right) {\omega }_{\widehat{g}}\;\text{(by Exercise 16.46(b))}
\]
\[
= {\int }_{\partial \widehat{M}}\langle \widehat{X},\widehat{N}{\rangle }_{\widehat{g}}{\omega }_{\bar{g}}\;\text{(divergence theorem on}\widehat{M}\text{)}
\]
\[
= {\int }_{\partial \widehat{M}}\langle \widehat{X},\widehat{N}{\rangle }_{\widehat{g}}{\mu }_{\bar{g}}\;\text{(by Exercise 16.46(b))}
\]
\[
= {\int }_{\partial \widehat{M}}{\left( {\left. \widehat{\pi }\right| }_{\partial \widehat{M}}\right) }^{ * }\left( {\langle X, N{\rangle }_{g}{\mu }_{\widetilde{g}}}\right) \;\left( {{\left. \widehat{\pi }\right| }_{\partial \widehat{M}}\text{ is a local isometry}}\right)
\]
\[
= 2{\int }_{\partial M}\langle X, N{\rangle }_{g}{\mu }_{\widetilde{g}}\;\text{ (by Problem 16-3). }
\]
Dividing both sides by 2 yields (16.21).
## Problems
16-1. Let \( {v}_{1},\ldots ,{v}_{n} \) be any \( n \) linearly independent vectors in \( {\mathbb{R}}^{n} \), and let \( P \) be the \( n \) -dimensional parallelepiped they span:
\[
P = \left\{ {{t}_{1}{v}_{1} + \cdots + {t}_{n}{v}_{n} : 0 \leq {t}_{i} \leq 1}\right\} .
\]
Show that \( \operatorname{Vol}\left( P\right) = \left| {\det \left( {{v}_{1},\ldots ,{v}_{n}}\right) }\right| \) . (Used on p. 401.)
16-2. Let \( {\mathbb{T}}^{2} = {\mathbb{S}}^{1} \times {\mathbb{S}}^{1} \subseteq {\mathbb{R}}^{4} \) denote the 2-torus, defined as the set of points \( \left( {w, x, y, z}\right) \) such that \( {w}^{2} + {x}^{2} = {y}^{2} + {z}^{2} = 1 \), with the product orientation determined by the standard orientation on \( {\mathbb{S}}^{1} \) . Compute \( {\int }_{{\mathbb{T}}^{2}}\omega \), where \( \omega \) is the following 2-form on \( {\mathbb{R}}^{4} \) :
\[
\omega = {xyzdw} \land {dy}.
\]
16-3. Suppose \( E \) and \( M \) are smooth \( n \) -manifolds with or without boundary, and \( \pi : E \rightarrow M \) is a smooth \( k \) -sheeted covering map or generalized covering map.
(a) Show that if \( E \) and \( M \) are oriented and \( \pi \) is orientation-preserving, then \( {\int }_{E}{\pi }^{ * }\omega = k{\int }_{M}\omega \) for any compactly supported \( n \) -form \( \omega \) on \( M \) .
(b) Show that \( {\int }_{E}{\pi }^{ * }\mu = k{\int }_{M}\mu \) whenever \( \mu \) is a compactly supported density on \( M \) .
16-4. Suppose \( M \) is an oriented compact smooth manifold with boundary. Show that there does not exist a retraction of \( M \) onto its boundary. [Hint: if the retraction is smooth, consider an orientation form on \( \partial M \) .]
16-5. Suppose \( M \) and \( N \) are oriented, compact, connected, smooth manifolds, and \( F, G : M \rightarrow N \) are homotopic diffeomorphisms. Show that \( F \) and \( G \) are either both orientation-preserving or both orientation-reversing. [Hint: use Theorem 6.29 and Stokes’s theorem on \( M \times I \) .]
16-6. THE HAIRY BALL THEOREM: There exists a nowhere-vanishing vector field on \( {\mathbb{S}}^{n} \) if and only if \( n \) is odd. ("You cannot comb the hair on a ball.") Prove this by showing that the following are equivalent:
(a) There exists a nowhere-vanishing vector field on \( {\mathbb{S}}^{n} \) .
(b) There exists a continuous map \( V : {\mathbb{S}}^{n} \rightarrow {\mathbb{S}}^{n} \) satisfying \( V\left( x\right) \bot x \) (with respect to the Euclidean dot product on \( {\mathbb{R}}^{n + 1} \) ) for all \( x \in {\mathbb{S}}^{n} \) .
(c) The antipodal map \( \alpha : {\mathbb{S}}^{n} \rightarrow {\mathbb{S}}^{n} \) is homotopic to \( {\operatorname{Id}}_{{\mathbb{S}}^{n}} \) .
(d) The antipodal map \( \alpha : {\mathbb{S}}^{n} \rightarrow {\mathbb{S}}^{n} \) is orientation-preserving.
(e) \( n \) is odd.
[Hint: use Problems 9-4, 15-3, and 16-5.]
16-7. Show that any finite product \( {M}_{1} \times \cdots \times {M}_{k} \) of smooth manifolds with corners is again a smooth manifold with corners. Give a counterexample to show that a finite product of smooth manifolds with boundary need not be a smooth manifold with boundary.
16-8. Suppose \( M \) is a smooth manifold with corners, and let \( \mathcal{C} \) denote the set of corner points of \( M \) . Show that \( M \smallsetminus \mathcal{C} \) is a smooth manifold with boundary.
16-9. Let \( \omega \) be the \( \left( {n - 1}\right) \) -form on \( {\mathbb{R}}^{n} \smallsetminus \{ 0\} \) defined by
\[
\omega = {\left| x\right| }^{-n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( -1\right) }^{i - 1}{x}^{i}d{x}^{1} \land \cdots \land \widehat{d{x}^{i}} \land \cdots \land d{x}^{n}.
\]
|
Exercise 1.3.3 Assuming the \( {ABC} \) Conjecture, show that there are infinitely many primes \( p \) such that \( {2}^{p - 1} ≢ 1\left( {\;\operatorname{mod}\;{p}^{2}}\right) \) .
Exercise 1.3.4 Show that the number of primes \( p \leq x \) for which
\[
{2}^{p - 1} ≢ 1\;\left( {\;\operatorname{mod}\;{p}^{2}}\right)
\]
is \( \gg \log x/\log \log x \), assuming the \( {ABC} \) Conjecture.
In 1909, Wieferich proved that if \( p \) is a prime satisfying
\[
{2}^{p - 1} ≢ 1\;\left( {\;\operatorname{mod}\;{p}^{2}}\right)
\]
then the equation \( {x}^{p} + {y}^{p} = {z}^{p} \) has no nontrivial integral solutions satisfying \( p \nmid {xyz} \) . It is still unknown without assuming \( {ABC} \) whether there are infinitely many primes \( p \) such that \( {2}^{p - 1} ≢ 1\left( {\;\operatorname{mod}\;{p}^{2}}\right) \) . (See also Exercise 9.2.15.)
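The congruence above is inexpensive to test with modular exponentiation. Primes with the opposite behavior, \( {2}^{p - 1} \equiv 1\left( {\;\operatorname{mod}\;{p}^{2}}\right) \), are called Wieferich primes, and only two are known. A minimal stdlib sketch (the search bound 4000 is an arbitrary choice):

```python
def is_prime(n):
    # trial division; adequate for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_wieferich(p):
    # p is a Wieferich prime when 2^(p-1) ≡ 1 (mod p^2)
    return pow(2, p - 1, p * p) == 1

wieferich = [p for p in range(2, 4000) if is_prime(p) and is_wieferich(p)]
print(wieferich)  # [1093, 3511]
```

Every other prime below the bound satisfies \( {2}^{p - 1} ≢ 1\left( {\;\operatorname{mod}\;{p}^{2}}\right) \), in line with the expectation that non-Wieferich primes are abundant.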
A natural number \( n \) is called squarefull (or powerful) if for every prime \( p \mid n \) we have \( {p}^{2} \mid n \) . In 1976 Erdős [Er] conjectured that we cannot have three consecutive squarefull natural numbers.
Exercise 1.3.5 Show that if the Erdős conjecture above is true, then there are infinitely many primes \( p \) such that \( {2}^{p - 1} ≢ 1\left( {\;\operatorname{mod}\;{p}^{2}}\right) \) .
Exercise 1.3.6 Assuming the \( {ABC} \) Conjecture, prove that there are only finitely many \( n \) such that \( n - 1, n, n + 1 \) are squarefull.
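For small ranges, Exercises 1.3.5 and 1.3.6 can be explored by brute force; a stdlib sketch (the bound 10000 is an arbitrary choice):

```python
def is_squarefull(n):
    # n is squarefull when every prime dividing n divides it at least twice
    d = 2
    while d * d <= n:
        if n % d == 0:
            if n % (d * d) != 0:
                return False
            while n % d == 0:
                n //= d
        d += 1
    return n == 1  # a leftover factor would be a prime appearing only once

pairs = [(n, n + 1) for n in range(2, 10000)
         if is_squarefull(n) and is_squarefull(n + 1)]
print(pairs)  # [(8, 9), (288, 289), (675, 676), (9800, 9801)]
```

No triple \( n - 1, n, n + 1 \) of squarefull numbers appears in this range, consistent with the Erdős conjecture.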
Exercise 1.3.7 Suppose that \( a \) and \( b \) are odd positive integers satisfying
\[
\operatorname{rad}\left( {{a}^{n} - 2}\right) = \operatorname{rad}\left( {{b}^{n} - 2}\right)
\]
for every natural number \( n \) . Assuming \( {ABC} \), prove that \( a = b \) . (This problem is due to H. Kisilevsky.)
## 1.4 Supplementary Problems
Exercise 1.4.1 Show that every proper ideal of \( \mathbb{Z} \) is of the form \( n\mathbb{Z} \) for some integer \( n \) .
Exercise 1.4.2 An ideal \( I \) is called prime if \( {ab} \in I \) implies \( a \in I \) or \( b \in I \) . Prove that every prime ideal of \( \mathbb{Z} \) is of the form \( p\mathbb{Z} \) for some prime integer \( p \) .
Exercise 1.4.3 Prove that if the number of prime Fermat numbers is finite, then the number of primes of the form \( {2}^{n} + 1 \) is finite.
Exercise 1.4.4 If \( n > 1 \) and \( {a}^{n} - 1 \) is prime, prove that \( a = 2 \) and \( n \) is prime.
Exercise 1.4.5 An integer is called perfect if it is the sum of its proper divisors. Show that if \( {2}^{n} - 1 \) is prime, then \( {2}^{n - 1}\left( {{2}^{n} - 1}\right) \) is perfect.
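The first few instances of Exercise 1.4.5 are easy to verify by brute force; a stdlib sketch (the exponent sample is an arbitrary choice):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def proper_divisor_sum(m):
    # sum of the divisors of m that are strictly less than m
    return sum(d for d in range(1, m) if m % d == 0)

perfects = [2**(n - 1) * (2**n - 1) for n in (2, 3, 5, 7)
            if is_prime(2**n - 1)]
print(perfects)  # [6, 28, 496, 8128]
assert all(proper_divisor_sum(m) == m for m in perfects)
```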
Exercise 1.4.6 Prove that if \( p \) is an odd prime, any prime divisor of \( {2}^{p} - 1 \) is of the form \( {2kp} + 1 \), with \( k \) a positive integer.
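Exercise 1.4.6 can be checked concretely: for \( p = {11} \) (an illustrative choice), \( {2}^{11} - 1 = {2047} = {23} \cdot {89} \), and both factors have the form \( {2kp} + 1 \). A stdlib sketch:

```python
p = 11
n = 2**p - 1  # 2047

# factor n by trial division
factors = []
d = 2
while d * d <= n:
    while n % d == 0:
        factors.append(d)
        n //= d
    d += 1
if n > 1:
    factors.append(n)

print(factors)  # [23, 89]; 23 = 2*1*11 + 1 and 89 = 2*4*11 + 1
assert all((q - 1) % (2 * p) == 0 for q in factors)
```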
Exercise 1.4.7 Show that there are no integer solutions to the equation \( {x}^{4} - {y}^{4} = \) \( 2{z}^{2} \) .
Exercise 1.4.8 Let \( p \) be an odd prime number. Show that the numerator of
\[
1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p - 1}
\]
is divisible by \( p \) .
Exercise 1.4.9 Let \( p \) be an odd prime number greater than 3. Show that the numerator of
\[
1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p - 1}
\]
is divisible by \( {p}^{2} \) .
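Exercises 1.4.8 and 1.4.9 can be confirmed for small primes with exact rational arithmetic; a stdlib sketch using `fractions`:

```python
from fractions import Fraction

def harmonic_numerator(p):
    # numerator, in lowest terms, of 1 + 1/2 + ... + 1/(p - 1)
    return sum(Fraction(1, k) for k in range(1, p)).numerator

print(harmonic_numerator(5))  # 25, divisible by 5^2
for p in (5, 7, 11, 13):
    assert harmonic_numerator(p) % (p * p) == 0
```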
Exercise 1.4.10 (Wilson’s Theorem) Show that \( n > 1 \) is prime if and only if \( n \) divides \( \left( {n - 1}\right) ! + 1 \) .
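Wilson's criterion is far too slow to be a practical primality test, but it is trivial to check on small values; a stdlib sketch:

```python
import math

def wilson_prime(n):
    # n > 1 is prime iff n divides (n - 1)! + 1
    return n > 1 and (math.factorial(n - 1) + 1) % n == 0

primes = [n for n in range(2, 30) if wilson_prime(n)]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```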
Exercise 1.4.11 For each \( n > 1 \), let \( Q \) be the product of all numbers \( a < n \) which are coprime to \( n \) . Show that \( Q \equiv \pm 1\left( {\;\operatorname{mod}\;n}\right) \) .
Exercise 1.4.12 In the previous exercise, show that \( Q \equiv 1\left( {\;\operatorname{mod}\;n}\right) \) whenever \( n \) is odd and has at least two prime factors.
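The two statements above can be sampled numerically; a stdlib sketch (the moduli tested are arbitrary choices):

```python
import math

def Q_mod(n):
    # product of all a < n coprime to n, reduced mod n
    q = 1
    for a in range(1, n):
        if math.gcd(a, n) == 1:
            q = q * a % n
    return q

print(Q_mod(7), Q_mod(9), Q_mod(15))  # 6 8 1
# 7 prime and 9 a prime power give Q ≡ -1; 15 = 3*5 is odd with two
# prime factors, giving Q ≡ +1.
assert all(Q_mod(n) in (1, n - 1) for n in range(3, 50))
```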
Exercise 1.4.13 Use Exercises 1.2.7 and 1.2.8 to show that there are infinitely many primes \( \equiv 1\left( {\;\operatorname{mod}\;{2}^{r}}\right) \) for any given \( r \) .
Exercise 1.4.14 Suppose \( p \) is an odd prime such that \( {2p} + 1 = q \) is also prime. Show that the equation
\[
{x}^{p} + 2{y}^{p} + 5{z}^{p} = 0
\]
has no solutions in integers.
Exercise 1.4.15 If \( x \) and \( y \) are coprime integers, show that if
\[
\left( {x + y}\right) \text{ and }\frac{{x}^{p} + {y}^{p}}{x + y}
\]
have a common prime factor, it must be \( p \) .
Exercise 1.4.16 (Sophie Germain’s Trick) Let \( p \) be a prime such that \( {2p} + \) \( 1 = q > 3 \) is also prime. Show that
\[
{x}^{p} + {y}^{p} + {z}^{p} = 0
\]
has no integral solutions with \( p \nmid {xyz} \) .
Exercise 1.4.17 Assuming \( {ABC} \), show that there are only finitely many consecutive cubefull numbers.
Exercise 1.4.18 Show that
\[
\mathop{\sum }\limits_{p}\frac{1}{p} = + \infty
\]
where the summation is over prime numbers.
Exercise 1.4.19 (Bertrand’s Postulate) (a) If \( {a}_{0} \geq {a}_{1} \geq {a}_{2} \geq \cdots \) is a decreasing sequence of real numbers tending to 0, show that
\[
\mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( -1\right) }^{n}{a}_{n} \leq {a}_{0} - {a}_{1} + {a}_{2}
\]
(b) Let \( T\left( x\right) = \mathop{\sum }\limits_{{n < x}}\psi \left( {x/n}\right) \), where \( \psi \left( x\right) \) is defined as in Exercise 1.1.25. Show that
\[
T\left( x\right) = x\log x - x + O\left( {\log x}\right) .
\]
(c) Show that
\[
T\left( x\right) - {2T}\left( \frac{x}{2}\right) = \mathop{\sum }\limits_{{n \leq x}}{\left( -1\right) }^{n - 1}\psi \left( \frac{x}{n}\right) = \left( {\log 2}\right) x + O\left( {\log x}\right) .
\]
Deduce that
\[
\psi \left( x\right) - \psi \left( \frac{x}{2}\right) \geq \frac{1}{3}\left( {\log 2}\right) x + O\left( {\log x}\right) .
\]
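For moderate \( x \) the main term in part (c) dominates the error term, so the final inequality can be sanity-checked numerically; a stdlib sketch (\( x = {1000} \) is an arbitrary choice):

```python
import math

def chebyshev_psi(x):
    # psi(x) = sum of log p over all prime powers p^k <= x
    total = 0.0
    for p in range(2, int(x) + 1):
        if all(p % d for d in range(2, math.isqrt(p) + 1)):
            pk = p
            while pk <= x:
                total += math.log(p)
                pk *= p
    return total

x = 1000.0
gap = chebyshev_psi(x) - chebyshev_psi(x / 2)
print(gap > (math.log(2) / 3) * x)  # True; the gap is roughly x/2 here
```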
## Chapter 2
## Euclidean Rings
## 2.1 Preliminaries
We can discuss the concept of divisibility for any commutative ring \( R \) with identity. Indeed, if \( a, b \in R \), we will write \( a \mid b \) ( \( a \) divides \( b \) ) if there exists some \( c \in R \) such that \( {ac} = b \) . Any divisor of 1 is called a unit. We will say that \( a \) and \( b \) are associates and write \( a \sim b \) if there exists a unit \( u \in R \) such that \( a = {bu} \) . It is easy to verify that \( \sim \) is an equivalence relation.
Further, if \( R \) is an integral domain and we have \( a, b \neq 0 \) with \( a \mid b \) and \( b \mid a \), then \( a \) and \( b \) must be associates, for then \( \exists c, d \in R \) such that \( {ac} = b \) and \( {bd} = a \), which implies that \( {bdc} = b \) . Since we are in an integral domain, \( {dc} = 1 \), and \( d, c \) are units.
We will say that \( a \in R \) is irreducible if for any factorization \( a = {bc} \), one of \( b \) or \( c \) is a unit.
Example 2.1.1 Let \( R \) be an integral domain. Suppose there is a map \( n : R \rightarrow \mathbb{N} \) such that:
(i) \( n\left( {ab}\right) = n\left( a\right) n\left( b\right) \forall a, b \in R \) ; and
(ii) \( n\left( a\right) = 1 \) if and only if \( a \) is a unit.
We call such a map a norm map, with \( n\left( a\right) \) the norm of \( a \) . Show that every element of \( R \) can be written as a product of irreducible elements.
Solution. Suppose \( b \) is an element of \( R \) . We proceed by induction on the norm of \( b \) . If \( b \) is irreducible, then we have nothing to prove, so assume that \( b \) is an element of \( R \) which is not irreducible. Then we can write \( b = {ac} \) where neither \( a \) nor \( c \) is a unit. By condition (i),
\[
n\left( b\right) = n\left( {ac}\right) = n\left( a\right) n\left( c\right)
\]
and since \( a, c \) are not units, then by condition (ii), \( n\left( a\right) < n\left( b\right) \) and \( n\left( c\right) < \) \( n\left( b\right) \) .
If \( a, c \) are irreducible, then we are finished. If not, their norms are smaller than the norm of \( b \), and so by induction we can write them as products of irreducibles, thus finding an irreducible decomposition of \( b \) .
Exercise 2.1.2 Let \( D \) be squarefree. Consider \( R = \mathbb{Z}\left\lbrack \sqrt{D}\right\rbrack \) . Show that every element of \( R \) can be written as a product of irreducible elements.
Exercise 2.1.3 Let \( R = \mathbb{Z}\left\lbrack \sqrt{-5}\right\rbrack \) . Show that \( 2,3,1 + \sqrt{-5} \), and \( 1 - \sqrt{-5} \) are irreducible in \( R \), and that they are not associates.
We now observe that \( 6 = 2 \cdot 3 = \left( {1 + \sqrt{-5}}\right) \left( {1 - \sqrt{-5}}\right) \), so that \( R \) does not have unique factorization into irreducibles.
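The failure of unique factorization here is visible at the level of norms. A short sketch, using the multiplicative norm \( N\left( {a + b\sqrt{-5}}\right) = {a}^{2} + 5{b}^{2} \):

```python
def norm(a, b):
    # N(a + b*sqrt(-5)) = a^2 + 5b^2; multiplicative on Z[sqrt(-5)]
    return a * a + 5 * b * b

# 6 = 2*3 = (1 + sqrt(-5))(1 - sqrt(-5)), and the norms match:
assert norm(2, 0) * norm(3, 0) == 36 == norm(1, 1) * norm(1, -1)

# No element has norm 2 or 3 (b = 0 forces a^2 in {2, 3}; |b| >= 1 gives
# norm >= 5), so 2, 3, 1 ± sqrt(-5) cannot factor into non-units.
small_norms = {norm(a, b) for a in range(-3, 4) for b in range(-2, 3)}
print(2 in small_norms, 3 in small_norms)  # False False
```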
We will say that \( R \), an integral domain, is a unique factorization domain if:
(i) every element of \( R \) can be written as a product of irreducibles; and
(ii) this factorization is essentially unique in the sense that if \( a = {\pi }_{1}\cdots {\pi }_{r} \) and \( a = {\tau }_{1}\cdots {\tau }_{s} \), then \( r = s \) and after a suitable permutation, \( {\pi }_{i} \sim {\tau }_{i} \) .
Exercise 2.1.4 Let \( R \) be a domain satisfying (i) above. Show that (ii) is equivalent to \( \left( {\mathrm{{ii}}}^{ \star }\right) \) : if \( \pi \) is irreducible and \( \pi \) divides \( {ab} \), then \( \pi \mid a \) or \( \pi \mid b \) .
An ideal \( I \subseteq R \) is called principal if it can be generated by a single element of \( R \) . A domain \( R \) is then called a principal ideal domain if every ideal of \( R \) is principal.
Exercise 2.1.5 Show that if \( \pi \) is an irreducible element of a principal ideal domain, then \( \left( \pi \right) \) is a maximal ideal (where \( \left( x\right) \) denotes the ideal generated by the element \( x \) ).
Theorem 2.1.6 If \( R \) is a principal ideal domain, then \( R \) is a unique factorization domain.
Proof. Let \( S \) be the set of elements of \( R \) that cannot be written as a product of irreducibles. If \( S \) is nonempty, take \( {a}_{1} \in S \) . Then \( {a}_{1} \) is not irreducible, so we can write \( {a}_{1} = {a}_{2}{b}_{2} \) where \( {a}_{2},{b}_{2} \) are not units. Then \( \left( {a}_{1}\right) \subsetneqq \left( {a}_{2}\right) \) and \( \left( {a}_{1}\right) \subsetne
|
Proposition 9.32 For each \( j = 1,2,\ldots, n \), define a domain \( \operatorname{Dom}\left( {P}_{j}\right) \subset \) \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) as follows:
\[
\operatorname{Dom}\left( {P}_{j}\right) = \left\{ {\psi \in {L}^{2}\left( {\mathbb{R}}^{n}\right) \mid {k}_{j}\widehat{\psi }\left( \mathbf{k}\right) \in {L}^{2}\left( {\mathbb{R}}^{n}\right) }\right\} ,
\]
where \( \widehat{\psi } \) is the Fourier transform of \( \psi \) . Define \( {P}_{j} \) on this domain by
\[
{P}_{j}\psi = {\mathcal{F}}^{-1}\left( {\hslash {k}_{j}\widehat{\psi }\left( \mathbf{k}\right) }\right)
\]
Then \( {P}_{j} \) is self-adjoint on \( \operatorname{Dom}\left( {P}_{j}\right) \) .
The domain \( \operatorname{Dom}\left( {P}_{j}\right) \) of \( {P}_{j} \) can also be described as the set of all \( \psi \in \) \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) such that \( \partial \psi /\partial {x}_{j} \), computed in the distribution sense, belongs to \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) . For any \( \psi \in \operatorname{Dom}\left( {P}_{j}\right) \), we have \( {P}_{j}\psi = - i\hslash \partial \psi /\partial {x}_{j} \), where \( \partial \psi /\partial {x}_{j} \) is computed in the distribution sense.
Saying that the distributional derivative of \( \psi \) belongs to \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) means (Proposition A.29) that there exists a (unique) \( \phi \) in \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) such that
\[
- \left\langle {\frac{\partial \chi }{\partial {x}_{j}},\psi }\right\rangle = \langle \chi ,\phi \rangle
\]
for all \( \chi \in {C}_{c}^{\infty }\left( {\mathbb{R}}^{n}\right) \) . If \( \psi \) is continuously differentiable, then the distributional derivative of \( \psi \) coincides with the ordinary derivative of \( \psi \) . Thus, if \( \psi \in {L}^{2}\left( {\mathbb{R}}^{n}\right) \) is continuously differentiable, then \( \psi \) belongs to \( \operatorname{Dom}\left( {P}_{j}\right) \) if and only if \( \partial \psi /\partial {x}_{j} \), computed in the pointwise sense, belongs to \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) , in which case \( {P}_{j}\psi = - i\hslash \partial \psi /\partial {x}_{j} \) . On the other hand, if \( \psi \in \operatorname{Dom}\left( {P}_{j}\right) \), it is not necessarily the case that \( \psi \) is continuously differentiable.
In the case \( n = 1 \), the domain of \( {P}_{1} \) certainly contains \( {C}_{c}^{\infty }\left( \mathbb{R}\right) \), since each element \( \psi \) of \( {C}_{c}^{\infty }\left( \mathbb{R}\right) \) is a Schwartz function (Definition A.15), so that \( \widehat{\psi } \) is also a Schwartz function, in which case \( k\widehat{\psi }\left( k\right) \) belongs to \( {L}^{2}\left( \mathbb{R}\right) \) . Now, as shown in Sect. 9.7, the operator \( - i\hslash \, d/{dx} \) is essentially self-adjoint on \( {C}_{c}^{\infty }\left( \mathbb{R}\right) \), which means that this operator has a unique self-adjoint extension. This self-adjoint extension must, therefore, agree with the operator \( {P}_{1} \) in the \( n = 1 \) case of Proposition 9.32.
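The conjugation \( {P}_{1} = \hslash {\mathcal{F}}^{-1}{M}_{k}\mathcal{F} \) has a faithful finite-dimensional analogue: the discrete Fourier transform is unitary, and multiplying by \( {ik} \) in frequency space differentiates smooth, rapidly decaying data to near machine precision. A minimal stdlib-only sketch with \( \hslash = 1 \) (grid size and test function are illustrative choices, not from the text):

```python
import cmath
import math

def dft(v):
    # plain O(N^2) discrete Fourier transform (stdlib only)
    N = len(v)
    return [sum(v[n] * cmath.exp(-2j * math.pi * m * n / N) for n in range(N))
            for m in range(N)]

def idft(V):
    N = len(V)
    return [sum(V[m] * cmath.exp(2j * math.pi * m * n / N) for m in range(N)) / N
            for n in range(N)]

def spectral_derivative(v, dx):
    # discrete analogue of psi -> F^{-1}(ik * F(psi))
    N = len(v)
    V = dft(v)
    # frequencies in DFT ordering: 0, 1, ..., N/2 - 1, -N/2, ..., -1
    k = [2 * math.pi * (m if m < N // 2 else m - N) / (N * dx)
         for m in range(N)]
    return idft([1j * k[m] * V[m] for m in range(N)])

N, L = 128, 20.0
dx = L / N
xs = [-L / 2 + n * dx for n in range(N)]
psi = [math.exp(-x * x / 2) for x in xs]           # Gaussian: smooth, decays fast
dpsi = spectral_derivative(psi, dx)
exact = [-x * math.exp(-x * x / 2) for x in xs]    # analytic derivative
err = max(abs(d.real - e) for d, e in zip(dpsi, exact))
print(err < 1e-6)
```

Replacing the Gaussian by data with a jump would destroy the accuracy, reflecting the fact that membership in \( \operatorname{Dom}\left( {P}_{1}\right) \) requires the distributional derivative to lie in \( {L}^{2} \).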
Lemma 9.33 Suppose \( \psi \in {L}^{2}\left( {\mathbb{R}}^{n}\right) \) has the property that \( \partial \psi /\partial {x}_{j} \), computed in the distribution sense, is equal to an \( {L}^{2} \) function \( \phi \) . Then \( \widehat{\phi }\left( \mathbf{k}\right) = \) \( i{k}_{j}\widehat{\psi }\left( \mathbf{k}\right) \), showing that \( {k}_{j}\widehat{\psi }\left( \mathbf{k}\right) \) belongs to \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) .
Conversely, suppose \( \psi \in {L}^{2}\left( {\mathbb{R}}^{n}\right) \) has the property that \( {k}_{j}\widehat{\psi }\left( \mathbf{k}\right) \) belongs to \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) . Then \( \partial \psi /\partial {x}_{j} \), computed in the distribution sense, is equal to the \( {L}^{2} \) function \( {\mathcal{F}}^{-1}\left( {i{k}_{j}\mathcal{F}\left( \psi \right) }\right) \) .
Proof. Suppose \( \partial \psi /\partial {x}_{j} \), computed in the distribution sense, is equal to the \( {L}^{2} \) function \( \phi \) (see Definition A.28). Then by the unitarity of the Fourier transform (Theorem A.19) and its behavior with respect to differentiation (Proposition A.17), we have
\[
\langle \chi ,\phi \rangle = - \left\langle {\frac{\partial \chi }{\partial {x}_{j}},\psi }\right\rangle
\]
\[
= - \left\langle {i{k}_{j}\mathcal{F}\left( \chi \right) ,\mathcal{F}\left( \psi \right) }\right\rangle
\]
for all \( \chi \in {C}_{c}^{\infty }\left( {\mathbb{R}}^{n}\right) \) . Thus,
\[
\langle \mathcal{F}\left( \chi \right) ,\mathcal{F}\left( \phi \right) \rangle = - \left\langle {i{k}_{j}\mathcal{F}\left( \chi \right) ,\mathcal{F}\left( \psi \right) }\right\rangle ,\;\chi \in {C}_{c}^{\infty }\left( {\mathbb{R}}^{n}\right) .
\]
Writing this equality out as an integral, we have
\[
{\int }_{{\mathbb{R}}^{n}}\overline{\widehat{\chi }\left( \mathbf{k}\right) }\widehat{\phi }\left( \mathbf{k}\right) d\mathbf{k} = - {\int }_{{\mathbb{R}}^{n}}\overline{i{k}_{j}\widehat{\chi }\left( \mathbf{k}\right) }\widehat{\psi }\left( \mathbf{k}\right) d\mathbf{k}
\]
\[
= {\int }_{{\mathbb{R}}^{n}}\overline{\widehat{\chi }\left( \mathbf{k}\right) }i{k}_{j}\widehat{\psi }\left( \mathbf{k}\right) d\mathbf{k}
\]
(9.17)
for all \( \chi \in {C}_{c}^{\infty }\left( {\mathbb{R}}^{n}\right) \) .
We now claim that because (9.17) holds for all \( \chi \in {C}_{c}^{\infty }\left( {\mathbb{R}}^{n}\right) \), we must have \( \widehat{\phi }\left( \mathbf{k}\right) = i{k}_{j}\widehat{\psi }\left( \mathbf{k}\right) \) for almost every \( \mathbf{k} \) . Using the Stone-Weierstrass theorem and Theorem A.10, it is not hard to show that the space of smooth functions with support in \( \left\lbrack {a, b}\right\rbrack \) is dense in \( {L}^{2}\left( \left\lbrack {a, b}\right\rbrack \right) \), for all \( a < b \in \mathbb{R} \) . Since both \( \widehat{\phi } \) and \( {k}_{j}\widehat{\psi }\left( \mathbf{k}\right) \) are locally square-integrable, we see that these two functions are equal almost everywhere on \( \left\lbrack {a, b}\right\rbrack \), for all \( a < b \in \mathbb{R} \), and hence equal almost everywhere on \( \mathbb{R} \) .
Since \( \widehat{\phi } \) is globally square-integrable, so is \( {k}_{j}\widehat{\psi }\left( \mathbf{k}\right) \) . Furthermore, by the injectivity of the \( {L}^{2} \) Fourier transform, we have
\[
\frac{\partial \psi }{\partial {x}_{j}} = \phi = {\mathcal{F}}^{-1}\left( {i{k}_{j}\mathcal{F}\left( \psi \right) }\right)
\]
as claimed.
The argument for the second part of the lemma is similar and left as an exercise (Exercise 12). ∎
Proof of Proposition 9.32. By Proposition 9.30, the operator of multiplication by \( {k}_{j} \) is an unbounded self-adjoint operator on \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \), with domain equal to the set of \( \phi \) for which \( {k}_{j}\phi \left( \mathbf{k}\right) \) belongs to \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) . It then follows from the unitarity of the Fourier transform that \( {P}_{j} = \hslash {\mathcal{F}}^{-1}{M}_{{k}_{j}}\mathcal{F} \) is self-adjoint on \( {\mathcal{F}}^{-1}\left( {\operatorname{Dom}\left( {M}_{{k}_{j}}\right) }\right) \), where \( {M}_{{k}_{j}} \) denotes multiplication by \( {k}_{j} \) .
The second characterization of \( \operatorname{Dom}\left( {P}_{j}\right) \) follows from Lemma 9.33. ∎
Proposition 9.34 Define a domain \( \operatorname{Dom}\left( \Delta \right) \) as follows:
\[
\operatorname{Dom}\left( \Delta \right) = \left\{ {\psi \in {L}^{2}\left( {\mathbb{R}}^{n}\right) \left| {\;{\left| \mathbf{k}\right| }^{2}\widehat{\psi }\left( \mathbf{k}\right) \in {L}^{2}\left( {\mathbb{R}}^{n}\right) }\right. }\right\} .
\]
Define \( \Delta \) on this domain by the expression
\[
{\Delta \psi } = - {\mathcal{F}}^{-1}\left( {{\left| \mathbf{k}\right| }^{2}\widehat{\psi }\left( \mathbf{k}\right) }\right)
\]
(9.18)
where \( \widehat{\psi } \) is the Fourier transform of \( \psi \) and \( {\mathcal{F}}^{-1} \) is the inverse Fourier transform. Then \( \Delta \) is self-adjoint on \( \operatorname{Dom}\left( \Delta \right) \) .
The domain \( \operatorname{Dom}\left( \Delta \right) \) may also be described as the set of all \( \psi \in {L}^{2}\left( {\mathbb{R}}^{n}\right) \) such that \( {\Delta \psi } \), computed in the distribution sense, belongs to \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) . If \( \psi \in \operatorname{Dom}\left( \Delta \right) \), then \( {\Delta \psi } \) as defined by (9.18) agrees with \( {\Delta \psi } \) computed in the distribution sense.
The proof of Proposition 9.34 is extremely similar to that of Proposition 9.32 and is omitted. Of course, the kinetic energy operator \( - {\hslash }^{2}\Delta /\left( {2m}\right) \) is also self-adjoint on the same domain as \( \Delta \) . It is easy to see from (9.18) and the unitarity of the Fourier transform that \( - {\hslash }^{2}\Delta /\left( {2m}\right) \) is non-negative, that is, that
\[
\left\langle {\psi , - \frac{{\hslash }^{2}}{2m}{\Delta \psi }}\right\rangle \geq 0
\]
for all \( \psi \in \operatorname{Dom}\left( \Delta \right) \) .
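As a quick numerical illustration of this non-negativity (a sketch only; the grid parameters and the test state are illustrative choices, not part of the text), one can realize \( \Delta \) as the Fourier multiplier \( -{\left| \mathbf{k}\right| }^{2} \) on a periodic grid, as in (9.18):

```python
import numpy as np

# Realize Delta as the Fourier multiplier -|k|^2 on a periodic grid,
# as in (9.18).  Grid size, box length, and the test state are
# illustrative choices.
n, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # discrete frequency variable

psi = np.exp(-x**2) * (1 + 0.5 * np.sin(3 * x))   # smooth, rapidly decaying

lap_psi = np.fft.ifft(-(k**2) * np.fft.fft(psi)).real   # Delta psi via (9.18)

# <psi, -Delta psi> equals a sum of |k|^2 |psi_hat(k)|^2 >= 0 (Parseval).
inner = -np.sum(psi * lap_psi) * dx
print(inner >= 0.0)
```

By the discrete Parseval identity, the inner product reduces to a sum of the manifestly non-negative terms \( {\left| \mathbf{k}\right| }^{2}{\left| \widehat{\psi }\left( \mathbf{k}\right) \right| }^{2} \), mirroring the argument via unitarity of \( \mathcal{F} \).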
Using the same reasoning as in Sects. 9.6 and 9.7, it is not hard to show that the operators \( {P}_{j} \) and \( \Delta \) are essentially self-adjoint on \( {C}_{c}^{\infty }\left( {\mathbb{R}}^{n}\right) \) . See Exercise 16.
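In the same spirit, the Fourier-multiplier definition of \( {P}_{j} \) can be compared against the differential expression \( -i\hslash \,\mathrm{d}/\mathrm{d}x \) on a smooth decaying state (a numerical sketch in one dimension; \( \hslash = 1 \) and all grid parameters are illustrative assumptions):

```python
import numpy as np

# Compare P = hbar F^{-1} M_k F with the differential expression
# -i hbar d/dx on a Gaussian wave packet (hbar = 1 and all grid
# parameters are illustrative assumptions).
hbar = 1.0
n, L = 512, 30.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

psi = np.exp(-x**2 + 2j * x)                      # smooth, decaying state

P_psi = hbar * np.fft.ifft(k * np.fft.fft(psi))   # multiplier definition
dpsi = np.gradient(psi, dx)                       # numerical d/dx

err = np.max(np.abs(P_psi - (-1j * hbar * dpsi)))
print(err)   # small: the two definitions agree on such states
```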
Care must be exercised in applying Proposition 9.34. Although the function
\[
\psi \left( \mathbf{x}\right) \mathrel{\text{:=}} \frac{1}{\left| \mathbf{x}\right| }
\]
is harmonic on \( {\mathbb{R}}^{3} \smallsetminus \{ 0\} \), the Laplacian over \( {\mathbb{R}}^{3} \) of \( \psi \) in the distribution sense is not zero (Exercise 13). (It can be shown, by carefully analyzing the calculation in the proof of Proposition 9.35, that \( {\Delta \psi } \) is a nonzero multiple of a \( \delta \) -function.) This example shows that if a function \( \psi \) has a singularity, calculating the Laplacian of \( \psi \) awa
|
Proposition 9.32 For each \( j = 1,2,\ldots, n \), define a domain \( \operatorname{Dom}\left( {P}_{j}\right) \subset \) \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) as follows:
\[
\operatorname{Dom}\left( {P}_{j}\right) = \left\{ {\psi \in {L}^{2}\left( {\mathbb{R}}^{n}\right) \mid {k}_{j}\widehat{\psi }\left( \mathbf{k}\right) \in {L}^{2}\left( {\mathbb{R}}^{n}\right) }\right\} ,
\]
where \( \widehat{\psi } \) is the Fourier transform of \( \psi \) . Define \( {P}_{j} \) on this domain by
\[
{P}_{j}\psi = {\mathcal{F}}^{-1}\left( {\hslash {k}_{j}\widehat{\psi }\left( \mathbf{k}\right) }\right)
\]
Then \( {P}_{j} \) is self-adjoint on \( \operatorname{Dom}\left( {P}_{j}\right) \).
|
Proof of Proposition 9.32. By Proposition 9.30, the operator of multiplication by \( {k}_{j} \) is an unbounded self-adjoint operator on \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \), with domain equal to the set of \( \phi \) for which \( {k}_{j}\phi \left( \mathbf{k}\right) \) belongs to \( {L}^{2}\left( {\mathbb{R}}^{n}\right) \) . It then follows from the unitarity of the Fourier transform that \( {P}_{j} = \hslash {\mathcal{F}}^{-1}{M}_{{k}_{j}}\mathcal{F} \) is self-adjoint on \( {\mathcal{F}}^{-1}\left( {\operatorname{Dom}\left( {M}_{{k}_{j}}\right) }\right) \), where \( {M}_{{k}_{j}} \) denotes multiplication by \( {k}_{j} \) .
The second characterization of \( \operatorname{Dom}\left( {P}_{j}\right) \) follows from Lemma 9.33. ∎
|
Proposition 7.4 Suppose \( \mathbf{A} \) and \( {\mathbf{A}}^{\prime } \) are two additive categories, and suppose \( \mathbf{A} \) contains a biproduct of any two objects. Suppose \( F : \mathbf{A} \rightarrow {\mathbf{A}}^{\prime } \) is a covariant functor. Then the following are equivalent.
i) \( F \) is additive, that is, \( F\left( {f + g}\right) = F\left( f\right) + F\left( g\right) \) for any \( f, g \in \operatorname{Hom}(A \) , \( B);A, B \in \mathbf{A} \) .
ii) \( F\left( {{A}_{1} \oplus {A}_{2}}\right) \approx F\left( {A}_{1}\right) \oplus F\left( {A}_{2}\right) \) for all \( {A}_{1},{A}_{2} \in \mathbf{A} \) .
iii) \( F\left( {A \oplus A}\right) \approx F\left( A\right) \oplus F\left( A\right) \) for all \( A \in \mathbf{A} \) .
Remark: As with Proposition 6.1, tacitly " \( F\left( {A \oplus B}\right) \approx F\left( A\right) \oplus F\left( B\right) \) " means that if
\[
A\overset{\varphi }{ \rightarrow }A \oplus B\overset{\psi }{ \leftarrow }B
\]
defines \( A \oplus B \) as a coproduct, then
\[
F\left( A\right) \overset{F\left( \varphi \right) }{ \rightarrow }F\left( {A \oplus B}\right) \overset{F\left( \psi \right) }{ \leftarrow }F\left( B\right)
\]
defines \( F\left( {A \oplus B}\right) \) as a coproduct.
Proof: (i) \( \Rightarrow \) (ii) \( \Rightarrow \) (iii) \( \Rightarrow \) (i) works the same way here as it did in Proposition 6.1. The technical point-that \( F\left( {\pi }_{1}\right) \) and \( F\left( {\pi }_{2}\right) \) are the \( {\pi }_{1}^{\prime } \) and \( {\pi }_{2}^{\prime } \) for which \( \left( {F\left( {A \oplus A}\right) ;F\left( {\varphi }_{1}\right), F\left( {\varphi }_{2}\right) ,{\pi }_{1}^{\prime },{\pi }_{2}^{\prime }}\right) \) is a biproduct in \( {\mathbf{A}}^{\prime } \) -is even the same. To see this, we must establish that \( F\left( {\pi }_{1}\right) \) and \( F\left( {\pi }_{2}\right) \) are fillers for the appropriate diagrams in the proof of Proposition 7.2. That is, we must check that \( {i}_{F\left( A\right) } = F\left( {\pi }_{1}\right) F\left( {\varphi }_{1}\right) = F\left( {\pi }_{2}\right) F\left( {\varphi }_{2}\right) \), while \( 0 = \) \( F\left( {\pi }_{1}\right) F\left( {\varphi }_{2}\right) = F\left( {\pi }_{2}\right) F\left( {\varphi }_{1}\right) \) . But \( {i}_{F\left( A\right) } = F\left( {i}_{A}\right) = F\left( {{\pi }_{1}{\varphi }_{1}}\right) = F\left( {\pi }_{1}\right) F\left( {\varphi }_{1}\right) \) ; similarly, \( {i}_{F\left( A\right) } = F\left( {\pi }_{2}\right) F\left( {\varphi }_{2}\right) \) . Also, \( F\left( {\pi }_{1}\right) F\left( {\varphi }_{2}\right) = F\left( {{\pi }_{1}{\varphi }_{2}}\right) = F\left( 0\right) \), and similarly \( F\left( {\pi }_{2}\right) F\left( {\varphi }_{1}\right) = F\left( 0\right) \), so it suffices to show that \( F\left( 0\right) = 0 \), that is, \( F \) (zero morphism) \( = \) zero morphism. Since the zero morphism is precisely the morphism which factors through "the" zero object (both in \( \mathbf{A} \) and \( {\mathbf{A}}^{\prime } \) ), it suffices to show that \( F \) (zero object) \( = \) zero object.
Let \( O \) denote a zero object of \( \mathbf{A} \) . Note that \( \left( {O;i, i, i, i}\right) \) is a biproduct of \( O \) with \( O \) in \( \mathbf{A} \), where \( i = {i}_{O} \) is the only element of \( \operatorname{Hom}\left( {O, O}\right) \) . Hence
\[
O\overset{i}{ \rightarrow }O\overset{i}{ \leftarrow }O
\]
is a coproduct in \( \mathbf{A} \), so
\[
F\left( O\right) \overset{F\left( i\right) }{ \rightarrow }F\left( O\right) \overset{F\left( i\right) }{ \leftarrow }F\left( O\right)
\]
is a coproduct in \( {\mathbf{A}}^{\prime } \) . By Proposition 7.2, there exist unique \( {\pi }_{1},{\pi }_{2} \in \) \( \operatorname{Hom}\left( {F\left( O\right), F\left( O\right) }\right) \) such that \( \left( {F\left( O\right) ;F\left( i\right), F\left( i\right) ,{\pi }_{1},{\pi }_{2}}\right) \) is a biproduct. Letting \( F\left( i\right) \) play the role of \( {\varphi }_{1},{i}_{F\left( O\right) } = {\pi }_{1}F\left( i\right) \) . Letting \( F\left( i\right) \) play the role of \( {\varphi }_{2},{\pi }_{1}F\left( i\right) = 0 \) . Hence \( {i}_{F\left( O\right) } = 0 \), so \( F\left( O\right) \) is a zero object. (See Exercise 4.)
Note that, in the above, \( {\mathbf{A}}^{\prime } \) did not have to contain biproducts, but it did have to contain a zero object.
In the next section we shall describe two more constructions whose presence (with biproducts) specify a pre-Abelian category.
## 7.3 Kernels and Cokernels
We are now very close to what we need for homological algebra in the abstract, at least for the domain category. We start with an additive category A. Suppose \( A, B \in \mathbf{A} \), and \( f \in \operatorname{Hom}\left( {A, B}\right) \) . What we need is some way of defining categorically the objects we are used to having around for modules. They are the kernel, the image, and the cokernel. The image will be a bit of a problem at this stage, so we shall stick to the kernel and cokernel for now.
A kernel of \( f \) is defined in category-theoretic terms as follows. A kernel consists of an object \( K \in \mathbf{A} \) and a morphism \( j \in \operatorname{Hom}\left( {K, A}\right) \) such that \( {fj} = 0 \) and, whenever \( C \in \mathbf{A} \) and \( g \in \operatorname{Hom}\left( {C, A}\right) \) satisfies \( {fg} = 0 \) , there exists a unique filler \( \bar{g} \)

forming a commutative diagram. Note that since the definition is categorical, a kernel is only unique up to isomorphism (see below). This is a general phenomenon that one must simply get used to. Abusing the terminology a bit, we shall often say that \( j \) is a kernel for \( f \) .
A cokernel of \( f \) is defined similarly, with arrows reversed. A cokernel consists of an object \( D \in \mathbf{A} \) and a morphism \( p \in \operatorname{Hom}\left( {B, D}\right) \) such that \( {pf} = 0 \) and whenever \( C \in \mathbf{A} \) and \( g \in \operatorname{Hom}\left( {B, C}\right) \) satisfies \( {gf} = 0 \), there exists a unique filler \( \bar{g} \)

forming a commutative diagram. If \( \mathbf{A} = {}_{R}\mathbf{M} \), some \( R \), then \( D = B/f\left( A\right) \) works.
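For \( \mathbf{A} \) the category of finite-dimensional vector spaces (a special case of \( {}_{R}\mathbf{M} \)), kernels and cokernels can be computed concretely. The sketch below (the matrix is an arbitrary illustrative choice) finds \( \dim K \) and \( \dim B/f\left( A\right) \) by rank-nullity:

```python
import numpy as np

# Kernel and cokernel of a linear map f : Q^3 -> Q^3 (a stand-in for a
# module homomorphism); the matrix is an arbitrary illustrative choice.
f = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(f)
dim_ker = f.shape[1] - rank        # dim K, by rank-nullity
dim_coker = f.shape[0] - rank      # dim B / f(A)

# j : K -> A from an orthonormal basis of the null space; check f j = 0.
_, s, vt = np.linalg.svd(f)
j = vt[rank:].T                    # columns span ker f
print(dim_ker, dim_coker, np.max(np.abs(f @ j)))
```

The column space of `j` plays the role of the kernel object \( K \), and \( {fj} = 0 \) is the defining identity of the kernel diagram.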
Note that a cokernel in \( \mathbf{A} \) is a kernel in \( {\mathbf{A}}^{\text{op }} \) . Also, one can use an "overcategory" to define a kernel: Given \( A, B \), and \( f \in \operatorname{Hom}\left( {A, B}\right) \), consider the category of all pairs \( \left( {C, g}\right) \) such that \( C \in \mathbf{A}, g \in \operatorname{Hom}\left( {C, A}\right) \), and \( {fg} = 0 \) . A morphism from \( \left( {C, g}\right) \) to \( \left( {D, h}\right) \) is a \( \varphi \in \operatorname{Hom}\left( {C, D}\right) \) such that \( g = {h\varphi } \), that is,

commutes. Then a kernel is a final object in this category. (See Exercise 6.) In particular, any two kernels are isomorphic in this category, hence are isomorphic in A. Similar considerations, with arrows reversed, apply to cokernels.
Suppose one has a commutative square

If kernels are taken, one has a diagram

in which \( {f}^{\prime }\left( {\varphi j}\right) = {f}^{\prime }{\varphi j} = {\psi fj} = 0 \) . Hence \( {\varphi j} \) factors through \( {K}^{\prime } \) :

Similarly, attaching cokernels produces a commutative rectangle

Given later developments, one can make a functorial interpretation of all this; this will be done in Section 7.5.
We now make a definition: An additive category A is pre-Abelian if it contains a biproduct of any two objects, and if any morphism has both a kernel and a cokernel. It turns out that to make a start on abstract homological algebra, all we need for our domain category is a pre-Abelian category with enough projectives and/or enough injectives. Further conditions then force, for example, \( {\mathrm{{Ext}}}^{0} \approx \) Hom. (It should probably be noted that there are ways of manufacturing Ext without using projectives or injectives. They are less than transparent and are inappropriate for this book. See, for example, Hilton [33, Chapter 4].) When we abstract the range category, we shall need more.
A couple of quick remarks are in order. First of all, by the usual subtraction trickery, if \( f \in \operatorname{Hom}\left( {A, B}\right) \), then \( f \) is an epimorphism \( \Leftrightarrow (\forall C \in \) \( \mathbf{A}, g \in \operatorname{Hom}\left( {B, C}\right) : {gf} = 0 \Rightarrow g = 0) \), and \( f \) is a monomorphism \( \Leftrightarrow \left( {\forall C \in \mathbf{A}, g \in \operatorname{Hom}\left( {C, A}\right) : {fg} = 0 \Rightarrow g = 0}\right) \) . That is,"right cancellable" means "right nonzero divisor", and ditto on the left. More subtle is the fact that kernels are monic. Suppose \( f \in \operatorname{Hom}\left( {A, B}\right) \), with kernel \( j : K \rightarrow A \) . Then \( j \) is monic. To see this, suppose \( g \in \operatorname{Hom}\left( {C, K}\right) \) and \( {jg} = 0 \) . Then \( g \) is a filler for

But 0 is also a filler, so \( g = 0 \) by uniqueness. Similarly, if \( p : B \rightarrow D \) is a cokernel of \( f \), then \( p \) is epic. One of the simplest w
|
Proposition 7.4 Suppose \( \mathbf{A} \) and \( {\mathbf{A}}^{\prime } \) are two additive categories, and suppose \( \mathbf{A} \) contains a biproduct of any two objects. Suppose \( F : \mathbf{A} \rightarrow {\mathbf{A}}^{\prime } \) is a covariant functor. Then the following are equivalent.
i) \( F \) is additive, that is, \( F\left( {f + g}\right) = F\left( f\right) + F\left( g\right) \) for any \( f, g \in \operatorname{Hom}(A \) , \( B);A, B \in \mathbf{A} \) .
ii) \( F\left( {{A}_{1} \oplus {A}_{2}}\right) \approx F\left( {A}_{1}\right) \oplus F\left( {A}_{2}\right) \) for all \( {A}_{1},{A}_{2} \in \mathbf{A} \) .
iii) \( F\left( {A \oplus A}\right) \approx F\left( A\right) \oplus F\left( A\right) \) for all \( A \in \mathbf{A} \) .
|
Proof: (i) \( \Rightarrow \) (ii) \( \Rightarrow \) (iii) \( \Rightarrow \) (i) works the same way here as it did in Proposition 6.1. The technical point-that \( F\left( {\pi }_{1}\right) \) and \( F\left( {\pi }_{2}\right) \) are the \( {\pi }_{1}^{\prime } \) and \( {\pi }_{2}^{\prime } \) for which \( \left( {F\left( {A \oplus A}\right) ;F\left( {\varphi }_{1}\right), F\left( {\varphi }_{2}\right) ,{\pi }_{1}^{\prime },{\pi }_{2}^{\prime }}\right) \) is a biproduct in \( {\mathbf{A}}^{\prime } \) -is even the same. To see this, we must establish that \( F\left( {\pi }_{1}\right) \) and \( F\left( {\pi }_{2}\right) \) are fillers for the appropriate diagrams in the proof of Proposition 7.2. That is, we must check that \( {i}_{F\left( A\right) } = F\left( {\pi }_{1}\right) F\left( {\varphi }_{1}\right) = F\left( {\pi }_{2}\right) F\left( {\varphi }_{2}\right) \), while \( 0 = \) \( F\left( {\pi }_{1}\right) F\left( {\varphi }_{2}\right) = F\left( {\pi }_{2}\right) F\left( {\varphi }_{1}\right) \) . But \( {i}_{F\left( A\right) } = F\left( {i}_{A}\right) = F\left( {{\pi }_{1}{\varphi }_{1}}\right) = F\left( {\pi }_{1}\right) F\left( {\varphi }_{1}\right) \) ; similarly, \( {i}_{F\left( A\right) } = F\left( {\pi }_{2}\right) F\left( {\varphi }_{2}\right) \) . Also, \( F\left( {\pi }_{1}\right) F\left( {\varphi }_{2}\right) = F\left( {{\pi }_{1}{\varphi }_{2}}\
|
Theorem 12.6.2 Let \( \mathcal{Q} \) be the incidence structure whose points are the vectors of \( {C}^{ * } \), and whose lines are triples of mutually orthogonal vectors. Then either \( \mathcal{Q} \) has no lines, or \( \mathcal{Q} \) is a generalized quadrangle, possibly degenerate, with lines of size three.
Proof. A generalized quadrangle has the property that given any line \( \ell \) and a point \( P \) off that line, there is a unique point on \( \ell \) collinear with \( P \) . We show that \( \mathcal{Q} \) satisfies this axiom.
Suppose that \( x, y \), and \( a - b - x - y \) are the three points of a line of \( \mathcal{Q} \), and let \( z \) be an arbitrary vector in \( {C}^{ * } \), not equal to any of these three. Then
\[
\langle z, x\rangle + \langle z, y\rangle + \langle z, a - b - x - y\rangle = \langle z, a - b\rangle = 2.
\]
Since each of the three terms is either 0 or 1 , it follows that there is a unique term equal to 0, and hence \( z \) is collinear with exactly one of the three points of the line.
Therefore, \( \mathcal{Q} \) is a generalized quadrangle with lines of size three.
From our earlier work on generalized quadrangles with lines of size three, we get the following result.
Corollary 12.6.3 If \( \mathcal{Q} \) is the incidence structure arising from a star-closed indecomposable set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \), then one of the following holds:
(a) \( \mathcal{Q} \) has no lines;
(b) \( \mathcal{Q} \) is a set of concurrent lines of size three;
(c) \( \mathcal{Q} \) is the unique generalized quadrangle of order \( \left( {2,1}\right) ,\left( {2,2}\right) \), or \( \left( {2,4}\right) \) .
In the next section we will describe families of lines that realize each of the five cases enumerated above.
## 12.7 Root Systems
In this section we present five root systems, known as \( {D}_{n},{A}_{n},{E}_{8},{E}_{7} \) , and \( {E}_{6} \) . We will show that the corresponding sets of lines are indecomposable and star-closed, and that they realize the five possibilities of Corollary 12.6.3.
We have already defined \( {D}_{n} \), and shown that the corresponding set of lines is star-closed and indecomposable. We leave it as an exercise to confirm that the corresponding incidence structure \( \mathcal{Q} \) is a set of concurrent lines of size three.
The next root system is \( {A}_{n} \), which consists of all vectors of the form \( {e}_{i} - {e}_{j} \) with \( i \neq j \), where \( {e}_{i} \) and \( {e}_{j} \) run over the standard basis of \( {\mathbb{R}}^{n + 1} \) . This is a subset of \( {D}_{n + 1} \) ; in fact, it is the set of all vectors in \( {D}_{n + 1} \) orthogonal to \( \mathbf{1} \) . We leave the proof of the next result as an exercise.
Lemma 12.7.1 The set of lines corresponding to the root system \( {A}_{n} \) is star-closed and indecomposable. The incidence structure \( \mathcal{Q} \) has no lines. \( ▱ \)
Our next root system is called \( {E}_{8} \), and lives in \( {\mathbb{R}}^{8} \) . It contains the vectors of \( {D}_{8} \), together with the 128 vectors \( x \) such that \( {x}_{i} \in \left\{ {-\frac{1}{2},\frac{1}{2}}\right\} \) for \( i = 1,\ldots ,8 \) and the number of positive entries is even.
Theorem 12.7.2 The root system \( {E}_{8} \) contains exactly 240 vectors. The lines spanned by these vectors form an indecomposable star-closed set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \) in \( {\mathbb{R}}^{8} \) . The generalized quadrangle \( \mathcal{Q} \) associated with this set of lines is the unique generalized quadrangle of order \( \left( {2,4}\right) \) .
Proof. The count is immediate: \( {D}_{8} \) contains 112 vectors, and there are \( {2}^{7} = {128} \) further half-integer vectors, giving \( {112} + {128} = {240} \) in all.
First we show that the set of lines spanned by \( {E}_{8} \) is indecomposable. Since the set of lines spanned by \( {D}_{8} \) is indecomposable, any decomposition will have all the lines spanned by \( {D}_{8} \) in one part. Any vector in \( {E}_{8} \smallsetminus {D}_{8} \) that is orthogonal to \( {e}_{1} + {e}_{2} \) has its first two entries of opposite sign, while any vector orthogonal to \( {e}_{1} - {e}_{2} \) has its first two entries of the same sign. Therefore, there are no vectors in \( {E}_{8} \smallsetminus {D}_{8} \) orthogonal to all the vectors in \( {D}_{8} \) .
To show that \( {E}_{8} \) is star-closed, we consider pairs of vectors \( x, y \) that have inner product -1, and show that in all cases \( - x - y \in {E}_{8} \) . Observe that permuting coordinates and reversing the sign of an even number of entries are operations that fix \( {E}_{8} \) and preserve inner products, and so we can freely use these to simplify our calculations. Suppose firstly that \( x \) and \( y \) are both in \( {E}_{8} \smallsetminus {D}_{8} \) . Then we can assume that \( x = \frac{1}{2}\mathbf{1} \), and so \( y \) has six entries equal to \( - \frac{1}{2} \) and two equal to \( \frac{1}{2} \) . Therefore, \( - x - y \in {D}_{8} \), and so this star can be closed. Secondly, suppose that \( x \) is in \( {E}_{8} \smallsetminus {D}_{8} \) and that \( y \in {D}_{8} \) . Once again we can assume that \( x = \frac{1}{2}\mathbf{1} \), and therefore \( y \) has two entries equal to -1 . Then \( - x - y \) has two entries of \( \frac{1}{2} \) and six equal to \( - \frac{1}{2} \) , and so lies in \( {E}_{8} \smallsetminus {D}_{8} \) . Finally, if \( x \) and \( y \) are both in \( {D}_{8} \), then we appeal to the fact that \( {D}_{8} \) is star-closed.
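The count of 240 and the star-closure property can be verified by brute-force enumeration (a sketch; the half-integer convention follows the definition of \( {E}_{8} \) above):

```python
import numpy as np
from itertools import combinations, product

# Enumerate E8 = D8 together with the (+-1/2)-vectors having an even
# number of positive entries, then verify |E8| = 240 and closure of
# every star: <x, y> = -1 implies -x - y in E8.
D8 = []
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = np.zeros(8)
        v[i], v[j] = si, sj
        D8.append(v)

half = [np.array(s) / 2 for s in product((1, -1), repeat=8)
        if s.count(1) % 2 == 0]
E8 = D8 + half

def key(v):
    # entries are multiples of 1/2, so 2v is an integer vector
    return tuple(np.round(2 * v).astype(int))

E8set = {key(v) for v in E8}
closed = all(key(-x - y) in E8set
             for x, y in combinations(E8, 2)
             if abs(x @ y + 1) < 1e-9)
print(len(E8), closed)
```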
By Theorem 12.4.2 we can select any pair of nonorthogonal lines as \( \langle a\rangle \) and \( \langle b\rangle \), so choose \( a = {e}_{1} + {e}_{2} \) and \( b = - {e}_{1} + {e}_{3} \), which implies that \( c = - {e}_{2} - {e}_{3} \) . We count the number of lines orthogonal to this star. A vector \( x \) is orthogonal to this star if and only if its first three coordinates are \( {x}_{1} = \alpha ,{x}_{2} = - \alpha ,{x}_{3} = \alpha \) . Therefore, if \( x \in {D}_{8} \), we have \( \alpha = 0 \), and there are thus \( 4\left( \begin{array}{l} 5 \\ 2 \end{array}\right) = {40} \) such vectors. If \( x \in {E}_{8} \smallsetminus {D}_{8} \), then the remaining five coordinates have either 1,3 , or 5 negative entries, and so there are \( \left( \begin{array}{l} 5 \\ 1 \end{array}\right) + \left( \begin{array}{l} 5 \\ 3 \end{array}\right) + \left( \begin{array}{l} 5 \\ 5 \end{array}\right) = {16} \) such vectors. Because \( \alpha \) can be \( \pm \frac{1}{2} \), this yields 32 vectors. Therefore, there are 72 vectors in \( {E}_{8} \) orthogonal to the star, or 36 lines orthogonal to the star. Since there are 120 lines altogether, this means that the star together with \( A, B \), and \( C \) contain 84 lines. Since \( A, B \), and \( C \) have the same size, this shows that they each contain 27 lines. Thus \( \mathcal{Q} \) is a generalized quadrangle with 27 points, and so is the unique generalized quadrangle of order \( \left( {2,4}\right) \) .
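The count of 72 vectors orthogonal to the star can likewise be confirmed by enumeration (same construction of \( {E}_{8} \) as above; the choice \( a = {e}_{1} + {e}_{2} \), \( b = -{e}_{1} + {e}_{3} \) follows the proof):

```python
import numpy as np
from itertools import combinations, product

# Count the vectors of E8 orthogonal to the star a, b, c = -a - b used
# in the proof (a = e1 + e2, b = -e1 + e3); the text predicts 40 + 32 = 72.
roots = []
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = np.zeros(8)
        v[i], v[j] = si, sj
        roots.append(v)
roots += [np.array(s) / 2 for s in product((1, -1), repeat=8)
          if s.count(1) % 2 == 0]

a = np.zeros(8); a[0] = a[1] = 1
b = np.zeros(8); b[0], b[2] = -1, 1

# orthogonal to a and b forces orthogonality to c = -a - b as well
orth = [v for v in roots if abs(v @ a) < 1e-9 and abs(v @ b) < 1e-9]
print(len(orth), len(orth) // 2)   # vectors and lines orthogonal to the star
```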
We define two further root systems. First \( {E}_{7} \) is the set of vectors in \( {E}_{8} \) orthogonal to a fixed vector, while \( {E}_{6} \) is the subset of \( {E}_{8} \) formed by the set of vectors orthogonal to a fixed pair of vectors with inner product \( \pm 1 \) . The next result outlines the properties of these root systems; the proofs are similar to those for \( {E}_{8} \), and so are left as exercises.
Lemma 12.7.3 The root systems \( {E}_{6} \) and \( {E}_{7} \) contain 72 and 126 vectors respectively. The sets of lines spanned by the vectors of \( {E}_{6} \) and \( {E}_{7} \) are star-closed and indecomposable. The associated generalized quadrangles are the unique generalized quadrangles of order \( \left( {2,1}\right) \) and \( \left( {2,2}\right) \), respectively.
Theorem 12.7.4 An indecomposable star-closed set of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \) is the set of lines spanned by the vectors in one of the root systems \( {E}_{6} \) , \( {E}_{7},{E}_{8},{A}_{n} \), or \( {D}_{n} \) (for some \( n \) ).
Proof. The Gram matrix of the vectors in \( {C}^{ * } \) determines the Gram matrix of the entire collection of lines in \( \mathcal{L} \), which in turn determines \( \mathcal{L} \) up to an orthogonal transformation. Since these five root systems give the only five possible Gram matrices for the vectors in \( {C}^{ * } \), there are no further indecomposable star-closed sets of lines at \( {60}^{ \circ } \) and \( {90}^{ \circ } \) .
We summarize some of the properties of our five root systems in the following table.
<table><thead><tr><th>Name</th><th>Size</th><th>\( \left| {C}^{ * }\right| \)</th></tr></thead><tr><td>\( {D}_{n} \)</td><td>\( n\left( {{2n} - 2}\right) \)</td><td>\( {2n} - 5 \)</td></tr><tr><td>\( {A}_{n} \)</td><td>\( n\left( {n + 1}\right) \)</td><td>\( n - 2 \)</td></tr><tr><td>\( {E}_{8} \)</td><td>240</td><td>27</td></tr><tr><td>\( {E}_{7} \)</td><td>126</td><td>15</td></tr><tr><td>\( {E}_{6} \)</td><td>72</td><td>9</td></tr></table>
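The Size column for \( {D}_{n} \) and \( {A}_{n} \) can be spot-checked by direct enumeration for small \( n \) (a sketch):

```python
import numpy as np
from itertools import combinations, product

# Spot-check the Size column: |D_n| = n(2n - 2) and |A_n| = n(n + 1).
def Dn(n):
    out = []
    for i, j in combinations(range(n), 2):
        for si, sj in product((1, -1), repeat=2):
            v = np.zeros(n)
            v[i], v[j] = si, sj
            out.append(v)
    return out

def An(n):   # vectors e_i - e_j, i != j, inside R^(n+1)
    return [np.eye(n + 1)[i] - np.eye(n + 1)[j]
            for i in range(n + 1) for j in range(n + 1) if i != j]

for n in (3, 4, 5):
    print(n, len(Dn(n)) == n * (2 * n - 2), len(An(n)) == n * (n + 1))
```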
## 12.8 Consequences
We begin by translating Theorem 12.7.4 into graph theory, then determine some of its consequences.
Corollary 12.8.1 Let \( X \) be a connected graph with smallest eigenvalue at least -2, and let \( A \) be its adjacency matrix. Then either \( X \) is a generalized line graph, or \( A + {2I} \) is the Gram matrix of a set of vectors in \( {E}_{8} \) .
Proof. Let \( S \) be a set of vectors with Gram matrix \( {2I} + A \) . By Theorem 12.7.4, and since \( {A}_{n} \subseteq {D}_{n + 1} \) and \( {E}_{6},{E}_{7} \subseteq {E}_{8} \), the star-closure of \( S \) is contained in the set of lines spanned by the vectors in \( {D}_{n} \) (for some \( n \) ), in which case \( X \) is a generalized line graph, or in \( {E}_{8} \) .
This implies that a connected graph with minimum eigenvalue at least -2 and more than 120 vertices must be a generalized line graph. We can be more precise than this, at the cost of some effort.
Theorem 12.8.2 Let \( X \) be a graph with least eigenvalue at least -2 . If \( X \) has more than 36 vertices or maximum valency greater than 28, it is a generalized line graph.
Proof. If \( X \) is not a generalized line graph, then \( A\left( X\ri
|
Theorem 12.6.2 Let \( \mathcal{Q} \) be the incidence structure whose points are the vectors of \( {C}^{ * } \), and whose lines are triples of mutually orthogonal vectors. Then either \( \mathcal{Q} \) has no lines, or \( \mathcal{Q} \) is a generalized quadrangle, possibly degenerate, with lines of size three.
|
A generalized quadrangle has the property that given any line \( \ell \) and a point \( P \) off that line, there is a unique point on \( \ell \) collinear with \( P \). We show that \( \mathcal{Q} \) satisfies this axiom.
Suppose that \( x, y \), and \( a - b - x - y \) are the three points of a line of \( \mathcal{Q} \), and let \( z \) be an arbitrary vector in \( {C}^{ * } \), not equal to any of these three. Then
\[
\langle z, x\rangle + \langle z, y\rangle + \langle z, a - b - x - y\rangle = \langle z, a - b\rangle = 2.
\]
Since each of the three terms is either 0 or 1, it follows that there is a unique term equal to 0, and hence \( z \) is collinear with exactly one of the three points of the line.
Therefore, \( \mathcal{Q} \) is a generalized quadrangle with lines of size three.
|
Theorem 6.16. An operator \( T \) from a separable Hilbert space \( H \) into \( {L}_{2}\left( M\right) \) is a Carleman operator if and only if \( T{f}_{n}\left( x\right) \rightarrow 0 \) almost everywhere in \( M \) for every null-sequence \( \left( {f}_{n}\right) \) from \( D\left( T\right) \) .
Proof. It is evident from the definition that every Carleman operator has this property. It remains to prove the reverse direction. By Theorem 6.15 it is sufficient to show that the series \( {\sum }_{n}{\left| T{e}_{n}\left( x\right) \right| }^{2} \) is almost everywhere convergent for every ONS \( \left\{ {{e}_{1},{e}_{2},\ldots }\right\} \) from \( D\left( T\right) \) . Let \( \left\{ {{e}_{1},{e}_{2},\ldots }\right\} \) be an ONS in \( D\left( T\right) \) . Assume that there exists a measurable subset \( N \subset M \) such that \( \lambda \left( N\right) > 0 \) and \( {\sum }_{n}{\left| T{e}_{n}\left( x\right) \right| }^{2} = \infty \) for \( x \in N \) ( \( \lambda \) stands for Lebesgue measure). For all \( m, l \in \mathbb{N} \) let us define \( {N}_{m, l} \) by the equality
\[
{N}_{m, l} = \left\{ {x \in N : \mathop{\sum }\limits_{{n = 1}}^{l}{\left| T{e}_{n}\left( x\right) \right| }^{2} \geq {m}^{2}}\right\} .
\]
Then \( N = { \cup }_{l \in \mathbb{N}}{N}_{m, l} \) for every \( m \in \mathbb{N} \), and there exists an \( l\left( m\right) \in \mathbb{N} \) such that
\[
\lambda \left( {N}_{m, l\left( m\right) }\right) \geq \left( {1 - {3}^{-m}}\right) \lambda \left( N\right)
\]
Consequently, for \( {N}_{0} = \mathop{\bigcap }\limits_{{m \in \mathbb{N}}}{N}_{m, l\left( m\right) } \) we have
\[
\lambda \left( {N}_{0}\right) \geq \left( {1 - \mathop{\sum }\limits_{{m = 1}}^{\infty }{3}^{-m}}\right) \lambda \left( N\right) = \frac{1}{2}\lambda \left( N\right) > 0.
\]
For all \( m \in \mathbb{N} \) we have
\[
\mathop{\sum }\limits_{{n = 1}}^{{l\left( m\right) }}{\left| T{e}_{n}\left( x\right) \right| }^{2} \geq {m}^{2}\;\text{ for }\;x \in {N}_{0}
\]
By Exercise 6.9, for every \( m \in \mathbb{N} \) there exist finitely many elements \( {x}_{m, j} = \left( {{\xi }_{m, j,1},\ldots ,{\xi }_{m, j, l\left( m\right) }}\right) \in {\mathbb{C}}^{l\left( m\right) } \), \( j = 1,2,\ldots, p\left( m\right) \), with the following properties: \( {\left| {x}_{m, j}\right| }^{2} = \mathop{\sum }\limits_{{n = 1}}^{{l\left( m\right) }}{\left| {\xi }_{m, j, n}\right| }^{2} \leq 2{m}^{-2} \), and for every \( x = \left( {{\xi }_{1},\ldots ,{\xi }_{l\left( m\right) }}\right) \in {\mathbb{C}}^{l\left( m\right) } \) with \( {\left| x\right| }^{2} \geq {m}^{2} \) there exists a \( j \in \{ 1,\ldots, p\left( m\right) \} \) for which
\[
\left| {\mathop{\sum }\limits_{{n = 1}}^{{l\left( m\right) }}{\xi }_{m, j, n}{\xi }_{n}}\right| \geq 1.
\]
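The content of this finite net is easy to visualize in the real two-dimensional case (an illustration of the kind of finite family Exercise 6.9 provides, not the exercise itself): eight vectors of squared norm \( 2{m}^{-2} \) suffice in \( {\mathbb{R}}^{2} \).

```python
import numpy as np

# Eight vectors x_j with |x_j|^2 <= 2 m^{-2} such that every x in R^2
# with |x| >= m satisfies |<x_j, x>| >= 1 for some j: the nearest of the
# eight directions is within 22.5 degrees, and sqrt(2) cos(22.5) > 1.
m = 5
angles = np.arange(8) * np.pi / 4
net = (np.sqrt(2) / m) * np.stack([np.cos(angles), np.sin(angles)], axis=1)

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    theta = rng.uniform(0, 2 * np.pi)
    r = m * rng.uniform(1, 10)                     # so |x| >= m
    x = r * np.array([np.cos(theta), np.sin(theta)])
    ok = ok and np.max(np.abs(net @ x)) >= 1       # some x_j "sees" x
print(ok)
```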
Let us set
\[
{g}_{m, j} = \mathop{\sum }\limits_{{n = 1}}^{{l\left( m\right) }}{\xi }_{m, j, n}{e}_{n}
\]
Then for every \( m \in \mathbb{N} \) and for every \( x \in {N}_{0} \) there exists a \( j \in \{ 1,\ldots, p\left( m\right) \} \) such that
\[
\left| {T{g}_{m, j}\left( x\right) }\right| = \left| {\mathop{\sum }\limits_{{n = 1}}^{{l\left( m\right) }}{\xi }_{m, j, n}T{e}_{n}\left( x\right) }\right| \geq 1.
\]
Thus, for the sequence
\[
\left( {g}_{n}\right) = \left( {{g}_{1,1},{g}_{1,2},\ldots ,{g}_{1, p\left( 1\right) },{g}_{2,1},\ldots ,{g}_{2, p\left( 2\right) },{g}_{3,1},\ldots }\right)
\]
we have \( {g}_{n} \rightarrow 0 \), yet for every \( x \in {N}_{0} \) there exist arbitrarily large \( n \in \mathbb{N} \) such that \( \left| {T{g}_{n}\left( x\right) }\right| \geq 1 \) . This contradicts the assumption that \( T{f}_{n}\left( x\right) \rightarrow 0 \) almost everywhere for every null-sequence \( \left( {f}_{n}\right) \) .
Theorem 6.17. An operator \( T \) from \( {L}_{2}\left( {M}_{1}\right) \) into \( {L}_{2}\left( {M}_{2}\right) \) is a Carleman operator if and only if there exists a measurable function \( K : {M}_{2} \times {M}_{1} \rightarrow \mathbb{C} \) such that \( K\left( {x, \cdot }\right) \in {L}_{2}\left( {M}_{1}\right) \) almost everywhere in \( {M}_{2} \) and
\[
{Tf}\left( x\right) = {\int }_{{M}_{1}}K\left( {x, y}\right) f\left( y\right) \mathrm{d}y\;\text{ almost everywhere in }{M}_{2}, f \in D\left( T\right) .
\]
(6.8)
Such a kernel \( K \) is called a Carleman kernel.
Proof. If \( T \) is induced by a Carleman kernel \( K \) in the sense of (6.8), then the assumption of Theorem 6.14 (Korotkov) is fulfilled with \( \kappa \left( x\right) = \parallel K\left( {x, \cdot }\right) \parallel \) ; so \( T \) is a Carleman operator. If \( T \) is a Carleman operator, then we proceed as in the proof of Theorem 6.14. \( {GT} \) is then a Hilbert-Schmidt operator from \( {L}_{2}\left( {M}_{1}\right) \) into \( {L}_{2}\left( {M}_{2}\right) \) ; therefore, by Theorem 6.11, it is induced by a kernel \( {K}^{\prime } \in {L}_{2}\left( {{M}_{2} \times {M}_{1}}\right) \) . The kernel \( K\left( {x, y}\right) = g{\left( x\right) }^{-1}{K}^{\prime }\left( {x, y}\right) \) is then a Carleman kernel and it induces \( T \) .
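A concrete instance of (6.8) (an illustrative kernel, not from the text): the Gaussian kernel \( K\left( {x, y}\right) = {e}^{-{\left( x - y\right) }^{2}} \) on \( {M}_{1} = {M}_{2} = \mathbb{R} \) satisfies \( K\left( {x, \cdot }\right) \in {L}_{2}\left( \mathbb{R}\right) \) for every \( x \), so it is a Carleman kernel. Discretizing the integral:

```python
import numpy as np

# An illustrative Carleman kernel: K(x, y) = exp(-(x - y)^2), with
# K(x, .) in L2 for every x and ||K(x, .)||^2 = sqrt(pi/2).
y = np.linspace(-10.0, 10.0, 2001)
dy = y[1] - y[0]

def K(x):                              # the section K(x, .) on the grid
    return np.exp(-((x - y) ** 2))

kappa_sq = np.sum(K(0.0) ** 2) * dy    # ~ sqrt(pi/2), independent of x
f = np.exp(-y**2 / 2)                  # an element of L2(M1)
Tf_at_0 = np.sum(K(0.0) * f) * dy      # (Tf)(0) = int K(0, y) f(y) dy
print(kappa_sq, Tf_at_0)
```

Here `kappa_sq` is the function \( \kappa {\left( x\right) }^{2} = \parallel K\left( {x, \cdot }\right) {\parallel }^{2} \) appearing in Korotkov's criterion, evaluated at \( x = 0 \).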
Let \( k : M \rightarrow H \) be a measurable function, and let \( {M}_{n} = \left\{ {x \in M : \parallel k\left( x\right) \parallel < n\text{ and }\left| x\right| < n}\right\} \) . Let
\[
D\left( {T}_{k,0}\right) = \left\{ {g \in {L}_{2}\left( M\right) : g\left( x\right) = 0\text{ almost everywhere in }M \smallsetminus {M}_{n}\text{ for some }n \in \mathbb{N}}\right\} .
\]
For every \( g \in D\left( {T}_{k,0}\right) \) the equality
\[
\left\langle {{T}_{k,0}g, f}\right\rangle = {\int }_{{M}_{n}}{g}^{ * }\left( x\right) \langle k\left( x\right), f\rangle \mathrm{d}x\text{ for all }f \in H
\]
uniquely defines an element \( {T}_{k,0}g \), since by the inequalities
\[
\left| {{\int }_{{M}_{n}}{g}^{ * }\left( x\right) \langle k\left( x\right), f\rangle \mathrm{d}x}\right| \leq n\parallel f\parallel {\int }_{{M}_{n}}\left| {g\left( x\right) }\right| \mathrm{d}x \leq {n\lambda }{\left( {M}_{n}\right) }^{1/2}\parallel g\parallel \parallel f\parallel
\]
the function \( f \mapsto {\int }_{{M}_{n}}{g}^{ * }\left( x\right) \langle k\left( x\right), f\rangle \mathrm{d}x \) is a continuous linear functional on \( H \) . The mapping \( g \mapsto {T}_{k,0}g \) is obviously linear. \( {T}_{k,0} \) is therefore an operator from \( {L}_{2}\left( M\right) \) into \( H \), and \( D\left( {T}_{k,0}\right) \) is dense in \( {L}_{2}\left( M\right) \) . The operator \( {T}_{k,0} \) is called the semi-Carleman operator induced by \( k \) .
Theorem 6.18. We have \( {\left( {T}_{k,0}\right) }^{ * } = {T}_{k} \) . (In what follows we write \( {T}_{k,0}^{ * } \) for \( {\left( {T}_{k,0}\right) }^{ * } \) .)
Proof. By the definition of \( {T}_{k,0} \) we have for all \( f \in D\left( {T}_{k}\right) \) and \( g \in D\left( {T}_{k,0}\right) \)
\[
\left\langle {g,{T}_{k}f}\right\rangle = \langle g,\langle k\left( \cdot \right), f\rangle \rangle = {\int }_{M}g{\left( x\right) }^{ * }\langle k\left( x\right), f\rangle \mathrm{d}x = \left\langle {{T}_{k,0}g, f}\right\rangle ,
\]
i.e., the operators \( {T}_{k} \) and \( {T}_{k,0} \) are formal adjoints of each other; therefore \( {T}_{k} \subset {T}_{k,0}^{ * } \) . It remains to prove that \( D\left( {T}_{k,0}^{ * }\right) \subset D\left( {T}_{k}\right) \) . Let \( f \in D\left( {T}_{k,0}^{ * }\right) \) . Then
for every \( g \in {L}_{2}\left( {M}_{n}\right) \) and all \( n \in \mathbb{N} \) we have
\[
{\int }_{{M}_{n}}{g}^{ * }\left( x\right) \langle k\left( x\right), f\rangle \mathrm{d}x = \left\langle {{T}_{k,0}g, f}\right\rangle = \left\langle {g,{T}_{k,0}^{ * }f}\right\rangle = {\int }_{{M}_{n}}{g}^{ * }\left( x\right) {T}_{k,0}^{ * }f\left( x\right) \mathrm{d}x.
\]
Consequently,
\[
{\int }_{{M}_{n}}{g}^{ * }\left( x\right) \left\{ {\langle k\left( x\right), f\rangle - {T}_{k,0}^{ * }f\left( x\right) }\right\} \mathrm{d}x = 0\text{ for all }g \in {L}_{2}\left( {M}_{n}\right) .
\]
Because of the relation \( {\left. \left\{ \langle k\left( \cdot \right), f\rangle - {T}_{k,0}^{ * }f\left( \cdot \right) \right\} \right| }_{{M}_{n}} \in {L}_{2}\left( {M}_{n}\right) \) it follows from this that
\[
{T}_{k,0}^{ * }f\left( x\right) = \langle k\left( x\right), f\rangle \;\text{ almost everywhere in }{M}_{n}.
\]
As this holds for all \( n \), it follows that \( \langle k\left( \cdot \right), f\rangle = {T}_{k,0}^{ * }f \in {L}_{2}\left( M\right) \), i.e., \( f \in D\left( {T}_{k}\right) \) .
If \( K : {M}_{2} \times {M}_{1} \rightarrow \mathbb{C} \) is a Carleman kernel and \( k \) denotes the mapping \( k : {M}_{2} \rightarrow {L}_{2}\left( {M}_{1}\right), k\left( x\right) = K\left( {x, \cdot }\right) \), then we write \( {T}_{K} = {T}_{k} \) and \( {T}_{K,0} = {T}_{k,0} \) . It follows from the definition of \( {T}_{k,0} \) (by Fubini’s theorem) that for all \( f \in D\left( {T}_{K,0}\right) \) we have
\[
{T}_{K,0}f\left( x\right) = {\int }_{{M}_{2}}K{\left( y, x\right) }^{ * }f\left( y\right) \mathrm{d}y\;\text{ almost everywhere in }{M}_{1}.
\]
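In finite dimensions the construction is transparent: with counting measure on finite sets standing in for \( M_1 \) and \( M_2 \), the kernel becomes a matrix whose rows are the vectors \( k(x) \), and Theorem 6.18 reduces to the statement that the adjoint of a matrix is its conjugate transpose. A minimal numpy sketch (the random kernel and dimensions are illustrative, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2 = 4, 5                      # sizes of the finite sets M1, M2
Kmat = rng.normal(size=(m2, m1)) + 1j * rng.normal(size=(m2, m1))
# row x of Kmat plays the role of k(x) in H = C^{m1}; <u, v> = u^H v

T_k = np.conj(Kmat)                # (T_k f)(x) = <k(x), f>
T_k0 = Kmat.T                      # determined by <T_k0 g, f> = sum_x g(x)* <k(x), f>

f = rng.normal(size=m1) + 1j * rng.normal(size=m1)
g = rng.normal(size=m2) + 1j * rng.normal(size=m2)

# the defining pairing of the semi-Carleman operator holds:
lhs = np.vdot(T_k0 @ g, f)                 # <T_k0 g, f>
rhs = np.sum(np.conj(g) * (T_k @ f))       # "integral" over M2
assert np.allclose(lhs, rhs)

# discrete analogue of Theorem 6.18: (T_k0)^* = T_k
assert np.allclose(T_k0.conj().T, T_k)
```

No closure or domain issues arise here, since every operator is bounded in finite dimensions; the point is only to make the pairing that defines \( T_{k,0} \) concrete.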
Theorem 6.19. Let \( T \) be a densely defined Carleman operator from \( {L}_{2}\left( {M}_{1}\right) \) into \( {L}_{2}\left( {M}_{2}\right) \) that is induced by the Carleman kernel \( K \) . The adjoint \( {T}^{ * } \) is a Carleman operator if and only if \( {K}^{ + } \) is a Carleman kernel (where \( {K}^{ + }\left( {x, y}\right) = K{\left( y, x\right) }^{ * } \) for \( \left( {x, y}\right) \in {M}_{1} \times {M}_{2} \) ) and \( \bar{T} \supset {T}_{{K}^{ + },0} \) . In this case \( {T}^{ * } \) is induced by \( {K}^{ + } \) .
Proof. By assumption, \( T \subset {T}_{K} \) . As \( T \) is closable, \( D\left( {T}^{ * }\right) \) is dense. If \( {T}^{ * } \) is defined by the Carleman kernel \( H : {M}_{1} \times {M}_{2} \rightarrow \mathbb{C} \), then \( {T}^{ * } \subset {T}_{H} \) . Consequently, \( \bar{T} = {T}^{* * } \supset {T}_{H}^{ * } = \overline{{T}_{H,0}} \supset {T}_{H,0} \) . It remains to prove that \( H\left( {x, y}\right) = \) \( {K}^{ + }\left( {x, y}\right) \) almost everywhere in \( {M}_{1} \times {M}_{2} \) . Let
\[
{M}_{1, n} =
Theorem 6.16. An operator \( T \) from a separable Hilbert space \( H \) into \( {L}_{2}\left( M\right) \) is a Carleman operator if and only if \( T{f}_{n}\left( x\right) \rightarrow 0 \) almost everywhere in \( M \) for every null-sequence \( \left( {f}_{n}\right) \) from \( D\left( T\right) \).
Proof. It is evident from the definition that every Carleman operator has this property. It remains to prove the reverse direction. By Theorem 6.15 it is sufficient to show that the series \( {\sum }_{n}{\left| T{e}_{n}\left( x\right) \right| }^{2} \) is almost everywhere convergent for every ONS \( \left\{ {{e}_{1},{e}_{2},\ldots }\right\} \) from \( D\left( T\right) \). Let \( \left\{ {{e}_{1},{e}_{2},\ldots }\right\} \) be an ONS in \( D\left( T\right) \). Assume that there exists a measurable subset \( N \subset M \) such that \( \lambda \left( N\right) > 0 \) and \( {\sum }_{n}{\left| T{e}_{n}\left( x\right) \right| }^{2} = \infty \) for \( x \in N \) ( \( \lambda \) stands for Lebesgue measure). For all \( m, l \in \mathbb{N} \) let us define \( {N}_{m, l} \) by the equality
\[
{N}_{m, l} = \left\{ {x \in N : \mathop{\sum }\limits_{{n = 1}}^{l}{\left| T{e}_{n}\left( x\right) \right| }^{2} \geq {m}^{2}}\right\} .
\]
Then \( N = { \cup }_{l \in \mathbb{N}}{N}_{m, l} \) for every \( m \in \mathbb{N} \), and there exists an \( l\left( m\right) \in \mathbb{N} \) such that
\[
\lambda \left( {N}_{m, l\left( m\right) }\right) \geq \left( {1 - {3}^{-m}}\right) \lambda \left( N\right)
\]
Consequently, for \( {N}_{0} = \mathop{\bigcap }\limits_{{m \in \mathbb{N}}}{N}_{m, l\left( m\right) } \) we have
\[
\lambda \left( {N}_{0}\right) \geq \left( {1 - \mathop{\sum }\limits_{{m = 1}}^{\infty }{3}^{-m}}\right) \lambda \left( N\right) > 0.
\]
For all \( m \in \mathbb{N} \) we have
\[
\mathop{\sum }\limits_{{n = 1}}^{{l\left( m\right) }}{\left| T{e}_{n}\left( x\right)
Corollary 3.3.6. Let \( G \) be a finite group. Then \( G \) is reductive.
## 3.3.2 Casimir Operator
We now introduce the key ingredient for the algebraic proof that the classical groups are reductive. Let \( \mathfrak{g} \) be a semisimple Lie algebra. Fix a Cartan subalgebra \( \mathfrak{h} \) in \( \mathfrak{g} \), let \( \Phi \) be the root system of \( \mathfrak{g} \) with respect to \( \mathfrak{h} \), and fix a set \( {\Phi }^{ + } \) of positive roots in \( \Phi \) . Recall that the Killing form \( B \) on \( \mathfrak{g} \) is nondegenerate by Theorem 2.5.11. The restriction of \( B \) to \( {\mathfrak{h}}_{\mathbb{R}} \) is positive definite and gives inner products and norms, denoted by \( \left( {\cdot , \cdot }\right) \) and \( \parallel \cdot \parallel \), on \( {\mathfrak{h}}_{\mathbb{R}} \) and \( {\mathfrak{h}}_{\mathbb{R}}^{ * } \) .
Fix a basis \( \left\{ {X}_{i}\right\} \) for \( \mathfrak{g} \) and let \( \left\{ {Y}_{i}\right\} \) be the \( B \) -dual basis: \( B\left( {{X}_{i},{Y}_{j}}\right) = {\delta }_{ij} \) for all \( i, j \) . If \( \left( {\pi, V}\right) \) is a representation of \( \mathfrak{g} \) (not necessarily finite-dimensional), we define
\[
{C}_{\pi } = \mathop{\sum }\limits_{i}\pi \left( {X}_{i}\right) \pi \left( {Y}_{i}\right)
\]
(3.35)
This linear transformation on \( V \) is called the Casimir operator of the representation.
Lemma 3.3.7. The Casimir operator is independent of the choice of basis for \( \mathfrak{g} \) and commutes with \( \pi \left( \mathfrak{g}\right) \) .
Proof. We can choose a basis \( \left\{ {Z}_{i}\right\} \) for \( \mathfrak{g} \) such that \( B\left( {{Z}_{i},{Z}_{j}}\right) = {\delta }_{ij} \) . Write \( {X}_{i} = \) \( \mathop{\sum }\limits_{j}B\left( {{X}_{i},{Z}_{j}}\right) {Z}_{j} \) and \( {Y}_{i} = \mathop{\sum }\limits_{k}B\left( {{Y}_{i},{Z}_{k}}\right) {Z}_{k} \) and substitute in the formula for \( {C}_{\pi } \) to obtain
\[
{C}_{\pi } = \mathop{\sum }\limits_{{i, j, k}}B\left( {{X}_{i},{Z}_{j}}\right) B\left( {{Y}_{i},{Z}_{k}}\right) \pi \left( {Z}_{j}\right) \pi \left( {Z}_{k}\right) .
\]
For fixed \( j, k \), the sum over \( i \) on the right side is
\[
B\left( {\mathop{\sum }\limits_{i}B\left( {{X}_{i},{Z}_{j}}\right) {Y}_{i},{Z}_{k}}\right) = B\left( {{Z}_{j},{Z}_{k}}\right) = {\delta }_{jk}.
\]
Hence \( {C}_{\pi } = \mathop{\sum }\limits_{j}\pi {\left( {Z}_{j}\right) }^{2} \), which proves that \( {C}_{\pi } \) does not depend on the choice of basis.
Now let \( Z \in \mathfrak{g} \) . Using the expansion of \( \left\lbrack {Z,{Z}_{i}}\right\rbrack \) in terms of the \( B \) -orthonormal basis \( \left\{ {Z}_{j}\right\} \), we can write
\[
\left\lbrack {Z,{Z}_{i}}\right\rbrack = \mathop{\sum }\limits_{j}B\left( {\left\lbrack {Z,{Z}_{i}}\right\rbrack ,{Z}_{j}}\right) {Z}_{j} = \mathop{\sum }\limits_{j}B\left( {Z,\left\lbrack {{Z}_{i},{Z}_{j}}\right\rbrack }\right) {Z}_{j}.
\]
Here we have used the \( \mathfrak{g} \) invariance of \( B \) in the second equation. Since \( \left\lbrack {A,{BC}}\right\rbrack = \) \( \left\lbrack {A, B}\right\rbrack C + B\left\lbrack {A, C}\right\rbrack \) for any \( A, B, C \in \operatorname{End}\left( V\right) \), we can use these expansions to write
\[
\left\lbrack {\pi \left( Z\right) ,{C}_{\pi }}\right\rbrack = \mathop{\sum }\limits_{i}\pi \left( \left\lbrack {Z,{Z}_{i}}\right\rbrack \right) \pi \left( {Z}_{i}\right) + \mathop{\sum }\limits_{j}\pi \left( {Z}_{j}\right) \pi \left( \left\lbrack {Z,{Z}_{j}}\right\rbrack \right)
\]
\[
= \mathop{\sum }\limits_{{i, j}}\left\{ {B\left( {Z,\left\lbrack {{Z}_{i},{Z}_{j}}\right\rbrack }\right) + B\left( {Z,\left\lbrack {{Z}_{j},{Z}_{i}}\right\rbrack }\right) }\right\} \pi \left( {Z}_{j}\right) \pi \left( {Z}_{i}\right) .
\]
However, this last sum is zero by the skew symmetry of the Lie bracket.
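For \( \mathfrak{g} = \mathfrak{sl}_2(\mathbb{C}) \) in its defining representation, Lemma 3.3.7 can be checked numerically: compute the Killing form from the adjoint representation, build \( C_\pi \) from the Gram matrix of two different bases, and verify that the results agree and commute with \( \pi(\mathfrak{g}) \). A minimal sketch (the second basis is an arbitrary illustrative choice):

```python
import numpy as np

# sl2 in the defining 2x2 representation (pi is the identity map here)
h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
std = [h, e, f]

def bracket(X, Y):
    return X @ Y - Y @ X

def coords(M):
    # coordinates of a traceless 2x2 matrix M = a*h + b*e + c*f
    return np.array([M[0, 0], M[0, 1], M[1, 0]])

def ad(X):
    # matrix of ad X in the basis {h, e, f}
    return np.column_stack([coords(bracket(X, Z)) for Z in std])

def killing(X, Y):
    return np.trace(ad(X) @ ad(Y))

def casimir(basis):
    # B-dual basis via the Gram matrix: Y_i = sum_k (G^{-1})_{ik} X_k,
    # hence C = sum_{i,k} (G^{-1})_{ik} pi(X_i) pi(X_k)
    G = np.array([[killing(X, Y) for Y in basis] for X in basis])
    Ginv = np.linalg.inv(G)
    return sum(Ginv[i, k] * basis[i] @ basis[k]
               for i in range(3) for k in range(3))

C1 = casimir([h, e, f])
C2 = casimir([h + e, e - f, f + 2 * h])     # another (arbitrary) basis
assert np.allclose(C1, C2)                  # basis independence
for Z in std:
    assert np.allclose(bracket(Z, C1), 0)   # commutes with pi(g)
```

For this representation \( C_\pi = \tfrac{3}{8}I \), consistent with Lemma 3.3.8 for \( \lambda = \varpi_1 \).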
Lemma 3.3.8. Let \( \left( {\pi, V}\right) \) be a highest-weight representation of \( \mathfrak{g} \) with highest weight \( \lambda \) and let \( \rho = \left( {1/2}\right) \mathop{\sum }\limits_{{\alpha \in {\Phi }^{ + }}}\alpha \) . Then the Casimir operator acts on \( V \) as a scalar:
\[
{C}_{\pi }v = \left( {\left( {\lambda + \rho ,\lambda + \rho }\right) - \left( {\rho ,\rho }\right) }\right) v
\]
(3.36)
for all \( v \in V \) .
Proof. Let \( {H}_{1},\ldots ,{H}_{l} \) be an orthonormal basis of \( {\mathfrak{h}}_{\mathbb{R}} \) with respect to \( B \) . Enumerate \( {\Phi }^{ + } = \left\{ {{\alpha }_{1},\ldots ,{\alpha }_{d}}\right\} \) and for \( \alpha \in {\Phi }^{ + } \) fix \( {X}_{\pm \alpha } \in {\mathfrak{g}}_{\pm \alpha } \), normalized such that \( B\left( {{X}_{\alpha },{X}_{-\alpha }}\right) = 1 \) . Then
\[
\left\{ {{H}_{1},\ldots ,{H}_{l},{X}_{{\alpha }_{1}},{X}_{-{\alpha }_{1}},\ldots ,{X}_{{\alpha }_{d}},{X}_{-{\alpha }_{d}}}\right\}
\]
(3.37)
and
\[
\left\{ {{H}_{1},\ldots ,{H}_{l},{X}_{-{\alpha }_{1}},{X}_{{\alpha }_{1}},\ldots ,{X}_{-{\alpha }_{d}},{X}_{{\alpha }_{d}}}\right\}
\]
are dual bases for \( \mathfrak{g} \) . For the pair of bases (3.37) the Casimir operator is given by
\[
{C}_{\pi } = \mathop{\sum }\limits_{{i = 1}}^{l}\pi {\left( {H}_{i}\right) }^{2} + \mathop{\sum }\limits_{{\alpha \in {\Phi }^{ + }}}\left( {\pi \left( {X}_{\alpha }\right) \pi \left( {X}_{-\alpha }\right) + \pi \left( {X}_{-\alpha }\right) \pi \left( {X}_{\alpha }\right) }\right) .
\]
(3.38)
Let \( {H}_{\rho } \in \mathfrak{h} \) satisfy \( B\left( {{H}_{\rho }, H}\right) = \langle \rho, H\rangle \) for all \( H \in \mathfrak{h} \) . We can rewrite formula (3.38) using the commutation relation \( \pi \left( {X}_{\alpha }\right) \pi \left( {X}_{-\alpha }\right) = \pi \left( {H}_{\alpha }\right) + \pi \left( {X}_{-\alpha }\right) \pi \left( {X}_{\alpha }\right) \) to obtain
\[
{C}_{\pi } = \mathop{\sum }\limits_{{i = 1}}^{l}\pi {\left( {H}_{i}\right) }^{2} + {2\pi }\left( {H}_{\rho }\right) + 2\mathop{\sum }\limits_{{\alpha \in {\Phi }^{ + }}}\pi \left( {X}_{-\alpha }\right) \pi \left( {X}_{\alpha }\right) .
\]
(3.39)
Let \( {v}_{0} \in V\left( \lambda \right) \) be a nonzero highest-weight vector. By formula (3.39),
\[
{C}_{\pi }{v}_{0} = \left( {\mathop{\sum }\limits_{{i = 1}}^{l}\pi {\left( {H}_{i}\right) }^{2} + {2\pi }\left( {H}_{\rho }\right) }\right) {v}_{0} = \left( {\mathop{\sum }\limits_{{i = 1}}^{l}{\left\langle \lambda ,{H}_{i}\right\rangle }^{2} + 2\left\langle {\lambda ,{H}_{\rho }}\right\rangle }\right) {v}_{0}
\]
\[
= \left( {\left( {\lambda ,\lambda }\right) + 2\left( {\rho ,\lambda }\right) }\right) {v}_{0}.
\]
Here we have used the fact that \( \left\{ {H}_{i}\right\} \) is a \( B \) -orthonormal basis for \( {\mathfrak{h}}_{\mathbb{R}} \) to express \( \left( {\lambda ,\lambda }\right) = \mathop{\sum }\limits_{i}\langle \lambda ,{H}_{i}{\rangle }^{2} \) . Now write
\[
\left( {\lambda ,\lambda }\right) + 2\left( {\lambda ,\rho }\right) = \left( {\lambda + \rho ,\lambda + \rho }\right) - \left( {\rho ,\rho }\right)
\]
to see that (3.36) holds when \( v = {v}_{0} \) . Since \( V = U\left( \mathfrak{g}\right) {v}_{0} \) and \( {C}_{\pi } \) commutes with the action of \( U\left( \mathfrak{g}\right) \) by Lemma 3.3.7, it follows that (3.36) also holds for all \( v \in V \) .
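Formula (3.36) can be tested numerically for \( \mathfrak{g} = \mathfrak{sl}_2(\mathbb{C}) \), where \( \Phi^+ = \{\alpha\} \), \( \rho = \alpha/2 \), and the Killing form satisfies \( B(h,h)=8 \), \( B(e,f)=4 \); the eigenvalue for highest weight \( \lambda = n\varpi \) then works out to \( (n^2+2n)/8 \). A sketch using standard matrices of the \( (n+1) \)-dimensional irreducible representation (the chosen matrix realization is one convention among several):

```python
import numpy as np

def sl2_irrep(n):
    """Matrices of h, e, f on the (n+1)-dimensional irrep of highest weight n,
    acting on the weight basis v_0, ..., v_n."""
    d = n + 1
    h = np.diag([float(n - 2 * k) for k in range(d)])
    e = np.zeros((d, d))
    f = np.zeros((d, d))
    for k in range(1, d):
        e[k - 1, k] = k * (n - k + 1)   # e v_k = k(n-k+1) v_{k-1}
        f[k, k - 1] = 1.0               # f v_k = v_{k+1}
    return h, e, f

# Killing form of sl2: B(h,h)=8, B(e,f)=B(f,e)=4, all other pairings 0,
# so the B-dual basis of (h, e, f) is (h/8, f/4, e/4) and
# C = pi(h)^2/8 + pi(e)pi(f)/4 + pi(f)pi(e)/4.
for n in range(6):
    h, e, f = sl2_irrep(n)
    C = h @ h / 8 + e @ f / 4 + f @ e / 4
    expected = (n * n + 2 * n) / 8      # (lam+rho, lam+rho) - (rho, rho)
    assert np.allclose(C, expected * np.eye(n + 1))
```

The loop confirms that \( C_\pi \) is scalar on the whole module, not only on the highest-weight vector, exactly as the lemma asserts.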
The following result is our first application of the Casimir operator.
Proposition 3.3.9. Let \( V \) be a finite-dimensional highest-weight \( \mathfrak{g} \) -module with highest weight \( \lambda \) . Then \( \lambda \) is dominant integral and \( V \) is irreducible. Hence \( V \) is isomorphic to \( {L}^{\lambda } \) .
Proof. The assumption of finite-dimensionality implies that \( \lambda \in P\left( \mathfrak{g}\right) \) by Theorem 3.1.16. Since \( \lambda \) is the maximal weight of \( V \), Theorem 2.3.6 applied to the subalgebras \( \mathfrak{s}\left( \alpha \right) \) for \( \alpha \in {\Phi }^{ + } \) shows that \( \lambda \) is dominant (as in the proof of Corollary 3.2.3).
Let \( L \) be any nonzero irreducible submodule of \( V \) . By Corollary 3.2.3 we know that \( L \) is a highest-weight module. Let \( \mu \) be the highest weight of \( L \) . Since \( L \subset V \), the Casimir operator \( {C}_{L} \) on \( L \) is the restriction of the Casimir operator \( {C}_{V} \) on \( V \) . Hence Lemma 3.3.8 gives
\[
{C}_{V}w = \left( {\left( {\lambda + \rho ,\lambda + \rho }\right) - \left( {\rho ,\rho }\right) }\right) w = \left( {\left( {\mu + \rho ,\mu + \rho }\right) - \left( {\rho ,\rho }\right) }\right) w
\]
(3.40)
for all \( w \in L \) . Since \( L \neq 0 \), we conclude from equation (3.40) that
\[
\parallel \lambda + \rho {\parallel }^{2} = \parallel \mu + \rho {\parallel }^{2}
\]
(3.41)
Suppose, for the sake of contradiction, that \( \mu \neq \lambda \) . Since \( \mu \in X\left( V\right) \), Lemma 3.2.2 shows that \( \mu = \lambda - \beta \) for some \( 0 \neq \beta \in {Q}_{ + } \), and by Lemma 3.1.21 we have
\[
\left( {\rho ,\beta }\right) = \mathop{\sum }\limits_{{i = 1}}^{l}\left( {{\varpi }_{i},\beta }\right) > 0
\]
We also know that \( \left( {\mu ,\beta }\right) \geq 0 \) and \( \left( {\lambda ,\beta }\right) \geq 0 \), since \( \mu \) and \( \lambda \) are dominant. In particular, \( \left( {\mu + \rho ,\alpha }\right) > 0 \) for all \( \alpha \in {\Phi }^{ + } \), so \( \mu + \rho \neq 0 \) . From these inequalities we obtain
\[
\parallel \mu + \rho {\parallel }^{2} = \left( {\mu + \rho ,\lambda - \beta + \rho }\right)
\]
\[
= \left( {\mu ,\lambda }\right) - \left( {\mu ,\beta }\right) + \left( {\mu ,\rho }\right) + \left( {\rho ,\lambda }\right) - \left( {\rho ,\beta }\right) + \left( {\rho ,\rho }\right)
\]
\[
< \left( {\mu ,\lambda }\right) + \left( {\mu ,\rho }\right) + \left( {\rho ,\lambda }\right) + \left( {\rho ,\rho }\right) = \left( {\mu + \rho ,\lambda + \rho }\right)
\]
\[
\leq \parallel \mu + \rho \parallel \parallel \lambda + \rho \parallel
\]
where we have used the Cauchy-Schwarz inequality to obtain the last inequality. We have proved that \( \parallel \mu + \rho \parallel < \parallel \lambda +
Lemma 3.70. Let \( \Gamma \) be a gallery of type \( \mathbf{s} = \left( {{s}_{1},\ldots ,{s}_{d}}\right) \) . If \( \Gamma \) is not minimal, then there is a gallery \( {\Gamma }^{\prime } \) with the same extremities as \( \Gamma \) such that \( {\Gamma }^{\prime } \) has type \( {\mathbf{s}}^{\prime } = \left( {{s}_{1},\ldots ,{\widehat{s}}_{i},\ldots ,{\widehat{s}}_{j},\ldots ,{s}_{d}}\right) \) for some \( i < j \) .
Proof. Since \( \Gamma \) is not minimal, Lemma 3.69 implies that the number of walls separating \( {C}_{0} \) from \( {C}_{d} \) is less than \( d \) . Hence the walls crossed by \( \Gamma \) cannot all be distinct; for if a wall is crossed exactly once by \( \Gamma \), then it certainly separates \( {C}_{0} \) from \( {C}_{d} \) . We can therefore find a root \( \alpha \) and indices \( i, j \), with \( 1 \leq i < j \leq d \), such that \( {C}_{i - 1} \) and \( {C}_{j} \) are in \( \alpha \) but \( {C}_{k} \in - \alpha \) for \( i \leq k < j \) ; see Figure 3.6. Let \( \phi \) be the folding with image \( \alpha \) . If we modify \( \Gamma \) by applying \( \phi \) to the portion \( {C}_{i},\ldots ,{C}_{j - 1} \), we obtain a pregallery with the same extremities that has exactly two repetitions:
\[
{C}_{0},\ldots ,{C}_{i - 1},\phi \left( {C}_{i}\right) ,\ldots ,\phi \left( {C}_{j - 1}\right) ,{C}_{j},\ldots ,{C}_{d}.
\]
So we can delete \( {C}_{i - 1} \) and \( {C}_{j} \) to obtain a gallery \( {\Gamma }^{\prime } \) of length \( d - 2 \) . The type \( {\mathbf{s}}^{\prime } \) of \( {\Gamma }^{\prime } \) is \( \left( {{s}_{1},\ldots ,{\widehat{s}}_{i},\ldots ,{\widehat{s}}_{j},\ldots ,{s}_{d}}\right) \) because \( \phi \) is type-preserving.
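The group-theoretic deletion condition that this lemma encodes can be checked by brute force in a small example. The following sketch works in the symmetric group \( S_3 \), a Coxeter group with the adjacent transpositions as generators; all function names are illustrative:

```python
from itertools import combinations

def perm_mul(p, q):
    # composition (p o q)(i) = p(q(i)) on tuples
    return tuple(p[q[i]] for i in range(len(p)))

def word_to_perm(word, n):
    # generator k is the adjacent transposition swapping positions k, k+1
    result = tuple(range(n))
    for k in word:
        s = list(range(n))
        s[k], s[k + 1] = s[k + 1], s[k]
        result = perm_mul(result, tuple(s))
    return result

def delete_pair(word, n):
    """Find positions i < j whose deletion leaves a word for the same element."""
    target = word_to_perm(word, n)
    for i, j in combinations(range(len(word)), 2):
        shorter = [s for k, s in enumerate(word) if k not in (i, j)]
        if word_to_perm(shorter, n) == target:
            return i, j, shorter
    return None

# s1 s2 s1 s2 s1 in S_3 is not reduced (it equals s2 by the braid relation),
# so the deletion condition guarantees that some pair can be removed.
print(delete_pair([0, 1, 0, 1, 0], 3))
```

The search mirrors the geometric proof: a non-minimal gallery crosses some wall twice, and folding across that wall removes exactly two letters of the type.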
Lemma 3.71. The action of \( W \) is simply transitive on the chambers of \( \sum \) .

Fig. 3.6. A geometric proof of the deletion condition.
Proof. We have already noted that the action is transitive. To prove that the stabilizer of \( C \) is trivial, note that if \( {wC} = C \) then \( w \) fixes \( C \) pointwise, since \( w \) is type-preserving. But then \( w = 1 \) by the standard uniqueness argument.
It follows from Lemma 3.71 that we have a bijection \( W \rightarrow \mathcal{C}\left( \sum \right) \) given by \( w \mapsto {wC} \) . This yields the familiar one-to-one correspondence between galleries starting at \( C \) and words \( \mathbf{s} = \left( {{s}_{1},\ldots ,{s}_{d}}\right) \), where the gallery \( \left( {C}_{i}\right) \) corresponding to \( \mathbf{s} \) is given by \( {C}_{i} \mathrel{\text{:=}} {s}_{1}\cdots {s}_{i}C \) for \( i = 0,\ldots, d \) . In view of Lemma 3.68, the type of this gallery is the word \( \mathbf{s} \) that we started with. So a direct translation of Lemma 3.70 into the language of group theory yields the deletion condition for \( \left( {W, S}\right) \) . Consequently:
Lemma 3.72. \( \left( {W, S}\right) \) is a Coxeter system.
Remark 3.73. Another way to prove that \( \left( {W, S}\right) \) is a Coxeter system is to verify condition (A) of Chapter 2 by using the action of \( W \) on the set of roots of \( \sum \) . Indeed, Lemma 3.66 implies that every panel of \( \sum \) is \( W \) -equivalent to a face of \( C \) . Hence every reflection of \( \sum \) is \( W \) -conjugate to an element of \( S \) . This shows that the "reflections" in \( W \), in the sense of Definition 2.1, are precisely the reflections of \( \sum \) obtained from the theory of foldings. We can therefore identify the set \( T \) used in Chapter 2 with the set of reflections of \( \sum \), and we can identify \( T \times \{ \pm 1\} \) with the set of roots of \( \sum \) . The action of \( W \) on the roots therefore yields an action of \( W \) on \( T \times \{ \pm 1\} \) with the properties required for condition (A). Details are left to the interested reader.
For the next lemma, we need a simplicial analogue of the concept of "strict fundamental domain" (Definition 1.103).
Definition 3.74. If a group \( G \) acts on a simplicial complex \( \Delta \), then we call a set of simplices \( {\Delta }^{\prime } \subseteq \Delta \) a simplicial fundamental domain if \( {\Delta }^{\prime } \) is a subcomplex of \( \Delta \) and is a set of representatives for the \( G \) -orbits of simplices.
(This yields a strict fundamental domain \( \left| {\Delta }^{\prime }\right| \) for the action of \( G \) on the geometric realization \( \left| \Delta \right| \) .)
Lemma 3.75. The subcomplex \( \bar{C} \mathrel{\text{:=}} {\sum }_{ \leq C} \) is a simplicial fundamental domain for the action of \( W \) on \( \sum \) . Moreover, the stabilizer of the face of \( C \) of cotype \( J \) is the standard subgroup \( {W}_{J} \) of \( W \) .
Proof. The first assertion follows from the transitivity of \( W \) on the chambers, together with the fact that \( W \) is type-preserving. To prove the second, let \( A \) be a face of \( C \) and let \( \tau \left( A\right) = S \smallsetminus J \) . It follows from the definition of \( \tau \) that \( J \) is the set of elements of \( S \) that fix \( A \) pointwise. In particular, the subgroup \( {W}_{J} \) stabilizes \( A \) . To prove that \( {W}_{J} \) is the full stabilizer, suppose \( {wA} = A \) . We will show by induction on \( l\left( w\right) \) that \( w \in {W}_{J} \) . We may assume \( w \neq 1 \), so we can write \( w = s{w}^{\prime } \) with \( s \in S \) and \( l\left( {w}^{\prime }\right) < l\left( w\right) \) . Our correspondence between words and galleries now implies that there is a minimal gallery of the form \( C,{sC},\ldots ,{wC} \) . By Lemma 3.69, then, the wall \( H \) corresponding to \( s \) separates \( C \) from \( {wC} \) .
Let \( \alpha \) be the root bounded by \( H \) that contains \( C \) . Then \( {wC} \in - \alpha = {s\alpha } \) , so we have \( {w}^{\prime }C \in \alpha \) . The equation \( {wA} = A \) now yields
\[
{w}^{\prime }A = {sA} \in \alpha \cap {s\alpha } = H,
\]
hence \( A \in H \) and \( {w}^{\prime }A = A \) . We therefore have \( s \in J \) [because \( s \) fixes \( A \) pointwise] and \( {w}^{\prime } \in {W}_{J} \) by induction; thus \( w = s{w}^{\prime } \in {W}_{J} \) .
We have now done all the work required to complete the proof of the theorem.
Proof of Theorem 3.65 (end). Recall that we have assumed that every pair of adjacent chambers in \( \sum \) is separated by a wall, and we are trying to prove that \( \sum \) is a Coxeter complex. By Lemma 3.72, we have a Coxeter system \( \left( {W, S}\right) \) , and Lemma 3.75 easily yields an isomorphism \( \sum \cong \sum \left( {W, S}\right) \) . Thus \( \sum \) is a Coxeter complex.
Example 3.76. Let \( \sum \) be the plane tiled by equilateral triangles. It is geometrically evident that we can construct, for any adjacent chambers \( C,{C}^{\prime } \), a folding taking \( {C}^{\prime } \) to \( C \) . So \( \sum \) is indeed a Coxeter complex, as claimed in Example 3.7. To see that the Coxeter group \( W \) is the one given in that example, one can compute the orders of pairwise products of fundamental reflections, or one can observe that the link of every vertex is a hexagon.
The last assertion of Lemma 3.69 is the analogue of a fact that we used many times in Chapter 1, giving two different ways of computing the distance between two chambers. The final result of this section generalizes this to arbitrary simplices. Recall that one can talk about the gallery distance \( d\left( {A, B}\right) \) between arbitrary simplices (Section A.1.3).
Definition 3.77. We say that a wall \( H \) strictly separates two simplices if they are in opposite roots determined by \( H \) and neither is in \( H \) . We denote by \( \mathcal{S}\left( {A, B}\right) \) the set of walls that strictly separate two simplices \( A \) and \( B \) .
Proposition 3.78. For any two simplices \( A, B \) in a Coxeter complex \( \sum \), we have
\[
d\left( {A, B}\right) = \left| {\mathcal{S}\left( {A, B}\right) }\right|
\]
i.e., \( d\left( {A, B}\right) \) is equal to the number of walls \( H \) that strictly separate \( A \) from \( B \) . More precisely, the walls crossed by any minimal gallery from \( A \) to \( B \) are distinct and are precisely the walls in \( \mathcal{S}\left( {A, B}\right) \) .
Proof. A proof from the point of view of the Tits cone was sketched in Section 2.7. Here is a combinatorial proof: Let \( \Gamma : {C}_{0},\ldots ,{C}_{d} \) be a minimal gallery from \( A \) to \( B \) . Then it is also a minimal gallery from \( {C}_{0} \) to \( {C}_{d} \), so it crosses \( d \) distinct walls, and these are the walls separating \( {C}_{0} \) from \( {C}_{d} \) . It is immediate that \( \mathcal{S}\left( {A, B}\right) \subseteq \mathcal{S}\left( {{C}_{0},{C}_{d}}\right) \), so \( \Gamma \) crosses all the walls in \( \mathcal{S}\left( {A, B}\right) \) . We must show, conversely, that every wall \( H \) crossed by \( \Gamma \) is in \( \mathcal{S}\left( {A, B}\right) \) . Suppose not. Then there is a root \( \alpha \) bounded by \( H \) that contains both \( A \) and \( B \) . But then we can get a shorter gallery from \( A \) to \( B \) by applying the folding of \( \sum \) onto \( \alpha \) . This contradicts the minimality of \( \Gamma \) .
We close this section by making some remarks that will be useful later, concerning links. Given a simplex \( A \) in a Coxeter complex \( \sum \), recall that its link \( {\sum }^{\prime } \mathrel{\text{:=}} {\operatorname{lk}}_{\sum }A \) is again a Coxeter complex (Proposition 3.16). We wish to explicitly describe its walls and roots. Suppose \( H \) is a wall of \( \sum \) containing \( A \) , and let \( \pm \alpha \) be the corresponding roots. Then one checks immediately from the definitions that \( {H}^{\prime } \mathrel{\text{:=}} H \cap {\sum }^{\prime } \) is a wall of \( {\sum }^{\prime } \), with associated roots \( \pm {\alpha }^{\prime } \mathrel{\tex
Theorem 12.3 (Arzelà-Ascoli) A subset \( U \subset C\left\lbrack {a, b}\right\rbrack \) is relatively sequentially compact, i.e., each sequence from \( U \) contains a uniformly convergent subsequence, if and only if \( U \) is bounded and equicontinuous; that is, there exists a constant \( C \) such that
\[
\left| {\varphi \left( x\right) }\right| \leq C
\]
for all \( x \in \left\lbrack {a, b}\right\rbrack \) and all \( \varphi \in U \), and for every \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that
\[
\left| {\varphi \left( x\right) - \varphi \left( y\right) }\right| < \varepsilon
\]
for all \( x, y \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \) and all \( \varphi \in U \) .
Theorem 12.4 The integral operator (12.3) with continuous kernel is a compact operator on \( C\left\lbrack {a, b}\right\rbrack \) .
Proof. For all \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) and all \( x \in \left\lbrack {a, b}\right\rbrack \), we have that
\[
\left| {\left( {A\varphi }\right) \left( x\right) }\right| \leq \left( {b - a}\right) \mathop{\max }\limits_{{x, y \in \left\lbrack {a, b}\right\rbrack }}\left| {K\left( {x, y}\right) }\right|
\]
i.e., the set \( U \mathrel{\text{:=}} \{ {A\varphi } : \varphi \in C\left\lbrack {a, b}\right\rbrack ,\parallel \varphi {\parallel }_{\infty } \leq 1\} \subset C\left\lbrack {a, b}\right\rbrack \) is bounded. Since \( K \) is uniformly continuous on the square \( \left\lbrack {a, b}\right\rbrack \times \left\lbrack {a, b}\right\rbrack \), for every \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that
\[
\left| {K\left( {x, z}\right) - K\left( {y, z}\right) }\right| < \frac{\varepsilon }{b - a}
\]
for all \( x, y, z \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \) . Then
\[
\left| {\left( {A\varphi }\right) \left( x\right) - \left( {A\varphi }\right) \left( y\right) }\right| = \left| {{\int }_{a}^{b}\left\lbrack {K\left( {x, z}\right) - K\left( {y, z}\right) }\right\rbrack \varphi \left( z\right) {dz}}\right| < \varepsilon
\]
for all \( x, y \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \) and all \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) ; i.e., \( U \) is equicontinuous. Hence \( A \) is compact by the Arzelà-Ascoli Theorem 12.3.
In our analysis we will also need an explicit expression for the norm of the integral operator \( A \) .
Theorem 12.5 The norm of the integral operator \( A : C\left\lbrack {a, b}\right\rbrack \rightarrow C\left\lbrack {a, b}\right\rbrack \) with continuous kernel \( K \) is given by
\[
\parallel A{\parallel }_{\infty } = \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}.
\]
(12.4)
Proof. For each \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) we have
\[
\left| {\left( {A\varphi }\right) \left( x\right) }\right| \leq {\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy},\;x \in \left\lbrack {a, b}\right\rbrack ,
\]
and thus
\[
\parallel A{\parallel }_{\infty } = \mathop{\sup }\limits_{{\parallel \varphi {\parallel }_{\infty } \leq 1}}\parallel {A\varphi }{\parallel }_{\infty } \leq \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}.
\]
Since \( K \) is continuous, there exists \( {x}_{0} \in \left\lbrack {a, b}\right\rbrack \) such that
\[
{\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} = \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}.
\]
For \( \varepsilon > 0 \) choose \( \psi \in C\left\lbrack {a, b}\right\rbrack \) by setting
\[
\psi \left( y\right) \mathrel{\text{:=}} \frac{K\left( {{x}_{0}, y}\right) }{\left| {K\left( {{x}_{0}, y}\right) }\right| + \varepsilon },\;y \in \left\lbrack {a, b}\right\rbrack .
\]
Then \( \parallel \psi {\parallel }_{\infty } \leq 1 \) and
\[
\parallel {A\psi }{\parallel }_{\infty } \geq \left| {\left( {A\psi }\right) \left( {x}_{0}\right) }\right| = {\int }_{a}^{b}\frac{{\left\lbrack K\left( {x}_{0}, y\right) \right\rbrack }^{2}}{\left| {K\left( {{x}_{0}, y}\right) }\right| + \varepsilon }{dy} \geq {\int }_{a}^{b}\frac{{\left\lbrack K\left( {x}_{0}, y\right) \right\rbrack }^{2} - {\varepsilon }^{2}}{\left| {K\left( {{x}_{0}, y}\right) }\right| + \varepsilon }{dy}
\]
\[
= {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} - \varepsilon \left( {b - a}\right) .
\]
Hence
\[
\parallel A{\parallel }_{\infty } = \mathop{\sup }\limits_{{\parallel \varphi {\parallel }_{\infty } \leq 1}}\parallel {A\varphi }{\parallel }_{\infty } \geq \parallel {A\psi }{\parallel }_{\infty } \geq {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} - \varepsilon \left( {b - a}\right) ,
\]
and since this holds for all \( \varepsilon > 0 \), we have
\[
\parallel A{\parallel }_{\infty } \geq {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} = \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}.
\]
This concludes the proof.
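A discretized sanity check of (12.4) and of the near-extremal function \( \psi \) from the proof: replace the integral by a Riemann sum on a grid and compare \( \parallel A\psi \parallel_\infty \) with the row-sum formula. The kernel below is an arbitrary illustrative choice with sign changes, so that the construction of \( \psi \) genuinely matters:

```python
import numpy as np

# Discretized check of (12.4): for (A phi)(x) = int_a^b K(x,y) phi(y) dy,
# the sup-norm operator norm is max_x int_a^b |K(x,y)| dy.
a, b, m = 0.0, 1.0, 400
y = np.linspace(a, b, m)
h = (b - a) / m
x = y
K = np.sin(5 * np.outer(x, y)) - 0.3     # illustrative real kernel

norm_formula = np.max(np.sum(np.abs(K), axis=1) * h)

# the proof's near-extremal function psi(y) = K(x0,y)/(|K(x0,y)| + eps)
x0 = np.argmax(np.sum(np.abs(K), axis=1))
eps = 1e-6
psi = K[x0] / (np.abs(K[x0]) + eps)      # sup-norm at most 1
A_psi = K @ psi * h

# ||A psi|| is within eps*(b-a) of the claimed norm, as in the proof
assert np.abs(A_psi).max() <= norm_formula + 1e-9
assert np.abs(A_psi).max() >= norm_formula - eps * (b - a) - 1e-9
print(norm_formula, np.abs(A_psi).max())
```

The two assertions are exactly the upper and lower estimates of the proof, up to floating-point slack.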
It can also be shown that the integral operator remains compact if the kernel \( K \) is merely weakly singular (see [39]). A kernel \( K \) is said to be weakly singular if it is defined and continuous for all \( x, y \in \left\lbrack {a, b}\right\rbrack, x \neq y \) , and there exist positive constants \( M \) and \( \alpha \in (0,1\rbrack \) such that
\[
\left| {K\left( {x, y}\right) }\right| \leq M{\left| x - y\right| }^{\alpha - 1}
\]
for all \( x, y \in \left\lbrack {a, b}\right\rbrack, x \neq y \) .
## 12.2 Operator Approximations
The fundamental concept for approximately solving an operator equation
\[
\varphi - {A\varphi } = f
\]
of the second kind is to replace it by an equation
\[
{\varphi }_{n} - {A}_{n}{\varphi }_{n} = {f}_{n}
\]
with approximating sequences \( {A}_{n} \rightarrow A \) and \( {f}_{n} \rightarrow f \) as \( n \rightarrow \infty \) . For computational purposes, the approximating equations will be chosen such that they can be reduced to solving a system of linear equations. In this section we will provide a convergence and error analysis for such approximation schemes. In particular, we will derive convergence results and error estimates for the cases where we have either norm or pointwise convergence of the sequence \( {A}_{n} \rightarrow A, n \rightarrow \infty \) .
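The reduction to a linear system can be sketched numerically. The rectangle-rule discretization, the kernel, and the right-hand side below are invented for illustration and are not a scheme prescribed by the text:

```python
import numpy as np

# Hypothetical sketch of the reduction to a linear system: discretize
# phi - A phi = f by a rectangle rule, so the approximating equation
# phi_n - A_n phi_n = f becomes (I - w K) phi_n = f at the grid points.
a, b, n = 0.0, 1.0, 200
x = np.linspace(a, b, n)
w = (b - a) / n
K = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]))  # assumed kernel
f = np.cos(x)                                        # assumed data

phi_n = np.linalg.solve(np.eye(n) - w * K, f)

# The discrete equation is satisfied up to round-off.
residual = phi_n - w * (K @ phi_n) - f
assert np.max(np.abs(residual)) < 1e-10
```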
Theorem 12.6 Let \( A : X \rightarrow X \) be a compact linear operator on a Banach space \( X \) such that \( I - A \) is injective. Assume that the sequence \( {A}_{n} : X \rightarrow X \) of bounded linear operators is norm convergent, i.e., \( \begin{Vmatrix}{{A}_{n} - A}\end{Vmatrix} \rightarrow 0, n \rightarrow \infty \) . Then for sufficiently large \( n \) the inverse operators \( {\left( I - {A}_{n}\right) }^{-1} : X \rightarrow X \) exist and are uniformly bounded. For the solutions of the equations
\[
\varphi - {A\varphi } = f\;\text{ and }\;{\varphi }_{n} - {A}_{n}{\varphi }_{n} = {f}_{n}
\]
we have an error estimate
\[
\begin{Vmatrix}{{\varphi }_{n} - \varphi }\end{Vmatrix} \leq C\left\{ {\begin{Vmatrix}{\left( {{A}_{n} - A}\right) \varphi }\end{Vmatrix} + \begin{Vmatrix}{{f}_{n} - f}\end{Vmatrix}}\right\}
\]
(12.5)
for some constant \( C \) .
Proof. By the Riesz Theorem 12.2, the inverse \( {\left( I - A\right) }^{-1} : X \rightarrow X \) exists and is bounded. Since \( \begin{Vmatrix}{{A}_{n} - A}\end{Vmatrix} \rightarrow 0, n \rightarrow \infty \), by Remark 3.25 we have \( \begin{Vmatrix}{{\left( I - A\right) }^{-1}\left( {{A}_{n} - A}\right) }\end{Vmatrix} \leq q < 1 \) for sufficiently large \( n \) . For these \( n \), by the Neumann series Theorem 3.48, the inverse operators of
\[
I - {\left( I - A\right) }^{-1}\left( {{A}_{n} - A}\right) = {\left( I - A\right) }^{-1}\left( {I - {A}_{n}}\right)
\]
exist and are uniformly bounded by
\[
\begin{Vmatrix}{\left\lbrack I - {\left( I - A\right) }^{-1}\left( {A}_{n} - A\right) \right\rbrack }^{-1}\end{Vmatrix} \leq \frac{1}{1 - q}.
\]
But then \( {\left\lbrack I - {\left( I - A\right) }^{-1}\left( {A}_{n} - A\right) \right\rbrack }^{-1}{\left( I - A\right) }^{-1} \) are the inverse operators of \( I - {A}_{n} \) and they are uniformly bounded.
The error estimate follows from
\[
\left( {I - {A}_{n}}\right) \left( {{\varphi }_{n} - \varphi }\right) = \left( {{A}_{n} - A}\right) \varphi + {f}_{n} - f
\]
by the uniform boundedness of the inverse operators \( {\left( I - {A}_{n}\right) }^{-1} \) .
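The Neumann series step in this proof can be observed directly in finite dimensions; the matrices below are invented stand-ins for \( A \) and \( {A}_{n} \):

```python
import numpy as np

# Finite-dimensional illustration (matrices invented) of the Neumann
# series step: if q = ||(I - A)^{-1}(A_n - A)|| < 1, then
# ||[I - (I - A)^{-1}(A_n - A)]^{-1}|| <= 1 / (1 - q).
rng = np.random.default_rng(0)
n = 30
A = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
A_n = A + 1e-3 * rng.standard_normal((n, n)) / np.sqrt(n)

inv = np.linalg.inv(np.eye(n) - A)
B = inv @ (A_n - A)
q = np.linalg.norm(B, 2)               # spectral norm
assert q < 1

bound = 1 / (1 - q)
norm_inv = np.linalg.norm(np.linalg.inv(np.eye(n) - B), 2)
assert norm_inv <= bound + 1e-12
```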
In order to develop a similar analysis for the case where the sequence \( \left( {A}_{n}\right) \) is merely pointwise convergent, i.e., \( {A}_{n}\varphi \rightarrow {A\varphi }, n \rightarrow \infty \), for all \( \varphi \in X \), we will have to bridge the gap between norm and pointwise convergence. This goal will be achieved through the concept of collectively compact operator sequences and the following uniform boundedness principle.
Theorem 12.7 Let the sequence \( {A}_{n} : X \rightarrow Y \) of bounded linear operators mapping a Banach space \( X \) into a normed space \( Y \) be pointwise bounded;
i.e., for each \( \varphi \in X \) there exists a positive number \( {C}_{\varphi } \) depending on \( \varphi \) such that \( \begin{Vmatrix}{{A}_{n}\varphi }\end{Vmatrix} \leq {C}_{\varphi } \) for all \( n \in \mathbb{N} \) . Then the sequence \( \left( {A}_{n}\right) \) is uniformly bounded; i.e., there exists some constant \( C \) such that \( \begin{Vmatrix}{A}_{n}\end{Vmatrix} \leq C \) for all \( n \in \mathbb{N} \) .
Proof. In the first step, by an indirect proof we establish that positive constants \( M \) and \( \rho \) and an element \( \psi \in X \) can be chosen such that
\[
\begin{Vmatrix}{{A}_{n}\varphi }\end{Vmatrix} \leq M
\]
(12.6)
for all \( \varphi \in X \) with \( \parallel \varphi - \psi \parallel \leq \rho \) and all \( n \in \mathbb{N} \) .
Theorem 12.4 The integral operator (12.3) with continuous kernel is a compact operator on \( C\left\lbrack {a, b}\right\rbrack \) .
Proof. For all \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) and all \( x \in \left\lbrack {a, b}\right\rbrack \), we have that
\[
\left| {\left( {A\varphi }\right) \left( x\right) }\right| \leq \left( {b - a}\right) \mathop{\max }\limits_{{x, y \in \left\lbrack {a, b}\right\rbrack }}\left| {K\left( {x, y}\right) }\right|
\]
i.e., the set \( U \mathrel{\text{:=}} \{ {A\varphi } : \varphi \in C\left\lbrack {a, b}\right\rbrack ,\parallel \varphi {\parallel }_{\infty } \leq 1\} \subset C\left\lbrack {a, b}\right\rbrack \) is bounded. Since \( K \) is uniformly continuous on the square \( \left\lbrack {a, b}\right\rbrack \times \left\lbrack {a, b}\right\rbrack \), for every \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that
\[
\left| {K\left( {x, z}\right) - K\left( {y, z}\right) }\right| < \frac{\varepsilon }{b - a}
\]
for all \( x, y, z \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \) . Then
\[
\left| {\left( {A\varphi }\right) \left( x\right) - \left( {A\varphi }\right) \left( y\right) }\right| = \left| {{\int }_{a}^{b}\left\lbrack {K\left( {x, z}\right) - K\left( {y, z}\right) }\right\rbrack \varphi \left( z\right) {dz}}\right| < \varepsilon
\]
for all \( x, y \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \) and all \( \varphi \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) ; i.e., the set \( U \) is equicontinuous. By the Arzelà–Ascoli theorem, \( U \) is relatively compact in \( C\left\lbrack {a, b}\right\rbrack \), and hence \( A \) is compact.
Theorem 13.5. We have
\[
{\pi }_{1}\left( {\mathrm{{GL}}\left( {n,\mathbb{C}}\right) }\right) \cong {\pi }_{1}\left( {\mathrm{U}\left( n\right) }\right) ,\;{\pi }_{1}\left( {\mathrm{{SL}}\left( {n,\mathbb{C}}\right) }\right) \cong {\pi }_{1}\left( {\mathrm{{SU}}\left( n\right) }\right) ,
\]
and
\[
{\pi }_{1}\left( {\mathrm{{SL}}\left( {n,\mathbb{R}}\right) }\right) \cong {\pi }_{1}\left( {\mathrm{{SO}}\left( n\right) }\right)
\]
We have omitted \( \mathrm{{GL}}\left( {n,\mathbb{R}}\right) \) from this list because it is not connected. There is a general principle here: the fundamental group of a connected Lie group is often the same as the fundamental group of a maximal compact subgroup.
Proof. First, let \( G = \mathrm{{GL}}\left( {n,\mathbb{C}}\right), K = \mathrm{U}\left( n\right) \), and \( P \) be the space of positive definite Hermitian matrices. By the Cartan decomposition, multiplication \( K \times \) \( P \rightarrow G \) is a bijection, and in fact, a homeomorphism, so it will follow that \( {\pi }_{1}\left( K\right) \cong {\pi }_{1}\left( G\right) \) if we can show that \( P \) is contractible. However, the exponential map from the space \( \mathfrak{p} \) of Hermitian matrices to \( P \) is bijective (in fact, a homeomorphism) by Proposition 13.7, and the space \( \mathfrak{p} \) is a real vector space and hence contractible.
For \( G = \mathrm{{SL}}\left( {n,\mathbb{C}}\right) \), one argues similarly, with \( K = \mathrm{{SU}}\left( n\right) \) and \( P \) the space of positive definite Hermitian matrices of determinant one. The exponential map from the space \( \mathfrak{p} \) of Hermitian matrices of trace zero is again a homeomorphism of a real vector space onto \( P \) .
Finally, for \( G = \mathrm{{SL}}\left( {n,\mathbb{R}}\right) \), one takes \( K = \mathrm{{SO}}\left( n\right), P \) to be the space of positive definite real matrices of determinant one, and \( \mathfrak{p} \) to be the space of real symmetric matrices of trace zero.
The remainder of this chapter will be less self-contained, but can be skipped with no loss of continuity. We will calculate the fundamental groups of \( \mathrm{{SO}}\left( n\right) \) and \( \mathrm{{SU}}\left( n\right) \), making use of some facts from algebraic topology that we do not prove. (These fundamental groups can alternatively be computed using the method of Chap. 23. See Exercise 23.4.)
If \( G \) is a Hausdorff topological group and \( H \) is a closed subgroup, then the coset space \( G/H \) is a Hausdorff space with the quotient topology. Such a quotient is called a homogeneous space.
Proposition 13.8. Let \( G \) be a Lie group and \( H \) a closed subgroup. If the homogeneous space \( G/H \) is homeomorphic to a sphere \( {S}^{r} \) where \( r \geq 3 \), then \( {\pi }_{1}\left( G\right) \cong {\pi }_{1}\left( H\right) \) .
Proof. The map \( G \rightarrow G/H \) is a fibration (Spanier [149], Example 4 on p. 91 and Corollary 14 on p. 96). It follows that there is an exact sequence
\[
{\pi }_{2}\left( {G/H}\right) \rightarrow {\pi }_{1}\left( H\right) \rightarrow {\pi }_{1}\left( G\right) \rightarrow {\pi }_{1}\left( {G/H}\right)
\]
(Spanier [149], Theorem 10 on p. 377). Since \( G/H \) is a sphere of dimension \( \geq 3 \), its first and second homotopy groups are trivial and the result follows.
Theorem 13.6. The groups \( \mathrm{{SU}}\left( n\right) \) are simply connected for all \( n \) . On the other hand,
\[
{\pi }_{1}\left( {\mathrm{{SO}}\left( n\right) }\right) \cong \left\{ \begin{matrix} \mathbb{Z} & \text{ if }n = 2, \\ \mathbb{Z}/2\mathbb{Z} & \text{ if }n > 2. \end{matrix}\right.
\]
Proof. Since \( \mathrm{{SO}}\left( 2\right) \) is a circle, its fundamental group is \( \mathbb{Z} \) . By Proposition 13.6, \( {\pi }_{1}\left( {\mathrm{{SO}}\left( 3\right) }\right) \cong \mathbb{Z}/2\mathbb{Z} \) and \( {\pi }_{1}\left( {\mathrm{{SU}}\left( 2\right) }\right) \) is trivial. The group \( \mathrm{{SO}}\left( n\right) \) acts transitively on the unit sphere \( {S}^{n - 1} \) in \( {\mathbb{R}}^{n} \), and the isotropy subgroup is \( \mathrm{{SO}}\left( {n - 1}\right) \), so \( \mathrm{{SO}}\left( n\right) /\mathrm{{SO}}\left( {n - 1}\right) \) is homeomorphic to \( {S}^{n - 1} \) . By Proposition 13.8, we see that \( {\pi }_{1}\left( {\mathrm{{SO}}\left( n\right) }\right) \cong {\pi }_{1}\left( {\mathrm{{SO}}\left( {n - 1}\right) }\right) \) if \( n \geq 4 \) . Similarly, \( \mathrm{{SU}}\left( n\right) \) acts on the unit sphere \( {S}^{{2n} - 1} \) in \( {\mathbb{C}}^{n} \), and so \( \mathrm{{SU}}\left( n\right) /\mathrm{{SU}}\left( {n - 1}\right) \cong {S}^{{2n} - 1} \), whence \( {\pi }_{1}\left( {\mathrm{{SU}}\left( n\right) }\right) \cong {\pi }_{1}\left( {\mathrm{{SU}}\left( {n - 1}\right) }\right) \) for \( n \geq 2 \) .
If \( n \geq 3 \), the universal covering group of \( \mathrm{{SO}}\left( n\right) \) is called the spin group and is denoted \( \operatorname{Spin}\left( n\right) \) . We will take a closer look at it in Chap. 31.
## Exercises
Exercise 13.1. Let \( \widetilde{\mathrm{{SL}}}\left( {2,\mathbb{R}}\right) \) be the universal covering group of \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) . Let \( \pi \) : \( \widetilde{\mathrm{{SL}}}\left( {2,\mathbb{R}}\right) \rightarrow \mathrm{{GL}}\left( V\right) \) be any finite-dimensional irreducible representation. Show that \( \pi \) factors through \( \mathrm{{SL}}\left( {2,\mathbb{R}}\right) \) and is hence not a faithful representation. (Hint: Use Exercise 12.2.)
## The Local Frobenius Theorem
Let \( M \) be an \( n \) -dimensional smooth manifold. The tangent bundle \( {TM} \) of \( M \) is the disjoint union of all tangent spaces of points of \( M \) . It can be given the structure of a manifold of dimension \( 2\dim \left( M\right) \) as follows. If \( U \) is a coordinate neighborhood and \( {x}_{1},\ldots ,{x}_{n} \) are local coordinates on \( U \), then \( T\left( U\right) = \left\{ {{T}_{x}M \mid x \in U}\right\} \) can be taken to be a coordinate neighborhood of \( {TM} \) . Every element of \( {T}_{x}M \) with \( x \in U \) can be written uniquely as
\[
\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}\frac{\partial }{\partial {x}_{i}}
\]
and mapping this tangent vector to \( \left( {{x}_{1},\ldots ,{x}_{n},{a}_{1},\ldots ,{a}_{n}}\right) \in {\mathbb{R}}^{2n} \) gives a chart on \( T\left( U\right) \), making \( {TM} \) into a manifold.
By a \( d \) -dimensional family \( D \) in the tangent bundle of \( M \) we mean a rule that associates with each \( x \in M \) a \( d \) -dimensional subspace \( {D}_{x} \subset {T}_{x}\left( M\right) \) . We ask that the family be smooth. By this we mean that in a neighborhood \( U \) of any given point \( x \) there are smooth vector fields \( {X}_{1},\ldots ,{X}_{d} \) such that for \( u \in U \) the vectors \( {X}_{i, u} \in {T}_{u}\left( M\right) \) span \( {D}_{u} \) .
We say that a vector field \( X \) is subordinate to the family \( D \) if \( {X}_{x} \in {D}_{x} \) for all \( x \in U \) . The family is called involutory if whenever \( X \) and \( Y \) are vector fields subordinate to \( D \) then so is \( \left\lbrack {X, Y}\right\rbrack \) . This definition is motivated by the following considerations.
An integral manifold of the family \( D \) is a \( d \) -dimensional submanifold \( N \) such that, for each point \( x \in N \), the tangent space \( {T}_{x}\left( N\right) \), identified with its image in \( {T}_{x}\left( M\right) \), is \( {D}_{x} \) . We may ask whether it is possible, at least locally, to pass an integral manifold through every point. This is surely a natural question.
Let us observe that if it is true, then the family \( D \) is involutory. To see this (at least plausibly), let \( U \) be an open set in \( M \) that is small enough that through each point in \( U \) there is an integral submanifold that is closed in \( U \) . Let \( J \) be the subspace of \( {C}^{\infty }\left( U\right) \) consisting of functions that are constant on these integral submanifolds. Then the restriction of a vector field \( X \) to \( U \) is subordinate to \( D \) if and only if it annihilates \( J \) . It is clear from (6.6) that if \( X \) and \( Y \) have this property, then so does \( \left\lbrack {X, Y}\right\rbrack \) .
The Frobenius theorem is a converse to this observation. A global version may be found in Chevalley [35]. We will content ourselves with the local theorem.
Lemma 14.1. If \( {X}_{1},\ldots ,{X}_{d} \) are vector fields on \( M \) such that \( \left\lbrack {{X}_{i},{X}_{j}}\right\rbrack \) lies in the \( {C}^{\infty }\left( M\right) \) span of \( {X}_{1},\ldots ,{X}_{d} \), and if for each \( x \in M \) we define \( {D}_{x} \) to be the span of \( {X}_{1x},\ldots ,{X}_{dx} \), then \( D \) is an involutory family.
Proof. Any vector field subordinate to \( D \) has the form (locally near \( x \) ) \( \mathop{\sum }\limits_{i}{f}_{i}{X}_{i} \), where \( {f}_{i} \) are smooth functions. To check that the commutator of two such vector fields is also of the same form amounts to using the formula
\[
\left\lbrack {{fX},{gY}}\right\rbrack = {fg}\left\lbrack {X, Y}\right\rbrack + {fX}\left( g\right) Y - {gY}\left( f\right) X
\]
which follows easily on applying both sides to a function \( h \) and using the fact that \( X \) and \( Y \) are derivations of \( {C}^{\infty }\left( M\right) \) .
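The identity in this proof can be checked symbolically for concrete vector fields on \( {\mathbb{R}}^{2} \); the fields and coefficient functions below are arbitrary choices for illustration:

```python
import sympy as sp

# Symbolic check (illustrative, not from the text) of the identity
#   [fX, gY] = f g [X, Y] + f X(g) Y - g Y(f) X
# applied to a test function h, for concrete vector fields on R^2.
x, y = sp.symbols('x y')
h = sp.Function('h')(x, y)
f = x**2 + 1                 # assumed smooth coefficient
g = sp.sin(y) + 2            # assumed smooth coefficient

# Vector fields act on functions as derivations.
X = lambda u: sp.diff(u, x) + y * sp.diff(u, y)
Y = lambda u: x * sp.diff(u, y)
bracket = lambda P, Q: (lambda u: P(Q(u)) - Q(P(u)))

fX = lambda u: f * X(u)
gY = lambda u: g * Y(u)

lhs = bracket(fX, gY)(h)
rhs = f * g * bracket(X, Y)(h) + f * X(g) * Y(h) - g * Y(f) * X(h)
assert sp.simplify(lhs - rhs) == 0
```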
Theorem 14.1 (Frobenius). Let \( D \) be a smooth involutory \( d \) -dimensional family in the tangent bundle of \( M \) . Then for each point \( x \in M \) there exists a neighborhood \( U \) of \( x \) and an integral manifold \( N \) of \( D \) through \( x \) in \( U \) . If \( {N}^{\prime } \) is another integral manifold through \( x \), then \( N \) and \( {N}^{\prime } \) coincide near \( x \) . That is, there exists a neighborhood \( V \) of \( x \) such that \( V \cap N = V \cap {N}^{\prime } \) .
Proof. Since this is a strictly local statement, it is sufficient to prove this when \( M \) is an open set in \( {\mathbb{R}}^{n} \) .
Example 2.1.2 Let \( X = {\mathbb{R}}^{\mathbb{N}}, x = \left( {{x}_{0},{x}_{1},\ldots }\right) \) and \( y = \left( {{y}_{0},{y}_{1},\ldots }\right) \) . Define
\[
d\left( {x, y}\right) = \mathop{\sum }\limits_{n}\frac{1}{{2}^{n + 1}}\min \left\{ {\left| {{x}_{n} - {y}_{n}}\right| ,1}\right\} .
\]
Then \( d \) is a metric on \( {\mathbb{R}}^{\mathbb{N}} \) .
Example 2.1.3 If \( X \) is any set and
\[
d\left( {x, y}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }x = y \\ 1 & \text{ otherwise } \end{array}\right.
\]
then \( d \) defines a metric on \( X \), called the discrete metric.
Example 2.1.4 Let \( \left( {{X}_{0},{d}_{0}}\right) ,\left( {{X}_{1},{d}_{1}}\right) ,\left( {{X}_{2},{d}_{2}}\right) ,\ldots \) be metric spaces and \( X = \mathop{\prod }\limits_{n}{X}_{n} \) . Fix \( x = \left( {{x}_{0},{x}_{1},\ldots }\right) \) and \( y = \left( {{y}_{0},{y}_{1},\ldots }\right) \) in \( X \) . Define
\[
d\left( {x, y}\right) = \mathop{\sum }\limits_{n}\frac{1}{{2}^{n + 1}}\min \left\{ {{d}_{n}\left( {{x}_{n},{y}_{n}}\right) ,1}\right\} .
\]
Then \( d \) is a metric on \( X \), which we shall call the product metric.
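A finite truncation of the product metric is easy to implement; the helper name and sample points below are invented, and truncating the series after \( N \) coordinates changes \( d \) by at most \( \mathop{\sum }\limits_{{n \geq N}}{2}^{-\left( {n + 1}\right) } = {2}^{-N} \):

```python
# Sketch (helper name and sample points invented) of the product metric
# of Example 2.1.4 on finitely many coordinates.
def product_metric(xs, ys, ds):
    return sum(min(d(a, b), 1) / 2 ** (n + 1)
               for n, (a, b, d) in enumerate(zip(xs, ys, ds)))

usual = lambda a, b: abs(a - b)              # d_n on each factor R
x = [0.0, 3.0, 10.0]
y = [1.0, 3.5, -4.0]
z = [0.5, 2.0, 1.0]
ds = [usual] * 3

dxy = product_metric(x, y, ds)
dyz = product_metric(y, z, ds)
dxz = product_metric(x, z, ds)

assert product_metric(x, x, ds) == 0         # d(x, x) = 0
assert dxy == product_metric(y, x, ds)       # symmetry
assert dxz <= dxy + dyz + 1e-12              # triangle inequality
```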
Note that if \( \left( {X, d}\right) \) is a metric space and \( Y \subseteq X \), then \( d \) restricted to \( Y \) (in fact to \( Y \times Y \) ) is itself a metric. Thus we can think of a subset of a metric space as a metric space itself and call it a subspace of \( X \) . Let \( \left( {X, d}\right) \) be a metric space, \( x \in X \), and \( r > 0 \) . We put
\[
B\left( {x, r}\right) = \{ y \in X : d\left( {x, y}\right) < r\}
\]
and call it the open ball with center \( x \) and radius \( r \) . The set
\[
\{ y \in X : d\left( {x, y}\right) \leq r\}
\]
will be called the closed ball with center \( x \) and radius \( r \) . Let \( \mathcal{T} \) be the set of all subsets \( U \) of \( X \) such that \( U \) is the union of a family (empty or otherwise) of open balls in \( X \) . Thus, \( U \in \mathcal{T} \) if and only if for every \( x \) in \( U \) , there exists an \( r > 0 \) such that \( B\left( {x, r}\right) \subseteq U \) . Clearly,
(i) \( \varnothing, X \in \mathcal{T} \) ,
(ii) \( \mathcal{T} \) is closed under arbitrary unions, i.e., for all \( \left\{ {{U}_{i} : i \in I}\right\} \subseteq \mathcal{T} \) , \( \mathop{\bigcup }\limits_{i}{U}_{i} \in \mathcal{T} \), and
(iii) \( \mathcal{T} \) is closed under finite intersections.
To see (iii), take two open balls \( B\left( {x, r}\right) \) and \( B\left( {y, s}\right) \) in \( X \) . Let \( z \in \) \( B\left( {x, r}\right) \cap B\left( {y, s}\right) \) . Take any \( t \) such that \( 0 < t < \min \{ r - d\left( {x, z}\right), s - d\left( {y, z}\right) \} \) . By the triangle inequality we see that
\[
z \in B\left( {z, t}\right) \subseteq B\left( {x, r}\right) \bigcap B\left( {y, s}\right) .
\]
It follows that the intersection of any two open balls is in \( \mathcal{T} \) . It is quite easy to see now that \( \mathcal{T} \) is closed under finite intersections.
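The choice of \( t \) in this argument can be illustrated numerically in the plane; the two balls and the point \( z \) below are arbitrary:

```python
import math
import random

# Illustration (points invented) of the step above in the plane: for z in
# B(x, r) ∩ B(y, s), any t with 0 < t < min{r - d(x,z), s - d(y,z)} gives
# B(z, t) ⊆ B(x, r) ∩ B(y, s); checked here on random sample points.
random.seed(1)
d = lambda p, q: math.dist(p, q)

x, r = (0.0, 0.0), 2.0
y, s = (1.5, 0.0), 1.0
z = (1.0, 0.2)
assert d(x, z) < r and d(y, z) < s          # z lies in both balls

t = 0.5 * min(r - d(x, z), s - d(y, z))
for _ in range(1000):
    ang = random.uniform(0, 2 * math.pi)
    rad = random.uniform(0, t)
    p = (z[0] + rad * math.cos(ang), z[1] + rad * math.sin(ang))
    assert d(x, p) < r and d(y, p) < s      # B(z, t) is inside both
```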
Any family \( \mathcal{T} \) of subsets of a set \( X \) satisfying (i),(ii), and (iii) is called a topology on \( X \) ; the set \( X \) itself will be called a topological space. Sets in \( \mathcal{T} \) are called open. The family \( \mathcal{T} \) described above is called the topology induced by or the topology compatible with \( d \) . Most of the results on metric spaces that we need depend only on the topologies induced by their metrics. A topological space whose topology is induced by a metric is called a metrizable space. Note that the topology induced by the discrete metric on a set \( X \) (2.1.3) consists of all subsets of \( X \) . We call this topology the discrete topology on \( X \) .
Exercise 2.1.5 Show that both the metrics \( {d}_{1} \) and \( {d}_{2} \) on \( {\mathbb{R}}^{n} \) defined in 2.1.1 induce the same topology. This topology is called the usual topology.
Another such example is obtained as follows. Let \( d \) be a metric on \( X \) and
\[
\rho \left( {x, y}\right) = \min \{ d\left( {x, y}\right) ,1\} ,\;x, y \in X.
\]
Then both \( d \) and \( \rho \) induce the same topology on \( X \) . These examples show that a topology may be induced by more than one metric. Two metrics \( d \) and \( \rho \) on a set are called equivalent if they induce the same topology.
Exercise 2.1.6 Show that two metrics \( d \) and \( \rho \) on a set \( X \) are equivalent if and only if for every sequence \( \left( {x}_{n}\right) \) in \( X \) and every \( x \in X \) ,
\[
d\left( {{x}_{n}, x}\right) \rightarrow 0 \Leftrightarrow \rho \left( {{x}_{n}, x}\right) \rightarrow 0.
\]
Exercise 2.1.7 (i) Show that the intersection of any family of topologies on a set \( X \) is a topology.
(ii) Let \( \mathcal{G} \subseteq \mathcal{P}\left( X\right) \) . Show that there is a topology \( \mathcal{T} \) on \( X \) containing \( \mathcal{G} \) such that if \( {\mathcal{T}}^{\prime } \) is any topology containing \( \mathcal{G} \), then \( \mathcal{T} \subseteq {\mathcal{T}}^{\prime } \) .
If \( \mathcal{G} \) and \( \mathcal{T} \) are as in (ii), then we say that \( \mathcal{G} \) generates \( \mathcal{T} \) or that \( \mathcal{G} \) is a subbase for \( \mathcal{T} \) . A base for a topology \( \mathcal{T} \) on \( X \) is a family \( \mathcal{B} \) of sets in \( \mathcal{T} \) such that every \( U \in \mathcal{T} \) is a union of elements in \( \mathcal{B} \) . It is easy to check that if \( \mathcal{G} \) is a subbase for a topology \( \mathcal{T} \), then \( {\mathcal{G}}_{d} \), the family of finite intersections of elements of \( \mathcal{G} \), is a base for \( \mathcal{T} \) . The set of all open balls of a metric space \( \left( {X, d}\right) \) is a base for the topology on \( X \) induced by \( d \) . For any \( X,\{ \{ x\} : x \in X\} \) is a base for the discrete topology on \( X \) . A topological space \( X \) is called second countable if it has a countable base.
Exercise 2.1.8 Let \( \left( {X,\mathcal{T}}\right) \) have a countable subbase. Show that it is second countable.
A set \( D \subseteq X \) is called dense in \( X \) if \( U \cap D \neq \varnothing \) for every nonempty open set \( U \), or equivalently, \( D \) intersects every nonempty open set in some fixed base \( \mathcal{B} \) . The set of rationals \( \mathbb{Q} \) is dense in \( \mathbb{R} \), and \( {\mathbb{Q}}^{n} \) is dense in \( {\mathbb{R}}^{n} \) . A topological space \( X \) is called separable if it has a countable dense set. Let \( X \) be second countable and \( \left\{ {{U}_{n} : n \in \mathbb{N}}\right\} \) a countable base with all \( {U}_{n} \) ’s nonempty. Choose \( {x}_{n} \in {U}_{n} \) . Clearly, \( \left\{ {{x}_{n} : n \in \mathbb{N}}\right\} \) is dense. On the other hand, let \( \left( {X, d}\right) \) be a separable metric space and \( \left\{ {{x}_{n} : n \in \mathbb{N}}\right\} \) a countable dense set in \( X \) . Then
\[
\mathcal{B} = \left\{ {B\left( {{x}_{n}, r}\right) : r \in \mathbb{Q}, r > 0\& n \in \mathbb{N}}\right\}
\]
is a countable base for \( X \) . We have proved the following proposition.
Proposition 2.1.9 A metrizable space is separable if and only if it is second countable.
A subspace of a second countable space is clearly second countable. It follows that a subspace of a separable metric space is separable.
A subset \( F \) of a topological space \( X \) is called closed if \( X \smallsetminus F \) is open. For any \( A \subseteq X,\operatorname{cl}\left( A\right) \) will denote the intersection of all closed sets containing \( A \) . Thus \( \operatorname{cl}\left( A\right) \) is the smallest closed set containing \( A \) and is called the closure of \( A \) . Note that \( D \subseteq X \) is dense if and only if \( \operatorname{cl}\left( D\right) = X \) . The largest open set contained in \( A \), denoted by \( \operatorname{int}\left( A\right) \), will be called the interior of \( A \) . A set \( A \) such that \( x \in \operatorname{int}\left( A\right) \) is called a neighborhood of \( x \) .
Exercise 2.1.10 For any \( A \subseteq X, X \) a topological space, show that
\[
X \smallsetminus \operatorname{cl}\left( A\right) = \operatorname{int}\left( {X \smallsetminus A}\right) .
\]
Let \( \left( {X, d}\right) \) be a metric space, \( \left( {x}_{n}\right) \) a sequence in \( X \), and \( x \in X \) . We say that \( \left( {x}_{n}\right) \) converges to \( x \), written \( {x}_{n} \rightarrow x \) or \( \lim {x}_{n} = x \), if \( d\left( {{x}_{n}, x}\right) \rightarrow 0 \) as \( n \rightarrow \infty \) . Such an \( x \) is called the limit of \( \left( {x}_{n}\right) \) . Note that a sequence can have at most one limit. Let \( x \in X \) . We call \( x \) an accumulation point of \( A \subseteq X \) if every neighborhood of \( x \) contains a point of \( A \) other than \( x \) . Note that \( x \) is an accumulation point of \( A \) if and only if there is a sequence \( \left( {x}_{n}\right) \) of distinct elements in \( A \) converging to \( x \) . The set of all accumulation points of \( A \) is called the derived set, or simply the derivative, of \( A \) . It will be denoted by \( {A}^{\prime } \) . The elements of \( A \smallsetminus {A}^{\prime } \) are called the isolated points of \( A \) . So, \( x \) is an isolated point of \( A \) if and only if there is an open set \( U \) such that \( A \cap U = \{ x\} \) . A set \( A \subseteq X \) is called dense-in-itself if it is nonempty and has no isolated point.
Exercise 2.1.11 Let \( A \subseteq X, X \) metrizable. Show the following.
(i) The set \( A \) is closed if and only if the limit of any sequence in \( A \) belongs to \( A \) .
(ii) The set \( A \) is open if and only if for any sequence \( \left( {x}_{n}\right) \) converging to a point in \( A \), there exists an integer \( M \geq 0 \) such that \( {x}_{n} \in A \) for all \( n \geq M \) .
(iii) \( \operatorname{cl}\left( A\right) = A\bigcup {A}^{\prime } \) .
To show that \( d \) is a metric on \( {\mathbb{R}}^{\mathbb{N}} \), we need to verify the following properties:
1. **Non-negativity**: \( d(x, y) \geq 0 \) for all \( x, y \in {\mathbb{R}}^{\mathbb{N}} \).
2. **Identity of indiscernibles**: \( d(x, y) = 0 \) if and only if \( x = y \).
3. **Symmetry**: \( d(x, y) = d(y, x) \) for all \( x, y \in {\mathbb{R}}^{\mathbb{N}} \).
4. **Triangle inequality**: \( d(x, z) \leq d(x, y) + d(y, z) \) for all \( x, y, z \in {\mathbb{R}}^{\mathbb{N}} \).
**Non-negativity**:
Since \( \min \{ |x_n - y_n|, 1 \} \geq 0 \) for all \( n \), and the series is the sum of non-negative terms, it follows that \( d(x, y) \geq 0 \).
**Identity of indiscernibles**:
If \( x = y \), then \( x_n = y_n \) for all \( n \), so \( |x_n - y_n| = 0 \) and thus \( \min \{ |x_n - y_n|, 1 \} = 0 \) for all \( n \). Therefore, \( d(x, y) = \sum_{n} \frac{1}{2^{n+1}} \cdot 0 = 0 \).
Conversely, if \( d(x, y) = 0 \), then \(\sum_{n} \frac{1}{2^{n+1}} \min \{ |x_n - y_n|, 1 \} = 0\). Since all terms in the series are non-negative and the series sums to zero, it must be that each term is zero: \(\frac{1}{2^{n+1}} \min \{ |x_n - y_n|, 1 \} = 0\) for all \( n \). This implies \(\min \{ |x_n - y_n|, 1 \} = 0\) for all \( n \), which in turn implies \( |x_n - y_n| = 0 \) for all \( n \), hence \( x_n = y_n \) for all \( n \) and thus \( x = y \).
**Symmetry**:
Since \( |x_n - y_n| = |y_n - x_n| \) for all \( n \), every term of the series is unchanged when \( x \) and \( y \) are interchanged. Hence \( d(x, y) = d(y, x) \).
**Triangle inequality**:
For real numbers \( a, b \geq 0 \) we have \( \min \{ a + b, 1 \} \leq \min \{ a, 1 \} + \min \{ b, 1 \} \): if \( a + b \leq 1 \) both sides equal \( a + b \), while if \( a + b > 1 \) the left-hand side is \( 1 \) and the right-hand side is at least \( 1 \). Since \( t \mapsto \min \{ t, 1 \} \) is nondecreasing, for each \( n \),
\[ \min \{ |x_n - z_n|, 1 \} \leq \min \{ |x_n - y_n| + |y_n - z_n|, 1 \} \leq \min \{ |x_n - y_n|, 1 \} + \min \{ |y_n - z_n|, 1 \}. \]
Multiplying by \( \frac{1}{2^{n+1}} \) and summing over \( n \) yields \( d(x, z) \leq d(x, y) + d(y, z) \). Thus \( d \) is a metric on \( {\mathbb{R}}^{\mathbb{N}} \).
Corollary 3.3.4. A point \( x \in X \) belongs to the Shilov boundary of \( A \) if and only if given any open neighbourhood \( U \) of \( x \), there exists \( f \in A \) such that
\[
{\begin{Vmatrix}{\left. f\right| }_{X \smallsetminus U}\end{Vmatrix}}_{\infty } < {\begin{Vmatrix}{\left. f\right| }_{U}\end{Vmatrix}}_{\infty }.
\]
Proof. First, let \( x \in X \smallsetminus \partial \left( A\right) \) . Then \( U = X \smallsetminus \partial \left( A\right) \) is an open neighbourhood of \( x \) and because \( \partial \left( A\right) \) is a boundary, we have for all \( f \in A \) ,
\[
{\begin{Vmatrix}{\left. f\right| }_{U}\end{Vmatrix}}_{\infty } \leq \parallel f{\parallel }_{\infty } = {\begin{Vmatrix}{\left. f\right| }_{\partial \left( A\right) }\end{Vmatrix}}_{\infty } = {\begin{Vmatrix}{\left. f\right| }_{X \smallsetminus U}\end{Vmatrix}}_{\infty }.
\]
Conversely, let \( x \in \partial \left( A\right) \) and suppose there exists an open neighbourhood \( U \) of \( x \) such that
\[
{\begin{Vmatrix}{\left. f\right| }_{U}\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{\left. f\right| }_{X \smallsetminus U}\end{Vmatrix}}_{\infty }
\]
for all \( f \in A \) . Then \( X \smallsetminus U \) is a boundary for \( A \), so that \( \partial \left( A\right) \subseteq X \smallsetminus U \) . This contradicts \( x \in \partial \left( A\right) \) .
We now examine a number of examples.
Example 3.3.5. (1) Let \( X \) be a locally compact Hausdorff space and let \( A \) be a subalgebra of \( {C}_{0}\left( X\right) \) with the property that given any closed subset \( E \) of \( X \) and \( x \in X \smallsetminus E \), there exists \( f \in A \) such that \( f\left( x\right) \neq 0 \) and \( {\left. f\right| }_{E} = 0 \) . It is obvious that then \( \partial \left( A\right) = X \) . Hence, in particular, \( \partial \left( {{C}_{0}\left( X\right) }\right) = X \) .
(2) Let \( X \) be a compact subset of \( \mathbb{C} \) . We claim that \( \partial \left( {R\left( X\right) }\right) \) coincides with \( \partial \left( X\right) \), the topological boundary of \( X \) . Notice first that since each \( f \in R\left( X\right) \) is holomorphic on the interior \( {X}^{ \circ } \) of \( X \), it follows from the maximum modulus principle that \( \partial \left( X\right) = X \smallsetminus {X}^{ \circ } \) is a boundary for \( R\left( X\right) \) .
To see that conversely \( \partial \left( {R\left( X\right) }\right) \) contains \( \partial \left( X\right) \), we have to verify that every point in \( \partial \left( X\right) \) fulfills the condition in Corollary 3.3.4. Thus, let \( {z}_{0} \in \partial \left( X\right) \), and let \( U \) be an open neighbourhood of \( {z}_{0} \) in \( X \) . Choose an open disc \( V \) of radius \( r > 0 \) around \( {z}_{0} \) so that \( V \cap X \subseteq U \) and pick \( {z}_{1} \in V \smallsetminus X \) with \( \left| {{z}_{1} - {z}_{0}}\right| < r/2 \) . Let \( f \in R\left( X\right) \) be the function defined by
\[
f\left( z\right) = \frac{1}{z - {z}_{1}},\;z \in X.
\]
Then, for \( z \in X \smallsetminus U \), \( \left| {z - {z}_{0}}\right| \geq r \) and hence
\[
\left| {z - {z}_{1}}\right| \geq \left| {z - {z}_{0}}\right| - \left| {{z}_{1} - {z}_{0}}\right| > \frac{r}{2}.
\]
It follows that \( {\begin{Vmatrix}{\left. f\right| }_{X \smallsetminus U}\end{Vmatrix}}_{\infty } \leq 2/r \) . On the other hand,
\[
{\begin{Vmatrix}{\left. f\right| }_{U}\end{Vmatrix}}_{\infty } \geq \frac{1}{\left| {{z}_{1} - {z}_{0}}\right| } > \frac{2}{r}.
\]
So \( {z}_{0} \) satisfies the hypothesis in Corollary 3.3.4.
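A numerical sketch of this estimate (the concrete data \( X = \) closed unit disc, \( z_0 = 1 \), \( r = 1/2 \), \( z_1 = 1.2 \) are my illustrative choices, not from the text): on a grid sample of \( X \), the sup of \( |f| \) over \( U \) strictly exceeds the sup over \( X \smallsetminus U \), as Corollary 3.3.4 requires.

```python
# Numerical sketch of the argument in Example 3.3.5(2); illustrative data only.
z0, r = 1 + 0j, 0.5
z1 = 1.2 + 0j                    # z1 outside X with |z1 - z0| = 0.2 < r/2

def f(z):
    return 1 / (z - z1)          # f in R(X); its pole lies outside X

# grid sample of the closed unit disc X
pts = [complex(a, b) / 20 for a in range(-20, 21) for b in range(-20, 21)
       if a * a + b * b <= 400]
U = [z for z in pts if abs(z - z0) < r]        # neighbourhood U of z0 in X
rest = [z for z in pts if abs(z - z0) >= r]    # X \ U
sup_U = max(abs(f(z)) for z in U)              # >= 1/|z1 - z0| = 5
sup_rest = max(abs(f(z)) for z in rest)        # <= 2/r = 4
```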
(3) Continue to let \( X \) be a compact subset of \( \mathbb{C} \) . Then \( \partial \left( {P\left( X\right) }\right) \) equals the topological boundary of the unbounded component of \( \mathbb{C} \smallsetminus X \) . To show this, assume first that \( X \) is polynomially convex. Then \( \mathbb{C} \smallsetminus X \) is connected (Theorem 2.3.7) and \( P\left( X\right) = R\left( X\right) \) by Theorem 2.5.8. Therefore, example (2) yields \( \partial \left( {P\left( X\right) }\right) = \partial \left( {R\left( X\right) }\right) = \partial \left( X\right) = \partial \left( {\mathbb{C} \smallsetminus X}\right) \) .
Now, for arbitrary \( X, P\left( X\right) \) is isometrically isomorphic to \( P\left( {\widehat{X}}_{p}\right) \) (Theorem 2.5.7). Hence every boundary for \( P\left( X\right) \) is a boundary for \( P\left( {\widehat{X}}_{p}\right) \) . By the preceding paragraph we obtain
\[
\partial \left( {P\left( X\right) }\right) = \partial \left( {P\left( {\widehat{X}}_{p}\right) }\right) = \partial \left( {\mathbb{C} \smallsetminus {\widehat{X}}_{p}}\right) .
\]
However, \( \mathbb{C} \smallsetminus {\widehat{X}}_{p} \) coincides with the unbounded component \( C \) of \( \mathbb{C} \smallsetminus X \) . Indeed, \( \mathbb{C} \smallsetminus C \) is a compact subset of \( \mathbb{C} \) with connected complement and hence is polynomially convex. As \( X \subseteq \mathbb{C} \smallsetminus C \subseteq {\widehat{X}}_{p} \), we get \( \mathbb{C} \smallsetminus C = {\widehat{X}}_{p} \) .
(4) The description of \( \partial \left( {P\left( X\right) }\right) \) in (3) does not remain true for compact subsets \( X \) of \( {\mathbb{C}}^{n} \) when \( n \geq 2 \) . To demonstrate this we show that \( \partial \left( {P\left( {\mathbb{D}}^{n}\right) }\right) = \) \( {\mathbb{T}}^{n} \) for \( n \geq 2 \) . First, let \( w = \left( {{e}^{i{t}_{1}},\ldots ,{e}^{i{t}_{n}}}\right) \in {\mathbb{T}}^{n},{t}_{1},\ldots ,{t}_{n} \in \mathbb{R} \) . Then the polynomial function \( f \) defined by
\[
f\left( {{z}_{1},\ldots ,{z}_{n}}\right) = \frac{1}{{2}^{n}}\mathop{\prod }\limits_{{j = 1}}^{n}\left( {1 + {z}_{j}{e}^{-i{t}_{j}}}\right)
\]
satisfies \( f\left( w\right) = 1 \) and \( \left| {f\left( z\right) }\right| < 1 \) for all \( z \in {\mathbb{D}}^{n}, z \neq w \) . This proves \( {\mathbb{T}}^{n} \subseteq \) \( \partial \left( {P\left( {\mathbb{D}}^{n}\right) }\right) \) . It remains to verify that \( {\mathbb{T}}^{n} \) is a boundary for \( P\left( {\mathbb{D}}^{n}\right) \) . To see this, let \( f \in P\left( {\mathbb{D}}^{n}\right) \) and \( z = \left( {{z}_{1},\ldots ,{z}_{n}}\right) \in {\mathbb{D}}^{n} \) such that \( \parallel f{\parallel }_{\infty } = \left| {f\left( z\right) }\right| \) . Then the function
\[
w \rightarrow f\left( {w,{z}_{2},\ldots ,{z}_{n}}\right)
\]
\( w \in \mathbb{D} \), belongs to \( P\left( \mathbb{D}\right) \), and hence
\[
\left| {f\left( {{z}_{1},\ldots ,{z}_{n}}\right) }\right| \leq \left| {f\left( {{e}^{i{t}_{1}},{z}_{2},\ldots ,{z}_{n}}\right) }\right|
\]
for some \( {t}_{1} \in \mathbb{R} \) . Next, the function
\[
w \rightarrow f\left( {{e}^{i{t}_{1}}, w,{z}_{3},\ldots ,{z}_{n}}\right)
\]
is in \( P\left( \mathbb{D}\right) \) . As before, it follows that, for some \( {t}_{2} \in \mathbb{R} \) ,
\[
\left| {f\left( {{e}^{i{t}_{1}},{z}_{2},\ldots ,{z}_{n}}\right) }\right| \leq \left| {f\left( {{e}^{i{t}_{1}},{e}^{i{t}_{2}},{z}_{3},\ldots ,{z}_{n}}\right) }\right| .
\]
Continuing in this manner, we find \( {t}_{1},\ldots ,{t}_{n} \in \mathbb{R} \) such that
\[
\left| {f\left( {{z}_{1},\ldots ,{z}_{n}}\right) }\right| \leq \left| {f\left( {{e}^{i{t}_{1}},\ldots ,{e}^{i{t}_{n}}}\right) }\right| .
\]
This shows \( \parallel f{\parallel }_{\infty } = {\begin{Vmatrix}{\left. f\right| }_{{\mathbb{T}}^{n}}\end{Vmatrix}}_{\infty } \) . Thus \( {\mathbb{T}}^{n} \) is a boundary for \( P\left( {\mathbb{D}}^{n}\right) \) and, since \( {\mathbb{T}}^{n} \subseteq \partial \left( {P\left( {\mathbb{D}}^{n}\right) }\right) \), we get that \( {\mathbb{T}}^{n} = \partial \left( {P\left( {\mathbb{D}}^{n}\right) }\right) \) . However, \( {\mathbb{C}}^{n} \smallsetminus {\mathbb{D}}^{n} \) is connected and
\[
\partial \left( {{\mathbb{C}}^{n} \smallsetminus {\mathbb{D}}^{n}}\right) = \left\{ {z \in {\mathbb{D}}^{n} : {z}_{j} \in \mathbb{T}\text{ for at least one }j}\right\}
\]
does not equal \( {\mathbb{T}}^{n} \) when \( n \geq 2 \) .
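For \( n = 2 \) and \( w = (1,1) \) (so \( t_1 = t_2 = 0 \)) the peaking behaviour of \( f \) can be sampled numerically; the random sampling scheme below is mine, with radii capped at \( 0.99 \) so that \( |f| \leq (1.99/2)^2 < 1 \) off the peak.

```python
import cmath
import random

random.seed(0)

def f(z1, z2):
    # the polynomial of Example 3.3.5(4) with n = 2 and t1 = t2 = 0
    return (1 + z1) * (1 + z2) / 4

peak = abs(f(1, 1))              # = 1, attained at w = (1, 1) in T^2
samples = [abs(f(*[random.uniform(0, 0.99)
                   * cmath.exp(2j * cmath.pi * random.random())
                   for _ in range(2)]))
           for _ in range(500)]  # random points of the bidisc off the peak
```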
We now introduce the notion of a boundary for an arbitrary commutative Banach algebra.
Definition 3.3.6. Let \( A \) be a commutative Banach algebra and \( \Gamma : A \rightarrow \) \( {C}_{0}\left( {\Delta \left( A\right) }\right) \) the Gelfand representation of \( A \) . A subset \( R \) of \( \Delta \left( A\right) \) is called a boundary for \( A \) if \( R \) is a boundary for \( \Gamma \left( A\right) \), the range of the Gelfand homomorphism. In particular, \( \partial \left( {\Gamma \left( A\right) }\right) \) is called the Shilov boundary of \( A \) and denoted \( \partial \left( A\right) \) .
Let \( X \) be a locally compact Hausdorff space and \( A \) a closed subalgebra of \( {C}_{0}\left( X\right) \) . Then, according to Definitions 3.3.1 and 3.3.6, we have to distinguish between boundaries for the family \( A \) of functions on \( X \) and boundaries for the commutative Banach algebra \( A \), the latter being the boundaries of \( \Gamma \left( A\right) \subseteq {C}_{0}\left( {\Delta \left( A\right) }\right) \) . However, as explained in the following remark, the two Shilov boundaries are canonically homeomorphic provided that \( A \) satisfies some natural conditions.
Remark 3.3.7. Suppose that \( A \) strongly separates the points of \( X \) . Then the mapping \( \phi : x \rightarrow {\varphi }_{x} \), where \( {\varphi }_{x}\left( f\right) = f\left( x\right) \) for \( f \in A \), is a homeomorphism from \( X \) onto \( \phi \left( X\right) \subseteq \Delta \left( A\right) \) because, by Proposition 2.2.14, \( X \) carries the weak topology defined by the functions \( f \in A \) . Moreover, for every subset \( Y \) of \( X \) ,
\[
{\begin{Vmatrix}{\left. f\right| }_{Y}\end{Vmatrix}}_{\infty } = \mathop{\sup }\limits_{{y \in Y}}\left| {f\left( y\right) }\right| = \mathop{\sup }\limits_{{y \in Y}}\left| {{\varphi }_{y}\left( f\right) }\right| = {\begin{Vmatrix}{\left. \widehat{f}\right| }_{\phi \left( Y\right) }\end{Vmatrix}}_{\infty }.
\]
Therefore every boundary for \( A \), viewed as a family of functions on \( X \), is carried by \( \phi \) to a boundary for \( \Gamma \left( A\right) \) .
Theorem 2.2.1 (The division algorithm). Let \( S = K\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) denote the polynomial ring in \( n \) variables over a field \( K \) and fix a monomial order \( < \) on \( S \) . Let \( {g}_{1},{g}_{2},\ldots ,{g}_{s} \) be nonzero polynomials of \( S \) . Then, given a polynomial \( 0 \neq f \in S \), there exist polynomials \( {f}_{1},{f}_{2},\ldots ,{f}_{s} \) and \( {f}^{\prime } \) of \( S \) with
\[
f = {f}_{1}{g}_{1} + {f}_{2}{g}_{2} + \cdots + {f}_{s}{g}_{s} + {f}^{\prime },
\]
(2.2)
such that the following conditions are satisfied:
(i) if \( {f}^{\prime } \neq 0 \) and if \( u \in \operatorname{supp}\left( {f}^{\prime }\right) \), then none of the initial monomials \( {\operatorname{in}}_{ < }\left( {g}_{1}\right) ,{\operatorname{in}}_{ < }\left( {g}_{2}\right) ,\ldots ,{\operatorname{in}}_{ < }\left( {g}_{s}\right) \) divides \( u \), i.e. no monomial \( u \in \operatorname{supp}\left( {f}^{\prime }\right) \) belongs to \( \left( {{\operatorname{in}}_{ < }\left( {g}_{1}\right) ,{\operatorname{in}}_{ < }\left( {g}_{2}\right) ,\ldots ,{\operatorname{in}}_{ < }\left( {g}_{s}\right) }\right) \) ;
(ii) if \( {f}_{i} \neq 0 \), then
\[
{\operatorname{in}}_{ < }\left( f\right) \geq {\operatorname{in}}_{ < }\left( {{f}_{i}{g}_{i}}\right)
\]
The right-hand side of equation (2.2) is said to be a standard expression for \( f \) with respect to \( {g}_{1},{g}_{2},\ldots ,{g}_{s} \), and the polynomial \( {f}^{\prime } \) is said to be a remainder of \( f \) with respect to \( {g}_{1},{g}_{2},\ldots ,{g}_{s} \) . One also says that \( f \) reduces to \( {f}^{\prime } \) with respect to \( {g}_{1},\ldots ,{g}_{s} \) .
Proof (of Theorem 2.2.1). Let \( I = \left( {{\operatorname{in}}_{ < }\left( {g}_{1}\right) ,\ldots ,{\operatorname{in}}_{ < }\left( {g}_{s}\right) }\right) \) . If none of the monomials \( u \in \operatorname{supp}\left( f\right) \) belongs to \( I \), then the desired expression can be obtained by setting \( {f}^{\prime } = f \) and \( {f}_{1} = \cdots = {f}_{s} = 0 \) .
Now, suppose that a monomial \( u \in \operatorname{supp}\left( f\right) \) belongs to \( I \) and write \( {u}_{0} \) for the monomial which is biggest with respect to \( < \) among the monomials \( u \in \operatorname{supp}\left( f\right) \) belonging to \( I \) . Let, say, \( {\operatorname{in}}_{ < }\left( {g}_{{i}_{0}}\right) \) divide \( {u}_{0} \) and set \( {w}_{0} = {u}_{0}/{\operatorname{in}}_{ < }\left( {g}_{{i}_{0}}\right) \) . We rewrite
\[
f = {c}_{0}^{\prime }{c}_{{i}_{0}}^{-1}{w}_{0}{g}_{{i}_{0}} + {h}_{1}
\]
where \( {c}_{0}^{\prime } \) is the coefficient of \( {u}_{0} \) in \( f \) and \( {c}_{{i}_{0}} \) is that of \( {\operatorname{in}}_{ < }\left( {g}_{{i}_{0}}\right) \) in \( {g}_{{i}_{0}} \) . One has
\[
{\operatorname{in}}_{ < }\left( {{w}_{0}{g}_{{i}_{0}}}\right) = {w}_{0}{\operatorname{in}}_{ < }\left( {g}_{{i}_{0}}\right) = {u}_{0} \leq {\operatorname{in}}_{ < }\left( f\right) .
\]
If either \( {h}_{1} = 0 \) or, in case of \( {h}_{1} \neq 0 \), none of the monomials \( u \in \operatorname{supp}\left( {h}_{1}\right) \) belongs to \( I \), then \( f = {c}_{0}^{\prime }{c}_{{i}_{0}}^{-1}{w}_{0}{g}_{{i}_{0}} + {h}_{1} \) is a standard expression of \( f \) with respect to \( {g}_{1},{g}_{2},\ldots ,{g}_{s} \) and \( {h}_{1} \) is a remainder of \( f \) .
If a monomial of \( \operatorname{supp}\left( {h}_{1}\right) \) belongs to \( I \) and if \( {u}_{1} \) is the monomial which is biggest with respect to \( < \) among the monomials \( u \in \operatorname{supp}\left( {h}_{1}\right) \) belonging to \( I \), then one has
\[
{u}_{0} > {u}_{1}
\]
In fact, if a monomial \( u \) with \( u > {u}_{0}\left( { = {\operatorname{in}}_{ < }\left( {{w}_{0}{g}_{{i}_{0}}}\right) }\right) \) belongs to \( \operatorname{supp}\left( {h}_{1}\right) \) , then \( u \) must belong to \( \operatorname{supp}\left( f\right) \) . This is impossible. Moreover, \( {u}_{0} \) itself cannot belong to \( \operatorname{supp}\left( {h}_{1}\right) \) .
Let, say, \( {\operatorname{in}}_{ < }\left( {g}_{{i}_{1}}\right) \) divide \( {u}_{1} \) and set \( {w}_{1} = {u}_{1}/{\operatorname{in}}_{ < }\left( {g}_{{i}_{1}}\right) \) . Again, we rewrite
\[
f = {c}_{0}^{\prime }{c}_{{i}_{0}}^{-1}{w}_{0}{g}_{{i}_{0}} + {c}_{1}^{\prime }{c}_{{i}_{1}}^{-1}{w}_{1}{g}_{{i}_{1}} + {h}_{2},
\]
where \( {c}_{1}^{\prime } \) is the coefficient of \( {u}_{1} \) in \( {h}_{1} \) and \( {c}_{{i}_{1}} \) is that of \( {\operatorname{in}}_{ < }\left( {g}_{{i}_{1}}\right) \) in \( {g}_{{i}_{1}} \) . One has
\[
{\operatorname{in}}_{ < }\left( {{w}_{1}{g}_{{i}_{1}}}\right) < {\operatorname{in}}_{ < }\left( {{w}_{0}{g}_{{i}_{0}}}\right) \leq {\operatorname{in}}_{ < }\left( f\right) .
\]
Continuing these procedures yields the descending sequence
\[
{u}_{0} > {u}_{1} > {u}_{2} > \cdots
\]
Lemma 2.1.7 thus guarantees that these procedures will stop after a finite number of steps, say \( N \) steps, and we obtain an expression
\[
f = \mathop{\sum }\limits_{{q = 0}}^{{N - 1}}{c}_{q}^{\prime }{c}_{{i}_{q}}^{-1}{w}_{q}{g}_{{i}_{q}} + {h}_{N}
\]
where either \( {h}_{N} = 0 \) or, in case \( {h}_{N} \neq 0 \), none of the monomials \( u \in \operatorname{supp}\left( {h}_{N}\right) \) belongs to \( I \), and where
\[
{\operatorname{in}}_{ < }\left( {{w}_{q}{g}_{{i}_{q}}}\right) < \cdots < {\operatorname{in}}_{ < }\left( {{w}_{0}{g}_{{i}_{0}}}\right) \leq {\operatorname{in}}_{ < }\left( f\right) .
\]
Thus, by letting \( \mathop{\sum }\limits_{{i = 1}}^{s}{f}_{i}{g}_{i} = \mathop{\sum }\limits_{{q = 0}}^{{N - 1}}{c}_{q}^{\prime }{c}_{{i}_{q}}^{-1}{w}_{q}{g}_{{i}_{q}} \) and \( {f}^{\prime } = {h}_{N} \), we obtain an expression \( f = \mathop{\sum }\limits_{{i = 1}}^{s}{f}_{i}{g}_{i} + {f}^{\prime } \) satisfying the conditions (i) and (ii), as desired.
Example 2.2.2. Let \( { < }_{\text{lex }} \) denote the lexicographic order on \( S = K\left\lbrack {x, y, z}\right\rbrack \) induced by \( x > y > z \) . Let \( {g}_{1} = {x}^{2} - z,{g}_{2} = {xy} - 1 \) and \( f = {x}^{3} - {x}^{2}y - {x}^{2} - 1 \) . Each of
\[
f = {x}^{3} - {x}^{2}y - {x}^{2} - 1 = x\left( {{g}_{1} + z}\right) - {x}^{2}y - {x}^{2} - 1
\]
\[
= x{g}_{1} - {x}^{2}y - {x}^{2} + {xz} - 1 = x{g}_{1} - \left( {{g}_{1} + z}\right) y - {x}^{2} + {xz} - 1
\]
\[
= x{g}_{1} - y{g}_{1} - {x}^{2} + {xz} - {yz} - 1 = x{g}_{1} - y{g}_{1} - \left( {{g}_{1} + z}\right) + {xz} - {yz} - 1
\]
\[
= \left( {x - y - 1}\right) {g}_{1} + \left( {{xz} - {yz} - z - 1}\right)
\]
and
\[
f = {x}^{3} - {x}^{2}y - {x}^{2} - 1 = x\left( {{g}_{1} + z}\right) - {x}^{2}y - {x}^{2} - 1
\]
\[
= x{g}_{1} - {x}^{2}y - {x}^{2} + {xz} - 1 = x{g}_{1} - x\left( {{g}_{2} + 1}\right) - {x}^{2} + {xz} - 1
\]
\[
= x{g}_{1} - x{g}_{2} - {x}^{2} + {xz} - x - 1 = x{g}_{1} - x{g}_{2} - \left( {{g}_{1} + z}\right) + {xz} - x - 1
\]
\[
= \left( {x - 1}\right) {g}_{1} - x{g}_{2} + \left( {{xz} - x - z - 1}\right)
\]
is a standard expression of \( f \) with respect to \( {g}_{1} \) and \( {g}_{2} \), and each of \( {xz} - {yz} - z - 1 \) and \( {xz} - x - z - 1 \) is a remainder of \( f \) .
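The division procedure in the proof of Theorem 2.2.1 can be sketched in code. The variant below reduces the leading monomial at each step, moving it to the remainder when no \( \operatorname{in}_<(g_i) \) divides it; always choosing the smallest admissible index reproduces the first standard expression of Example 2.2.2. The data structures (exponent tuples, coefficient dicts) and names are mine.

```python
from fractions import Fraction

# Monomials are exponent tuples; lex order with x > y > z is tuple comparison.
# A polynomial is a dict {exponent tuple: nonzero Fraction coefficient}.

def divides(a, b):
    return all(i <= j for i, j in zip(a, b))

def division(f, gs):
    """Return quotients f_1,...,f_s and a remainder f' as in Theorem 2.2.1."""
    h = dict(f)
    quots = [dict() for _ in gs]
    rem = {}
    while h:
        u = max(h)                       # leading monomial of h (lex order)
        c = h.pop(u)
        for q, g in zip(quots, gs):      # smallest index i with in(g_i) | u
            lg = max(g)                  # in_<(g_i)
            if divides(lg, u):
                w = tuple(i - j for i, j in zip(u, lg))
                coeff = c / g[lg]
                q[w] = q.get(w, Fraction(0)) + coeff
                for m, cm in g.items():  # h <- h - coeff * x^w * g_i
                    if m == lg:          # the term at u cancels exactly
                        continue
                    key = tuple(i + j for i, j in zip(w, m))
                    h[key] = h.get(key, Fraction(0)) - coeff * cm
                    if h[key] == 0:
                        del h[key]
                break
        else:                            # u lies outside (in(g_1),...,in(g_s))
            rem[u] = c
    return quots, rem

# Example 2.2.2: f = x^3 - x^2 y - x^2 - 1, g1 = x^2 - z, g2 = xy - 1
f = {(3, 0, 0): Fraction(1), (2, 1, 0): Fraction(-1),
     (2, 0, 0): Fraction(-1), (0, 0, 0): Fraction(-1)}
g1 = {(2, 0, 0): Fraction(1), (0, 0, 1): Fraction(-1)}
g2 = {(1, 1, 0): Fraction(1), (0, 0, 0): Fraction(-1)}
(q1, q2), remainder = division(f, [g1, g2])
# remainder is xz - yz - z - 1, the first remainder of Example 2.2.2
```

Listing \( g_2 \) before \( g_1 \) instead yields the second remainder \( xz - x - z - 1 \), illustrating the non-uniqueness noted below.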
Until the end of the present section, we work with a fixed monomial order \( < \) on \( S = K\left\lbrack {{x}_{1},\ldots ,{x}_{n}}\right\rbrack \) . Example 2.2.2 says that in the division algorithm a remainder of \( f \) is, in general, not unique. However,
Lemma 2.2.3. If \( \mathcal{G} = \left\{ {{g}_{1},\ldots ,{g}_{s}}\right\} \) is a Gröbner basis of \( I = \left( {{g}_{1},\ldots ,{g}_{s}}\right) \) , then for any nonzero polynomial \( f \) of \( S \), there is a unique remainder of \( f \) with respect to \( {g}_{1},\ldots ,{g}_{s} \) .
Proof. Suppose there exist remainders \( {f}^{\prime } \) and \( {f}^{\prime \prime } \) with respect to \( {g}_{1},\ldots ,{g}_{s} \) with \( {f}^{\prime } \neq {f}^{\prime \prime } \) . Since \( 0 \neq {f}^{\prime } - {f}^{\prime \prime } \in I \), the initial monomial \( w = {\operatorname{in}}_{ < }\left( {{f}^{\prime } - {f}^{\prime \prime }}\right) \) must belong to \( {\operatorname{in}}_{ < }\left( I\right) \) . However, since \( w \in \operatorname{supp}\left( {f}^{\prime }\right) \cup \operatorname{supp}\left( {f}^{\prime \prime }\right) \), it follows that none of the monomials \( {\operatorname{in}}_{ < }\left( {g}_{1}\right) ,\ldots ,{\operatorname{in}}_{ < }\left( {g}_{s}\right) \) divides \( w \) . Hence \( {\operatorname{in}}_{ < }\left( I\right) \neq \) \( \left( {{\operatorname{in}}_{ < }\left( {g}_{1}\right) ,\ldots ,{\operatorname{in}}_{ < }\left( {g}_{s}\right) }\right) \) : a contradiction.
Corollary 2.2.4. If \( \mathcal{G} = \left\{ {{g}_{1},\ldots ,{g}_{s}}\right\} \) is a Gröbner basis of \( I = \left( {{g}_{1},\ldots ,{g}_{s}}\right) \) , then a nonzero polynomial \( f \) of \( S \) belongs to \( I \) if and only if the unique remainder of \( f \) with respect to \( {g}_{1},\ldots ,{g}_{s} \) is 0 .
Proof. First, in general, if a remainder of a nonzero polynomial \( f \) of \( S \) with respect to \( {g}_{1},{g}_{2},\ldots ,{g}_{s} \) is 0, then \( f \) belongs to \( I = \left( {{g}_{1},{g}_{2},\ldots ,{g}_{s}}\right) \) .
Second, suppose that a nonzero polynomial \( f \) belongs to \( I \) and \( f = {f}_{1}{g}_{1} + \) \( {f}_{2}{g}_{2} + \cdots + {f}_{s}{g}_{s} + {f}^{\prime } \) is a standard expression of \( f \) with respect to \( {g}_{1},{g}_{2},\ldots ,{g}_{s} \) . Since \( f \in I \), one has \( {f}^{\prime } \in I \) . If \( {f}^{\prime } \neq 0 \), then \( {\operatorname{in}}_{ < }\left( {f}^{\prime }\right) \in {\operatorname{in}}_{ < }\left( I\right) \) . Since \( \mathcal{G} \) is a Gröbner basis of \( I \), it follows that \( {\operatorname{in}}_{ < }\left( I\right) = \left( {{\operatorname{in}}_{ < }\left( {g}_{1}\right) ,{\operatorname{in}}_{ < }\left( {g}_{2}\right) ,\ldots ,{\operatorname{in}}_{ < }\left( {g}_{s}\right) }\right) \) . However, since \( {f}^{\prime } \) is a remainder, none of the monomials \( u \in \operatorname{supp}\left( {f}^{\prime }\right) \) belongs to this ideal; in particular \( {\operatorname{in}}_{ < }\left( {f}^{\prime }\right) \notin {\operatorname{in}}_{ < }\left( I\right) \), a contradiction. Hence \( {f}^{\prime } = 0 \) .
Corollary 3. For some constant \( c = c\left( f\right) \), we have
\[
{\operatorname{ord}}_{p}\mathop{\prod }\limits_{\substack{{\text{ cond }\psi = {p}^{t}} \\ {{n}_{0} \leq t \leq n} }}B\left( {\psi ,\mu }\right) = m{p}^{n} + {\lambda n} + c\left( f\right)
\]
Proof. Since
\[
\mathop{\prod }\limits_{\substack{{\zeta {p}^{n} = 1} \\ {\zeta \neq 1} }}\left( {\zeta - 1}\right) = {p}^{n}
\]
the formula is immediate: the product taken for \( {n}_{0} \leq t \leq n \) differs by only a finite number of factors (depending on \( {n}_{0} \) ) from the product taken over all \( t \), and we can apply Corollary 2 to get the desired order.
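The root-of-unity identity used here can be checked numerically; for odd \( p \) the sign works out since \( \prod_{\zeta \neq 1}(\zeta - 1) = (-1)^{p^n - 1} p^n \) and \( p^n - 1 \) is even. A small sketch (the function name is mine):

```python
import cmath

def root_product(p, n):
    """Numerically compute the product of (zeta - 1) over the
    nontrivial p^n-th roots of unity zeta."""
    N = p ** n
    prod = 1 + 0j
    for k in range(1, N):
        prod *= cmath.exp(2j * cmath.pi * k / N) - 1
    return prod
```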
In the light of Corollary 3, we shall call \( m \) the exponential invariant, and \( \lambda \) the linear invariant.
Let \( f \) be as above, the power series associated with \( {\alpha }_{ * }\mu \), and put
\[
{c}_{r}^{\left( n\right) } = \mathop{\sum }\limits_{\eta }{\mu }_{n + 1}\left( {\eta {\gamma }^{r}{\;\operatorname{mod}\;{p}^{n + 1}}}\right) .
\]
Then
\[
f\left( X\right) \equiv \mathop{\sum }\limits_{{r = 0}}^{{{p}^{n} - 1}}{c}_{r}^{\left( n\right) }{\left( 1 + X\right) }^{r}{\;\operatorname{mod}\;{h}_{n}}
\]
\[
\equiv \mathop{\sum }\limits_{{r = 0}}^{{{p}^{n} - 1}}{a}_{r}^{\left( n\right) }{X}^{r}\;{\;\operatorname{mod}\;{h}_{n}},
\]
where the coefficients \( {a}_{r}^{\left( n\right) } \) are obtained from the change of basis from
\[
1, X,\ldots ,{X}^{{p}^{n} - 1}
\]
to
\[
1,1 + X,\ldots ,{\left( 1 + X\right) }^{{p}^{n} - 1}\text{.}
\]
We can rewrite \( {c}_{r}^{\left( n\right) } \) in terms of the variable \( u = {\gamma }^{r} \), namely
\[
{c}^{\left( n\right) }\left( u\right) = \mathop{\sum }\limits_{\eta }{\mu }_{n + 1}\left( {{\eta u}{\;\operatorname{mod}\;{p}^{n + 1}}}\right) .
\]
These coefficients \( {c}^{\left( n\right) }\left( u\right) \) will be called the Iwasawa coefficients.
Theorem 1.3. Let \( n \) be an integer \( \geq 0 \) such that \( {c}_{r}^{\left( n\right) } \) is a p-unit for some integer \( r \) with
\[
0 \leq r \leq {p}^{n} - 1
\]
Then the exponential Iwasawa invariant \( m \) of \( \mu \) is equal to 0, and we have \( \lambda \leq {p}^{n} \)
Proof. Some coefficient \( {a}_{r}^{\left( n\right) } \) must also be a \( p \) -unit with \( r \) in the same range, and we can write
\[
f\left( X\right) = \mathop{\sum }\limits_{{r = 0}}^{{{p}^{n} - 1}}{a}_{r}^{\left( n\right) }{X}^{r} + {g}_{1}\left( X\right) {X}^{{p}^{n}} + p{g}_{2}\left( X\right) ,
\]
where \( {g}_{1}\left( X\right) ,{g}_{2}\left( X\right) \in \mathfrak{o}\left\lbrack \left\lbrack X\right\rbrack \right\rbrack \) . Hence the coefficient \( {a}_{r} \) of \( f\left( X\right) \) is itself a \( p \) -unit, whence the theorem follows.
We shall sometimes deal with certain measures derived by the following operation from \( \mu \) . Let \( s \in {\mathbf{Z}}_{p} \) . We define the \( s \) -th twist of \( \mu \) to be the measure defined on \( {Z}^{ * } \) by
\[
{\mu }^{\left( s\right) }\left( a\right) = \langle a{\rangle }^{s}\mu \left( a\right)
\]
and equal to 0 outside \( {Z}^{ * } \) . In that case, the coefficients \( {c}_{r}^{\left( n\right) } \) should be indexed by \( s \), i.e.
\[
{c}_{r, s}^{\left( n\right) } = {c}_{r}^{\left( n\right) }{\gamma }^{rs}
\]
Since \( {\gamma }^{rs} \) is a \( p \) -adic unit, it follows that the same power of \( p \) divides all \( {c}_{r, s}^{\left( n\right) } \) as divides \( {c}_{r}^{\left( n\right) } \) . Thus Theorem 1.3 also applies to the twisted measure and the power series \( {f}_{s} \) associated with \( {\alpha }_{ * }\left( {\mu }^{\left( s\right) }\right) \) instead of \( f \) in the theorem, and we find:
Theorem 1.4. Let \( {m}_{s},{\lambda }_{s} \) be the Iwasawa invariants of \( {\mu }^{\left( s\right) } \) . If \( {m}_{s} = 0 \) for some \( s \), then \( {m}_{s} = 0 \) for all \( s \) . Suppose this is the case, and let \( n \) be the positive integer such that
\[
{p}^{n - 1} \leq {\lambda }_{0} < {p}^{n}
\]
Then we also have
\[
{p}^{n - 1} \leq {\lambda }_{s} < {p}^{n}
\]
for all s.
## §2. Application to the Bernoulli Distributions
Let \( {\mathbf{B}}_{k} \) be the \( k \) -th Bernoulli polynomial (cf. Chapter 2). We had defined the distribution \( {E}_{k} \) at level \( N \) by
\[
{E}_{k}^{\left( N\right) }\left( x\right) = {N}^{k - 1}\frac{1}{k}{\mathbf{B}}_{k}\left( \left\langle \frac{x}{N}\right\rangle \right) .
\]
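With exact rational arithmetic the values \( E_k^{(N)}(x) \) are easy to compute, and the compatibility relation \( \sum_{y=0}^{M-1} E_k^{(MN)}(x+yN) = E_k^{(N)}(x) \), the property that makes \( E_k \) a distribution, becomes a finite check via the multiplication theorem for Bernoulli polynomials. A minimal sketch (helper names are mine):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(kmax):
    # B_0 = 1 and, for k >= 1, sum_{j=0}^{k} C(k+1, j) B_j = 0
    B = [Fraction(1)]
    for k in range(1, kmax + 1):
        B.append(-sum(comb(k + 1, j) * B[j] for j in range(k)) / (k + 1))
    return B

def bernoulli_poly(k, t, B):
    # B_k(t) = sum_j C(k, j) B_{k-j} t^j
    return sum(comb(k, j) * B[k - j] * t ** j for j in range(k + 1))

def E(k, N, x, B):
    # E_k^{(N)}(x) = N^(k-1) * (1/k) * B_k(<x/N>)
    return Fraction(N) ** (k - 1) * bernoulli_poly(k, Fraction(x % N, N), B) / k

B = bernoulli_numbers(4)
```

For instance \( E_2^{(3)}(1) = \tfrac{3}{2} B_2(1/3) = -1/12 \), and summing \( E_2^{(6)} \) over the two lifts \( 1, 4 \) of \( 1 \bmod 3 \) recovers this value.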
We shall now use
\[
N = d{p}^{n}
\]
where \( d \) is a positive integer prime to the prime number \( p \) .
We continue using the notation of the preceding section. An element of \( Z = \mathbf{Z}\left( d\right) \times {\mathbf{Z}}_{p} \) is described by its two components
\[
x = \left( {{x}_{0},{x}_{p}}\right)
\]
Let \( c \in \mathbf{Z}{\left( d\right) }^{ * } \times {\mathbf{Z}}_{p}^{ * } = \lim \mathbf{Z}{\left( d{p}^{n}\right) }^{ * } \) . We define
\[
{E}_{k, c}^{\left( N\right) }\left( x\right) = {E}_{k}^{\left( N\right) }\left( x\right) - {c}_{p}^{k}{E}_{k}^{\left( N\right) }\left( {{c}^{-1}x}\right)
\]
for \( x \in \mathbf{Z}\left( N\right) \) . The multiplication \( {c}^{-1}x \) is defined in \( \mathbf{Z}{\left( N\right) }^{ * } \) .
Note. In Chapter 2, we took \( c \) to be a rational number. This is not necessary, and restricts possible applications too much. When \( c \) occurs as a coefficient in Chapter 2, we must use \( {c}_{p} \) instead of \( c \), i.e. we must use its projection on \( {\mathbf{Z}}_{p}^{ * } \) . When \( c \) occurs inside a diamond bracket, then no change is to be made for the present case. For instance, we have
E 1.
\[
{E}_{1, c}^{\left( N\right) }\left( x\right) = \left\langle \frac{x}{N}\right\rangle - {c}_{p}\left\langle \frac{{c}^{-1}x}{N}\right\rangle + \frac{1}{2}\left( {{c}_{p} - 1}\right) .
\]
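Formula E 1 is immediate from the definition with \( k = 1 \), since \( E_1^{(N)}(x) = \langle x/N\rangle - \tfrac{1}{2} \). A sketch comparing the two sides for an integer \( c \) prime to \( N \) (so \( c_p \) may be replaced by \( c \) itself; the helper names are mine):

```python
from fractions import Fraction

def E1(N, x):
    # E_1^{(N)}(x) = <x/N> - 1/2, since B_1(t) = t - 1/2
    return Fraction(x % N, N) - Fraction(1, 2)

def E1c_def(N, x, c):
    # the definition: E_{1,c}^{(N)}(x) = E_1^{(N)}(x) - c E_1^{(N)}(c^{-1} x)
    cinv = pow(c, -1, N)
    return E1(N, x) - c * E1(N, cinv * x % N)

def E1c_formula(N, x, c):
    # formula E 1, with c_p replaced by the integer c itself
    cinv = pow(c, -1, N)
    return (Fraction(x % N, N) - c * Fraction(cinv * x % N, N)
            + Fraction(c - 1, 2))
```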
Similarly, formula E 2 and Theorem 2.2 of Chapter 2 yield the relation
E 2.
\[
{E}_{k, c}\left( x\right) = {x}_{p}^{k - 1}{E}_{1, c}\left( x\right)
\]
symbolically for \( x \in Z \) . We then obtain the integral representations of the Bernoulli numbers as follows.
\[
\frac{1}{k}{B}_{k} = \frac{1}{1 - {c}_{p}^{k}}{\int }_{Z}{x}_{p}^{k - 1}d{E}_{1, c}\left( x\right)
\]
provided only that \( {c}_{p}^{k} \neq 1 \) . Furthermore, if \( \chi \) is a character of conductor \( m = {m}_{\chi } \) dividing \( d{p}^{n} \) for some \( n \), then \( \chi \) defines in the usual way a function on \( \mathbf{Z}\left( N\right) \) for \( m \mid N \) by composition
\[
\mathbf{Z}\left( N\right) \rightarrow \mathbf{Z}\left( m\right) \overset{\chi }{ \rightarrow }{\mathfrak{o}}^{ * }
\]
and \( \chi \) is defined to be 0 on elements of \( \mathbf{Z}\left( m\right) \) not prime to \( m \) . Then we define
\[
\frac{1}{k}{B}_{k,\chi } = {\int }_{Z}{\chi d}{E}_{k}
\]
Note. This definition, made by taking into account the conductor of \( \chi \), is more appropriate than that of Chapter 2, §2. There we dealt only with characters of \( {\mathbf{Z}}_{p}^{ * } \), so it made little difference, except for the trivial character.
More generally, if \( \varphi \) is a locally constant function (step function) on \( Z \) , then we can define
\[
\frac{1}{k}{B}_{k,\varphi } = {\int }_{Z}{\varphi d}{E}_{k}
\]
Then
(1)
\[
{\int }_{Z}\varphi \left( {{x}_{0},{x}_{p}}\right) {x}_{p}^{k - 1}d{E}_{1, c}\left( x\right) = \frac{1}{k}{B}_{k,\varphi } - {c}_{p}^{k}\frac{1}{k}{B}_{k,\varphi \circ c}.
\]
In particular, if \( \varphi \) is a character \( \chi \), then
\[
{\int }_{Z}\chi \left( x\right) {x}_{p}^{k - 1}d{E}_{1, c}\left( x\right) = \left( {1 - \chi \left( c\right) {c}_{p}^{k}}\right) \frac{1}{k}{B}_{k,\chi }.
\]
We define the p-adic L-function by the integral
\[
{L}_{p}\left( {1 - s,\chi }\right) = \frac{-1}{1 - \chi \left( c\right) \langle c{\rangle }_{p}^{s}}{\int }_{{Z}^{ * }}\chi \left( a\right) \langle a{\rangle }_{p}^{s}{a}_{p}^{-1}d{E}_{1, c}\left( a\right) .
\]
If the conductor of \( \chi \) is \( d{p}^{n} \) for some \( n \geq 0 \), then the support of the integral is really on the set
\[
{Z}^{* * } = \mathbf{Z}{\left( d\right) }^{ * } \times {\mathbf{Z}}_{p}^{ * }
\]
Let \( \omega = {\omega }_{p} \) be the Teichmüller character, and put
\[
{\chi }_{k} = \chi {\omega }^{-k}
\]
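The Teichmüller character can be computed as the \( p \)-adic limit \( \omega(a) = \lim_{n} a^{p^n} \); modulo \( p^k \) the sequence is stationary from \( n = k-1 \) on, since \( a^{p^{n+1}} \equiv a^{p^n} \pmod{p^{n+1}} \). A minimal sketch (the function name is mine):

```python
def teichmuller(a, p, k):
    """omega(a) mod p^k: the (p-1)-st root of unity congruent to a mod p."""
    return pow(a, p ** (k - 1), p ** k)

w = teichmuller(2, 5, 3)   # 2^25 mod 125 = 57; 57 = 2 mod 5, 57^4 = 1 mod 125
```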
Theorem 2.1. For every integer \( k \geq 1 \) and character \( \chi \) of conductor \( d{p}^{n} \)
with \( n \geq 0 \), we have
\[
{L}_{p}\left( {1 - k,\chi }\right) = - \left( {1 - {\chi }_{k}\left( p\right) {p}^{k - 1}}\right) \frac{1}{k}{B}_{k,{\chi }_{k}}.
\]
Proof. We have:
\[
- \left( {1 - {\chi }_{k}\left( c\right) {c}_{p}^{k}}\right) {L}_{p}\left( {1 - k,\chi }\right) = {\int }_{{Z}^{ * }}{\chi }_{k}\left( a\right) {a}_{p}^{k - 1}d{E}_{1, c}\left( a\right) .
\]
Write
\[
{\int }_{{Z}^{ * }} = {\int }_{Z} - {\int }_{pZ}
\]
Let \( N = d{p}^{n + 1} \) . Then
\[
{\int }_{pZ} = \mathop{\lim }\limits_{{n \rightarrow \infty }}\mathop{\sum }\limits_{{y = 0}}^{{\left( {N/p}\right) - 1}}{\chi }_{k}\left( p\right) {p}^{k - 1}{\chi }_{k}\left( y\right) {y}^{k - 1}{E}_{1, c}\left( \left\langle \frac{py}{N}\right\rangle \right)
\]
\[
= {\chi }_{k}\left( p\right) {p}^{k - 1}\mathop{\lim }\limits_{{n \rightarrow \infty }}\mathop{\sum }\limits_{{y = 0}}^{{\left( {N/p}\right) - 1}}{\chi }_{k}\left( y\right) {y}^{k - 1}{E}_{1, c}\left( \left\langle \frac{y}{N/p}\right\rangle \right)
\]
\[
= {\chi }_{k}\left( p\right) {p}^{k - 1}\left( {1 - {\chi }_{k}\left( c\right) {c}_{p}^{k}}\right) \frac{1}{k}{B}_{k,{\chi }_{k}}.
\]
The theorem follows at once.
We now let
\( \theta = \) even character on \( \mathbf{Z}{\left( dp\right) }^{ * },\theta \neq 1 \), cond \( \theta = d \) or \( {dp} \) .
\( \chi = {\theta \psi } \) where \( \psi \) is a character on \( 1 + p{\mathbf{Z}}_{p} \) .
Then
\[
\left( {1 - {\chi }_{k}\left( p\right) {p}^{k - 1}}\right) \frac{1}{k}{B}_{k,{\chi }_{k}} = \frac{1}{1 - \chi \left( c\right) \langle c{\rangle }_{p}^{k}}{\int }_{{Z}^{* * }}\psi \left( a\right) \theta {\omega }^{-k}\left( a\right) {a}_{p}^{k - 1}d{E}_{1, c}\left( a\right)
\]
Proposition 5.42. Suppose \( X \) is a Hausdorff locally convex space, \( T : X \rightarrow X \) is a continuous linear transformation, and \( U \) is a barrel neighborhood of 0 subject to:
(α) \( T\left( U\right) \) does not contain a nontrivial subspace of \( X \), and
(β) \( T\left( U\right) \) is covered by \( N \) translates of \( \frac{1}{2}\operatorname{int}\left( U\right) \) ; that is, there exist \( {w}_{1},\ldots ,{w}_{N} \in X \) for which
\[
T\left( U\right) \subset \mathop{\bigcup }\limits_{{j = 1}}^{N}\left\lbrack {{w}_{j} + \frac{1}{2}\operatorname{int}\left( U\right) }\right\rbrack .
\]
Consider the chain of subspaces \( {K}_{j} = \ker {\left( I - T\right) }^{j} \) :
\[
\{ 0\} \subset {K}_{1} \subset {K}_{2} \subset \cdots
\]
Then:
(a) The chain stabilizes beyond \( j = N : {K}_{N} = {K}_{N + 1} = \cdots \) ;
(b) Every \( {K}_{j} \) has dimension \( \leq N \) ; and
(c) \( \dim {K}_{1} = \dim \ker \left( {I - T}\right) \leq \dim \left( {X/\left( {I - T}\right) \left( X\right) }\right) \) .
In the proof, the following lemma will be used several times:
Lemma 5.43. Assume \( X, T \), and \( U \) are as in Proposition 5.42. Suppose
\[
0 = {M}_{0} \subsetneqq {M}_{1} \subsetneqq {M}_{2} \subsetneqq \cdots \subsetneqq {M}_{n}
\]
is a chain of finite dimensional subspaces of \( X \) for which \( \left( {I - T}\right) {M}_{k} \subset {M}_{k - 1} \) when \( k \geq 1 \) . Then \( n \leq N \) .
Preliminary Observation: Let \( {p}_{U} \) denote the Minkowski functional associated with \( U \) . By Theorem 3.7, \( {p}_{U} \) is a continuous seminorm since \( U \) is a neighborhood of 0 and is convex and balanced. Letting \( {I}_{x} \) be as in the definition of \( {p}_{U} \), then
\[
{I}_{x} = \{ t > 0 : x \in {tU}\} = \left\{ {t > 0 : {t}^{-1}x \in U}\right\} ,
\]
a relatively closed subset of the interval \( \left( {0,\infty }\right) \) since \( U \) is closed. Thus (unless \( {p}_{U}\left( x\right) = 0 \) ), \( {I}_{x} = \left\lbrack {{p}_{U}\left( x\right) ,\infty }\right) \), so \( x \in U \Leftrightarrow 1 \in \left\lbrack {{p}_{U}\left( x\right) ,\infty }\right) \Leftrightarrow {p}_{U}\left( x\right) \leq 1 \) . That is:
\[
U = \left\{ {x \in X : {p}_{U}\left( x\right) \leq 1}\right\}
\]
(*)
(This has been noted before, using sequences.)
Also, int \( \left( U\right) = \lbrack 0,1)U \) (Theorem 2.15), so if \( x \in \operatorname{int}\left( U\right) \), then \( x = {ty} \) for some \( y \in U \) and \( t \in \lbrack 0,1) \), giving \( {p}_{U}\left( x\right) = {p}_{U}\left( {ty}\right) = t{p}_{U}\left( y\right) \leq t \cdot 1 < 1 \) . On the other hand, if \( {p}_{U}\left( x\right) < 1 \), choose \( t \) for which \( {p}_{U}\left( x\right) < t < 1 \) . Then \( {p}_{U}\left( {{t}^{-1}x}\right) = \) \( {t}^{-1}{p}_{U}\left( x\right) < {t}^{-1}t = 1 \), so \( {t}^{-1}x \in U \) and \( x = t \cdot {t}^{-1}x \in \lbrack 0,1)U = \operatorname{int}\left( U\right) \) . Hence
\[
\operatorname{int}\left( U\right) = \left\{ {x \in X : {p}_{U}\left( x\right) < 1}\right\}
\]
\( \left( {* * }\right) \)
Proof of Lemma 5.43. The first thing to note is that \( {p}_{U} \) is a norm on each \( {M}_{k} \) . This is by induction on \( k \), and is trivial when \( k = 0 \) . As for \( k \rightarrow k + 1 \), suppose \( x \in {M}_{k + 1} \) and \( {p}_{U}\left( x\right) = 0 \) . Letting \( \mathbb{F} \) denote the scalar field \( \left( {\mathbb{R}\text{ or }\mathbb{C}}\right) \), if \( c \in \mathbb{F} \), then \( {p}_{U}\left( {cx}\right) = 0 \) as well, so \( {cx} \in U \) for all \( c \in \mathbb{F} \) . Thus \( {cT}\left( x\right) = T\left( {cx}\right) \in T\left( U\right) \) for all \( c \in \mathbb{F} \), that is, \( \mathbb{F} \cdot T\left( x\right) \) is a subspace of \( X \) contained in \( T\left( U\right) \) . By assumption, this must be trivial, so \( T\left( x\right) = 0 \) . Hence \( x = x - T\left( x\right) = \left( {I - T}\right) \left( x\right) \in {M}_{k} \), so \( x = 0 \) by the induction hypothesis ( \( {p}_{U} \) is a norm on \( {M}_{k} \) ).
Next, there is only one way to make a finite-dimensional space into a Hausdorff locally convex space (Proposition 2.9), and on each \( {M}_{k} \) the norm topology from \( {p}_{U} \) does that, so \( {p}_{U} \) gives the induced topology on each \( {M}_{k} \) . Also, \( U \cap {M}_{k} \) is not contained in \( {M}_{k - 1} \) (since \( U \cap {M}_{k} \) is absorbent in \( {M}_{k} \) ), and \( {2U} \cap {M}_{k} \) is compact. By \( \left( *\right) \), if \( x \in U \cap {M}_{k} \) and \( y \notin {2U} \), then \( {p}_{U}\left( x\right) \leq 1 \) while \( {p}_{U}\left( y\right) \geq 2 \) . Since \( {p}_{U}\left( y\right) = {p}_{U}\left( {x + \left( {y - x}\right) }\right) \leq {p}_{U}\left( x\right) + {p}_{U}\left( {y - x}\right) \), it follows that \( {p}_{U}\left( {y - x}\right) \geq 1 \) .
For \( k = 1,\ldots, n \), choose any \( {y}_{k} \in U \cap {M}_{k} - {M}_{k - 1} \) . As a function on \( {M}_{k - 1} \) ,
\[
{f}_{k}\left( z\right) = {p}_{U}\left( {{y}_{k} - z}\right)
\]
has a value \( \leq 1 \) at \( z = 0 \in U \cap {M}_{k - 1} \), while it has values \( \geq 1 \) for \( z \in {M}_{k - 1} - {2U} \) , so the minimum of \( {f}_{k} \) on the compact set \( {2U}\bigcap {M}_{k - 1} \) is a minimum on \( {M}_{k - 1} \) . Let \( {z}_{k} \) be a point where this minimum is achieved, with \( {t}_{k} = {p}_{U}\left( {{y}_{k} - {z}_{k}}\right) \) . Set \( {x}_{k} = {t}_{k}^{-1}\left( {{y}_{k} - {z}_{k}}\right) \) . Observe the following:
(i) \( {p}_{U}\left( {x}_{k}\right) = {p}_{U}\left( {{t}_{k}^{-1}\left( {{y}_{k} - {z}_{k}}\right) }\right) = {t}_{k}^{-1}{p}_{U}\left( {{y}_{k} - {z}_{k}}\right) = 1 \), so \( {x}_{k} \in U \) by \( \left( *\right) \) .
(ii) If \( z \in {M}_{k - 1} \), then
\[
{p}_{U}\left( {{x}_{k} - z}\right) = {p}_{U}\left( {{t}_{k}^{-1}\left( {{y}_{k} - {z}_{k}}\right) - z}\right)
\]
\[
= {t}_{k}^{-1}{p}_{U}\left( {{y}_{k} - \left( {{z}_{k} + {t}_{k}z}\right) }\right) \geq {t}_{k}^{-1}{t}_{k} = 1\text{.}
\]
since \( {z}_{k} + {t}_{k}z \in {M}_{k - 1} \) . Hence
(iii) (Trick Alert!) If \( k > j \), then \( {x}_{j} - T\left( {x}_{j}\right) \in {M}_{j - 1} \subset {M}_{k - 1} \) and \( {x}_{j} \in {M}_{j} \subset \) \( {M}_{k - 1} \), so
\[
T\left( {x}_{k}\right) - T\left( {x}_{j}\right) = {x}_{k} - \underset{\text{in }{M}_{k - 1}}{\underbrace{\left( {{x}_{k} - T\left( {x}_{k}\right) }\right) + {x}_{j} - \left( {{x}_{j} - T\left( {x}_{j}\right) }\right) }}\text{, and }
\]
\[
{p}_{U}\left( {T\left( {x}_{k}\right) - T\left( {x}_{j}\right) }\right) = {p}_{U}\left( {{x}_{k} - \left( {\left( {{x}_{k} - T\left( {x}_{k}\right) }\right) + {x}_{j} - \left( {{x}_{j} - T\left( {x}_{j}\right) }\right) }\right) }\right) \geq 1.
\]
Now \( T\left( {x}_{1}\right) ,\ldots, T\left( {x}_{n}\right) \in T\left( U\right) \), which is covered by the sets \( {w}_{l} + \frac{1}{2}\operatorname{int}\left( U\right) \) . If both \( T\left( {x}_{j}\right) \) and \( T\left( {x}_{k}\right) \) belong to \( {w}_{l} + \frac{1}{2}\operatorname{int}\left( U\right) \), then \( T\left( {x}_{k}\right) - {w}_{l} \in \frac{1}{2}\operatorname{int}\left( U\right) \), so \( {p}_{U}\left( {T\left( {x}_{k}\right) - {w}_{l}}\right) < \frac{1}{2} \) . Similarly, \( {p}_{U}\left( {T\left( {x}_{j}\right) - {w}_{l}}\right) < \frac{1}{2} \), so
\[
{p}_{U}\left( {T\left( {x}_{k}\right) - T\left( {x}_{j}\right) }\right) \leq {p}_{U}\left( {T\left( {x}_{k}\right) - {w}_{l} + {w}_{l} - T\left( {x}_{j}\right) }\right)
\]
\[
\leq {p}_{U}\left( {T\left( {x}_{k}\right) - {w}_{l}}\right) + {p}_{U}\left( {{w}_{l} - T\left( {x}_{j}\right) }\right) < 1.
\]
Since this cannot happen, the points \( T\left( {x}_{1}\right) ,\ldots, T\left( {x}_{n}\right) \) must belong to distinct sets \( {w}_{1} + \frac{1}{2}\operatorname{int}\left( U\right) ,\ldots ,{w}_{N} + \frac{1}{2}\operatorname{int}\left( U\right) \) . By the pigeonhole principle, \( n \leq N \) .
Proof of Proposition 5.42: This is done using a series of steps.
Step 1: \( \dim {K}_{1} \leq N \) . Suppose \( {v}_{1},\ldots ,{v}_{n} \) is a finite, linearly independent subset of \( {K}_{1} \) . Set \( {M}_{k} = \operatorname{span}\left\{ {{v}_{1},\ldots ,{v}_{k}}\right\} \) . Since \( \left( {I - T}\right) {M}_{k} = \{ 0\} \), these spaces satisfy the hypotheses of Lemma 5.43, so \( n \leq N \) . Since \( N \) bounds the size of every finite linearly independent subset of \( {K}_{1} \), and \( {K}_{1} \) does have a basis (which, if infinite, would have arbitrarily large finite subsets), \( {K}_{1} \) must be finite dimensional, with dimension \( \leq N \) .
Step 2: \( \dim \left( {{K}_{j + 1}/{K}_{j}}\right) \leq \dim \left( {{K}_{j}/{K}_{j - 1}}\right) \) . Consider the composite map:
\[
{K}_{j + 1}\overset{\left( I - T\right) }{ \rightarrow }{K}_{j}\overset{\pi }{ \rightarrow }{K}_{j}/{K}_{j - 1}
\]
The kernel is
\[
\left\{ {x \in {K}_{j + 1} : \left( {I - T}\right) \left( x\right) \in {K}_{j - 1}}\right\} = \left\{ {x \in {K}_{j + 1} : {\left( I - T\right) }^{j - 1}\left( {I - T}\right) \left( x\right) = 0}\right\} = {K}_{j},
\]
so \( \dim \left( {{K}_{j + 1}/{K}_{j}}\right) \) equals the dimension of the image of the composite, which (as a subspace) has dimension \( \leq \dim \left( {{K}_{j}/{K}_{j - 1}}\right) \) .
Step 3: Every \( {K}_{j} \) is finite-dimensional. Induction on \( j \) . Step 1 gives the \( j = 1 \) case, while Step 2 provides the induction step.
Proof for part (a): Set \( {M}_{j} = {K}_{j} \), now known to be finite-dimensional. Once \( {K}_{j} = {K}_{j - 1} \), you get \( {K}_{j + 1} = {K}_{j} \) by Step 2, so it stabilizes beyond some \( n \), with \( {K}_{n - 1} \neq {K}_{n} = {K}_{n + 1}\cdots \) (unless all \( {K}_{j} = \{ 0\} \), in which case Proposition 5.42 is trivial). By Lemma 5.43, \( n \leq N \) .
Now set \( K = {K}_{N} = {K}_{N + 1} = \cdots \) .
Step 4: \( \dim \left( K\right) \leq N \) (proving part (b)). Start with a basis of \( {K}_{1} : {v}_{1},\ldots ,{v}_{l} \) .
Example 6.4.4. Example 6.4.3(3) allows us to define a family of orientations on complex projective spaces.
We refer to the orientations obtained in this way as the standard orientations, and the ones with the opposite sign for \( \left\lbrack {\mathbb{C}{P}^{n}}\right\rbrack \) as the nonstandard orientations.
We begin with \( n = 1 \) . We have the standard generator \( {\sigma }_{1} \in {H}_{1}\left( {S}^{1}\right) \) of Remark 4.1.10.
To define an orientation of \( \mathbb{C}{P}^{1} \) it suffices to give a local orientation \( {\bar{\varphi }}_{{z}_{0}} \) at a single point \( {z}_{0} \), and we choose \( {z}_{0} \) to be the point with homogeneous coordinates \( \left\lbrack {0,1}\right\rbrack \) . We specify \( {\bar{\varphi }}_{{z}_{0}} \) by letting \( {\bar{\varphi }}_{{z}_{0}}\left( 1\right) \) be the image of \( {\sigma }_{1} \) under the sequence of isomorphisms
\[
{H}_{1}\left( {S}^{1}\right) \rightarrow {H}_{1}\left( {\mathbb{C}-\{ 0\} }\right) \rightarrow {H}_{2}\left( {\mathbb{C},\mathbb{C}-\{ 0\} }\right) \rightarrow {H}_{2}\left( {\mathbb{C}{P}^{1},\mathbb{C}{P}^{1}-\{ \left\lbrack {0,1}\right\rbrack \} }\right) .
\]
Here the first isomorphism is induced by inclusion, the second is the inverse of the boundary map in the exact sequence of the pair \( \left( {\mathbb{C},\mathbb{C}-\{ 0\} }\right) \), and the third is induced by the map \( z \mapsto \left\lbrack {z,1}\right\rbrack \) .
Given this orientation we have a fundamental class \( \left\lbrack {\mathbb{C}{P}^{1}}\right\rbrack \in {H}_{2}\left( {\mathbb{C}{P}^{1}}\right) \), and we let \( \alpha = \left\{ {\mathbb{C}{P}^{1}}\right\} \) be the fundamental cohomology class. Then for \( n > 1 \), we choose the orientation which has \( \left\{ {\mathbb{C}{P}^{n}}\right\} = {\alpha }^{n} \) as fundamental cohomology class, i.e., the orientation with fundamental class \( \left\lbrack {\mathbb{C}{P}^{n}}\right\rbrack \) specified by \( e\left( {{\alpha }^{n},\left\lbrack {\mathbb{C}{P}^{n}}\right\rbrack }\right) = 1 \) .
We then write \( \mathbb{C}{P}^{n} \) for \( \mathbb{C}{P}^{n} \) the oriented manifold with the standard orientation and \( \overline{\mathbb{C}{P}^{n}} \) for \( \mathbb{C}{P}^{n} \) the oriented manifold with the nonstandard orientation.
(The standard orientations may be obtained directly by specifying local orientations in a completely analogous manner, beginning with the standard generator \( {\sigma }_{{2n} - 1} \in {H}_{{2n} - 1}\left( {S}^{{2n} - 1}\right) \), but for our purposes the homological description is much more to the point.)
Theorem 6.4.5. Let \( M \) be a compact \( n \) -dimensional manifold with \( n \) odd. Then the Euler characteristic \( \chi \left( M\right) = 0 \) .
Proof. We may use any coefficients to compute the Euler characteristic, so we choose \( \mathbb{Z}/2\mathbb{Z} \) . This means that \( M \) is orientable with these coefficients. Also, they form a field, so for any \( j,{H}_{j}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) \) and \( {H}^{j}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) \) are dual vector spaces and hence have the same dimension. Let \( n = {2m} + 1 \) . We compute
\[
\chi \left( M\right) = \mathop{\sum }\limits_{{j = 0}}^{n}{\left( -1\right) }^{j}\dim {H}_{j}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right)
\]
\[
= \mathop{\sum }\limits_{{j = 0}}^{m}{\left( -1\right) }^{j}\dim {H}_{j}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) + \mathop{\sum }\limits_{{k = m + 1}}^{{{2m} + 1}}{\left( -1\right) }^{k}\dim {H}_{k}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) .
\]
But by Poincaré duality
\[
\mathop{\sum }\limits_{{k = m + 1}}^{{{2m} + 1}}{\left( -1\right) }^{k}\dim {H}_{k}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) = \mathop{\sum }\limits_{{k = m + 1}}^{{{2m} + 1}}{\left( -1\right) }^{k}\dim {H}^{{2m} + 1 - k}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right)
\]
\[
= \mathop{\sum }\limits_{{k = m + 1}}^{{{2m} + 1}}{\left( -1\right) }^{k}\dim {H}_{{2m} + 1 - k}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right)
\]
\[
= \mathop{\sum }\limits_{{j = 0}}^{m}{\left( -1\right) }^{{2m} + 1 - j}\dim {H}_{j}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right)
\]
where \( j = {2m} + 1 - k \) (and so \( k = {2m} + 1 - j \) ).
Thus
\[
\chi \left( M\right) = \mathop{\sum }\limits_{{j = 0}}^{m}\left( {{\left( -1\right) }^{j} + {\left( -1\right) }^{{2m} + 1 - j}}\right) \dim {H}_{j}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) .
\]
But \( {\left( -1\right) }^{j} \) and \( {\left( -1\right) }^{{2m} + 1 - j} \) always have opposite signs, so this sum is identically zero.
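The cancellation can be seen on a concrete example (a standard computation, offered here only as an illustration): the \( \mathbb{Z}/2\mathbb{Z} \)-Betti numbers of \( \mathbb{R}{P}^{3} \) are \( 1,1,1,1 \), symmetric under \( j \mapsto n - j \) as Poincaré duality requires, and the alternating sum vanishes.

```python
def euler_char(betti):
    # alternating sum of Betti numbers: chi = sum_j (-1)^j dim H_j
    return sum((-1) ** j * b for j, b in enumerate(betti))

rp3 = [1, 1, 1, 1]        # Z/2-Betti numbers of RP^3 (n = 3, odd)
assert rp3 == rp3[::-1]   # the Poincare duality symmetry b_j = b_{n-j}
print(euler_char(rp3))    # 0
```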
Now we have an interesting application of Lefschetz duality.
Theorem 6.4.6. Let \( M \) be a compact \( n \) -manifold with odd Euler characteristic. Then \( M \) is not the boundary of a compact \( \left( {n + 1}\right) \) -manifold.
Proof. Suppose that \( M \) is the boundary of the compact \( \left( {n + 1}\right) \) -manifold \( X \) . Consider the exact sequence of the pair \( \left( {X, M}\right) \) :
\[
0 \rightarrow {H}_{n + 1}\left( {X;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow {H}_{n + 1}\left( {X, M;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow {H}_{n}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow
\]
\[
\cdots \rightarrow {H}_{0}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow {H}_{0}\left( {X;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow {H}_{0}\left( {X, M;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow 0.
\]
Then the alternating sum of the dimensions of the homology groups in this sequence is zero, and hence the sum of the dimensions is even. Thus
\[
\mathop{\sum }\limits_{{k = 0}}^{n}\dim {H}_{k}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) + \mathop{\sum }\limits_{{k = 0}}^{{n + 1}}\dim {H}_{k}\left( {X;\mathbb{Z}/2\mathbb{Z}}\right)
\]
\[
+ \mathop{\sum }\limits_{{k = 0}}^{{n + 1}}\dim {H}_{k}\left( {X, M;\mathbb{Z}/2\mathbb{Z}}\right) \equiv 0\left( {\;\operatorname{mod}\;2}\right) .
\]
But, on the one hand,
\[
\chi \left( M\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{\left( -1\right) }^{k}\dim {H}_{k}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) \equiv \mathop{\sum }\limits_{{k = 0}}^{n}\dim {H}_{k}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) \left( {\;\operatorname{mod}\;2}\right) ,
\]
and on the other hand, by Lefschetz duality,
\[
\mathop{\sum }\limits_{{k = 0}}^{{n + 1}}\dim {H}_{k}\left( {X;\mathbb{Z}/2\mathbb{Z}}\right) = \mathop{\sum }\limits_{{k = 0}}^{{n + 1}}\dim {H}_{k}\left( {X, M;\mathbb{Z}/2\mathbb{Z}}\right)
\]
since for every value of \( k \) ,
\[
\dim {H}_{k}\left( {X;\mathbb{Z}/2\mathbb{Z}}\right) = \dim {H}^{n + 1 - k}\left( {X, M;\mathbb{Z}/2\mathbb{Z}}\right) = \dim {H}_{n + 1 - k}\left( {X, M;\mathbb{Z}/2\mathbb{Z}}\right) .
\]
Thus \( \chi \left( M\right) \) is even, a contradiction.
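As an application (a classical example, not taken from the text): \( \mathbb{C}{P}^{2} \) has \( \mathbb{Z}/2\mathbb{Z} \)-Betti numbers \( 1,0,1,0,1 \), so \( \chi \left( {\mathbb{C}{P}^{2}}\right) = 3 \) is odd, and by the theorem \( \mathbb{C}{P}^{2} \) is not the boundary of any compact 5-manifold.

```python
def euler_char(betti):
    # alternating sum of Betti numbers
    return sum((-1) ** j * b for j, b in enumerate(betti))

cp2 = [1, 0, 1, 0, 1]   # Z/2-Betti numbers of CP^2
chi = euler_char(cp2)
print(chi, chi % 2)     # 3 1 -> odd Euler characteristic
```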
Now we turn our attention to orientable manifolds. We will be considering an important invariant, the intersection form. In order to most conveniently do that we introduce some (nonstandard) notation.
Definition 6.4.7. Let \( M \) be a compact connected manifold of dimension \( {2n} \) . Then
\[
{K}^{n}\left( {M;\mathbb{Z}}\right) = {H}^{n}\left( {M;\mathbb{Z}}\right) /{H}^{n}{\left( M;\mathbb{Z}\right) }_{\text{tor }},
\]
i.e., the quotient of \( {H}^{n}\left( {M;\mathbb{Z}}\right) \) by its torsion subgroup, and
\[
{K}^{n}\left( {M;\mathbb{F}}\right) = {H}^{n}\left( {M;\mathbb{F}}\right)
\]
if \( \mathbb{F} \) is a field.
Theorem 6.4.8. Let \( M \) be a G-oriented compact connected manifold of even dimension \( {2n} \), with fundamental class \( \left\lbrack M\right\rbrack \in {H}_{2n}\left( {M;G}\right) \), where \( G = \mathbb{Z} \) or a field \( \mathbb{F} \) . Then
\[
\langle ,\rangle : {K}^{n}\left( {M;G}\right) \otimes {K}^{n}\left( {M;G}\right) \rightarrow G
\]
given by
\[
\langle u, v\rangle = e\left( {u \cup v,\left\lbrack M\right\rbrack }\right)
\]
is a nonsingular bilinear form. It is symmetric if \( n \) is even, i.e., if \( \dim \left( M\right) \equiv \) 0 \( \left( {\;\operatorname{mod}\;4}\right) \), and is skew-symmetric if \( n \) is odd, i.e., if \( \dim \left( M\right) \equiv 2\left( {\;\operatorname{mod}\;4}\right) \) .
Proof. First observe that this form is symmetric for \( n \) even and skew-symmetric for \( n \) odd as we have \( u \cup v = {\left( -1\right) }^{{n}^{2}}\left( {v \cup u}\right) \) .
Suppose that \( G = \mathbb{F} \) is a field. By Poincaré duality,
\[
\cap \left\lbrack M\right\rbrack : {H}^{n}\left( {M;\mathbb{F}}\right) \rightarrow {H}_{n}\left( {M;\mathbb{F}}\right)
\]
is an isomorphism.
By the universal coefficient theorem, Theorem 5.5.19,
\[
e : {H}_{n}\left( {M;\mathbb{F}}\right) \rightarrow \operatorname{Hom}\left( {{H}^{n}\left( {M;\mathbb{F}}\right) ,\mathbb{F}}\right) .
\]
is an isomorphism. Hence the composition
\[
e\left( {\cap \left\lbrack M\right\rbrack }\right) : {H}^{n}\left( {M;\mathbb{F}}\right) \rightarrow \operatorname{Hom}\left( {{H}^{n}\left( {M;\mathbb{F}}\right) ,\mathbb{F}}\right) .
\]
is an isomorphism. But this composition is given by, using Theorem 5.6.13(7),
\[
e\left( {\cap \left\lbrack M\right\rbrack }\right) \left( v\right) \left( u\right) = e\left( {u, v \cap \left\lbrack M\right\rbrack }\right) = e\left( {u \cup v,\left\lbrack M\right\rbrack }\right) .
\]
In the language of Definition B.1.3, this shows that the map \( \beta \) for this form is an isomorphism, and then by Remark B.1.6 we have that the form is nonsingular.
Now consider the case \( G = \mathbb{Z} \) . First note that this bilinear form is well-defined on \( {K}^{n}\left( {M;\mathbb{Z}}\right) \otimes {K}^{n}\left( {M;\mathbb{Z}}\right) \), as if \( u \) is a torsion class, with \( {ru} = 0 \), say, and \( v \) is any class, then \( 0 = 0 \cup v = \left( {ru}\right) \cup v = r\left( {u \cup v}\right) \) so \( u \cup v = 0 \) as \( {H}^{2n}\left( {M;\mathbb{Z}}\right) \) is a free abelian group, and similarly for \( v \cup u \) .
Write \( {H}^{n}\left( {M;\mathbb{Z}}\right) \) as \( F \oplus T \) where \( F \) is a free abelian group and \( T \) is the torsion subgroup. ( \( F \) is in general not unique, but simp
Exercise 8.8. In the setting of Sect. 8.1, the integer program (8.1) can be written equivalently as
\[
{z}_{I} = \max \;{cx}
\]
\[
x - y = 0
\]
\[
{A}_{1}x \leq {b}^{1}
\]
\[
{A}_{2}y \leq {b}^{2}
\]
\[
{x}_{j},{y}_{j} \in \mathbb{Z}\text{ for }j = 1,\ldots, p
\]
\[
x, y \geq 0
\]
Let \( \bar{z} \) be the optimal solution of the Lagrangian dual obtained by dualizing the constraints \( x - y = 0 \) . Prove that
\[
\bar{z} = \max \left\{ {{cx} : x \in \operatorname{conv}\left( {Q}_{1}\right) \cap \operatorname{conv}\left( {Q}_{2}\right) }\right\}
\]
where \( {Q}_{i} \mathrel{\text{:=}} \left\{ {x \in {\mathbb{Z}}_{ + }^{p} \times {\mathbb{R}}_{ + }^{n - p} : {A}_{i}x \leq {b}^{i}}\right\}, i = 1,2 \), assuming that \( \operatorname{conv}\left( {Q}_{1}\right) \cap \operatorname{conv}\left( {Q}_{2}\right) \) is nonempty.
Exercise 8.9. Show that, for every convex function \( g : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) and every \( {\lambda }^{ * } \in {\mathbb{R}}^{n} \), there exists a subgradient of \( g \) at \( {\lambda }^{ * } \) .
Exercise 8.10. Construct an example of a convex function \( g : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) such that some subgradients at a point \( {\lambda }^{ * } \in {\mathbb{R}}^{n} \) are directions of ascent, whereas other subgradients are directions of descent. (A direction of ascent (resp. descent) at \( {\lambda }^{ * } \) is a vector \( s \in {\mathbb{R}}^{n} \) for which there exists \( \epsilon > 0 \) such that \( g\left( {{\lambda }^{ * } + {ts}}\right) > g\left( {\lambda }^{ * }\right) \) (resp. \( \left. {g\left( {{\lambda }^{ * } + {ts}}\right) < g\left( {\lambda }^{ * }\right) }\right) \) for all \( 0 < t < \epsilon \) .)
Exercise 8.11. Show that, if \( \left( {\alpha }_{t}\right) \) is a nonnegative sequence such that \( \mathop{\sum }\limits_{{t = 1}}^{{+\infty }}{\alpha }_{t} \) is finite, then the subgradient algorithm converges to some point.
Construct an example of a convex function, a sequence \( \left( {\alpha }_{t}\right) \) as above, and a starting point for which the subgradient algorithm converges to a point that is not optimal.
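A minimal sketch of the second part (our own construction, with illustrative names): apply the subgradient method to \( g\left( \lambda \right) = \left| \lambda \right| \) with the summable steps \( {\alpha }_{t} = 1/{t}^{2} \) . The iterates move a total distance at most \( \sum 1/{t}^{2} = {\pi }^{2}/6 < 2 \), so starting at \( {\lambda }^{0} = {10} \) they converge to a point far from the minimizer \( 0 \) .

```python
def subgradient_method(x0, subgrad, steps):
    # plain subgradient iteration: x_{t+1} = x_t - alpha_t * s_t
    x = x0
    for alpha in steps:
        x = x - alpha * subgrad(x)
    return x

subgrad_abs = lambda x: 1.0 if x >= 0 else -1.0   # a subgradient of |x|

x_final = subgradient_method(10.0, subgrad_abs,
                             [1 / t**2 for t in range(1, 10001)])
print(round(x_final, 3))  # about 10 - pi^2/6, far from the optimum 0
```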
Exercise 8.12. Suppose we apply the subgradient method to solve the Lagrangian dual \( \mathop{\min }\limits_{{\lambda \in {\mathbb{R}}^{m}}}{z}_{LR}\left( \lambda \right) \), where \( {z}_{LR}\left( \lambda \right) \) is the Lagrangian relaxation (8.6) for the uncapacitated facility location problem.
1. Specialize each of the steps \( 1 - 3 \) of the subgradient algorithm to this case.
2. In each iteration \( t \), let \( \left( {x\left( {\lambda }^{t}\right), y\left( {\lambda }^{t}\right) }\right) \) be the optimal solution for (8.6) given in Proposition 8.7. Describe the best solution of (8.5) when each \( {x}_{j} \) is fixed to \( {x}_{j}\left( {\lambda }^{t}\right), j = 1,\ldots, n \) .
3. Point (2) gives a lower bound for (8.5). Can you use it to introduce an additional stopping criterion in the subgradient algorithm?
Exercise 8.13. In the context of the uncapacitated facility location problem, consider the function \( z \) defined as follows for any \( x \in {\left\lbrack 0,1\right\rbrack }^{n} \) such that \( \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j} \geq 1 \) .
\[
z\left( x\right) \mathrel{\text{:=}} \max \;\mathop{\sum }\limits_{{i = 1}}^{m}\mathop{\sum }\limits_{{j = 1}}^{n}{c}_{ij}{y}_{ij} - \mathop{\sum }\limits_{{j = 1}}^{n}{f}_{j}{x}_{j}
\]
\[
\mathop{\sum }\limits_{{j = 1}}^{n}{y}_{ij} = 1\;i = 1,\ldots, m
\]
\[
{y}_{ij} \leq {x}_{j}\;\text{ for all }i, j
\]
\[
y \geq 0.
\]
1. Prove that the function \( z \) is concave in the domain over which it is defined.
2. Determine a subgradient of \( z \) for any point in the set \( S \mathrel{\text{:=}} \left\{ {x \in {\left\lbrack 0,1\right\rbrack }^{n}}\right. \) : \( \left. {\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j} \geq 1}\right\} \) .
3. Specialize the subgradient algorithm to solve \( \mathop{\max }\limits_{{x \in S}}z \) .
4. Show that \( \mathop{\max }\limits_{{x \in S}}z \) is equal to \( {z}_{LP} \) obtained by solving the linear programming relaxation of (8.5).
Exercise 8.14. Give a Dantzig-Wolfe reformulation of the uncapacitated facility location problem (8.5) based on the set \( Q \mathrel{\text{:=}} \{ \left( {x, y}\right) \in \{ 0,1{\} }^{n} \times \) \( {\mathbb{R}}^{m \times n} : {y}_{ij} \leq {x}_{j} \) for all \( \left. {i, j}\right\} \) . [Hint: For each nonempty set \( S \subseteq \{ 1,\ldots, m\} \) and \( j \in \{ 1,\ldots, n\} \), let \( {\lambda }_{S}^{j} = 1 \) if a facility located at site \( j \) satisfies the demand of all clients in the set \( S \), and 0 otherwise.]
Exercise 8.15. Consider the formulation for the network design problem given in Sect. 2.10.2.
1. Use the block diagonal structure, where each block corresponds to an arc \( a \in A \), to derive a Dantzig-Wolfe reformulation, as described in Sect. 8.2.1.
2. Use the block diagonal structure, where each block corresponds to a commodity \( k = 1,\ldots, K \), to derive a different Dantzig-Wolfe reformulation. (The reformulation will have a variable for every possible \( \left. {{s}_{k},{t}_{k}\text{-path,}k = 1,\ldots, K\text{.}}\right) \)
3. For each of these Dantzig-Wolfe reformulations, describe the pricing problem to solve the corresponding relaxation using column generation.
Exercise 8.16. In (8.26), replace the inequality constraints \( {Ax} + {Gy} \leq b \) by equality constraints \( {Ax} + {Gy} = b \) . Explain how Theorem 8.18 and its proof must be modified.
Exercise 8.17. Let \( X \subset {\mathbb{R}}^{n} \) . Given \( f : X \rightarrow \mathbb{R} \) and \( F : X \rightarrow {\mathbb{R}}^{m} \), prove that the optimization problem
\[
{z}_{I} \mathrel{\text{:=}} \max \;f\left( x\right) + {hy}
\]
\[
F\left( x\right) + {Gy} \leq b
\]
\[
x \in X
\]
\[
y \in {\mathbb{R}}_{ + }^{p}
\]
can be reformulated in a form similar to (8.26) with \( {cx} \) and \( {Ax} \) replaced by \( f\left( x\right) \) and \( F\left( x\right) \) respectively.
Exercise 8.18. Consider a problem of the form
\[
{z}_{I} \mathrel{\text{:=}} \max \;{cx}
\]
\[
{Ax} + {Gy} \leq b
\]
\[
x\; \in \;X
\]
\[
y \in {\mathbb{R}}_{ + }^{p}\text{.}
\]
where \( X \subset {\mathbb{R}}^{n} \) . Show that its Benders reformulation is of the form
\[
{z}_{I} = \max \;{cx}
\]
\[
{r}^{j}\left( {b - {Ax}}\right) \geq 0\text{ for all }j \in J
\]
\[
x \in X\text{.}
\]
where \( {\left\{ {r}^{j}\right\} }_{j \in J} \) is the set of extreme rays of the cone \( C \mathrel{\text{:=}} \left\{ {u \in {\mathbb{R}}_{ + }^{m} : {uG} \geq 0}\right\} \) .
Exercise 8.19. Consider a problem of the form
\[
{z}_{I} \mathrel{\text{:=}} \max \;{cx} + \mathop{\sum }\limits_{{i = 1}}^{m}{h}^{i}{y}^{i}
\]
\[
{A}_{i}x + {G}_{i}{y}^{i} \leq {b}^{i}\;i = 1,\ldots, m
\]
(8.32)
\[
x \in X
\]
\[
{y}^{i} \in {\mathbb{R}}_{ + }^{{p}_{i}}\;i = 1,\ldots, m
\]
where \( X \subset {\mathbb{R}}^{n} \) .
For \( i = 1,\ldots, m \), let \( {\left\{ {u}^{ik}\right\} }_{k \in {K}_{i}} \) denote the set of extreme points of the polyhedron \( {Q}_{i} \mathrel{\text{:=}} \left\{ {{u}^{i} \geq 0 : {u}^{i}{G}_{i} \geq {h}^{i}}\right\} \), and let \( {\left\{ {r}^{ij}\right\} }_{j \in {J}_{i}} \) be the set of extreme rays of the cone \( {C}_{i} \mathrel{\text{:=}} \left\{ {{u}^{i} \geq 0 : {u}^{i}{G}_{i} \geq 0}\right\} \) .
(i) Prove that problem (8.32) can be reformulated as
\[
{z}_{I} = \max \mathop{\sum }\limits_{i}{\eta }_{i} + {cx}
\]
\[
{\eta }_{i} \leq {u}^{ik}\left( {{b}^{i} - {A}_{i}x}\right) \text{ for all }k \in {K}_{i}, i = 1,\ldots, m
\]
\[
{r}^{ij}\left( {{b}^{i} - {A}_{i}x}\right) \geq 0\;\text{ for all }j \in {J}_{i}, i = 1,\ldots, m
\]
\[
x \in X,\;\eta \in {\mathbb{R}}^{m}.
\]
(ii) Prove that in the standard Benders reformulation (8.27) of (8.32),
\( \left| K\right| = \left| {K}_{1}\right| \times \left| {K}_{2}\right| \times \cdots \times \left| {K}_{m}\right| \) and \( \left| J\right| = \left| {J}_{1}\right| + \left| {J}_{2}\right| + \cdots + \left| {J}_{m}\right| \) .
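A toy illustration of part (ii) (our own, with made-up data): the dual feasible set in the standard reformulation is the product \( {Q}_{1} \times \cdots \times {Q}_{m} \), and the extreme points of a product polyhedron are exactly the tuples of extreme points of the factors, so their number multiplies, while extreme rays, living in one factor at a time, add.

```python
import itertools

# Extreme points of three bounded one-dimensional factors Q_i = [0, u_i]:
ext = [[0.0, 1.0], [0.0, 2.0], [0.0, 0.5]]   # |K_i| = 2 for each factor

# Vertices of Q_1 x Q_2 x Q_3 are all combinations of factor vertices.
product_vertices = list(itertools.product(*ext))
print(len(product_vertices))  # 8 = 2 * 2 * 2
```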
Exercise 8.20. The goal of this exercise is to find a Benders reformulation of the uncapacitated facility location problem.
\[
\min \;\sum \sum {c}_{ij}{y}_{ij}\; + \;\sum {f}_{j}{x}_{j}
\]
\[
\mathop{\sum }\limits_{j}{y}_{ij} = 1\;i = 1,\ldots, m
\]
\[
{y}_{ij} \leq {x}_{j}\;i = 1,\ldots, m, j = 1,\ldots, n
\]
\[
y \geq 0, x \in \{ 0,1{\} }^{n}.
\]
(i) Show that, for every \( x \in \{ 0,1{\} }^{n} \), the Benders subproblem can be written as \( {z}_{LP}\left( x\right) = \mathop{\sum }\limits_{{i = 1}}^{m}{z}_{LP}^{i}\left( x\right) \), where
\[
{z}_{LP}^{i}\left( x\right) \mathrel{\text{:=}} \min \;\mathop{\sum }\limits_{j}{c}_{ij}{y}_{ij}
\]
\[
\mathop{\sum }\limits_{j}{y}_{ij} = 1
\]
\[
\begin{array}{llll} {y}_{ij} & \leq & {x}_{j} & j = 1,\ldots, n \end{array}
\]
\[
y \geq 0\text{.}
\]
(ii) Characterize the extreme points and extreme rays of the polyhedron \( {Q}_{i} \mathrel{\text{:=}} \left\{ {\left( {{u}_{i},{w}_{i}}\right) \in \mathbb{R} \times {\mathbb{R}}_{ + }^{n} : {u}_{i} - {w}_{ij} \leq {c}_{ij}}\right\}, i = 1,\ldots, m. \)
(iii) Deduce from (i) and (ii) that the uncapacitated facility location problem can be reformulated as
\( \min \;\mathop{\sum }\limits_{i}{\eta }_{i} + \mathop{\sum }\limits_{j}{f}_{j}{x}_{j} \)
\[
{\eta }_{i} \geq {c}_{ik} - \mathop{\sum }\limits_{j}{\left( {c}_{ik} - {c}_{ij}\right) }^{ + }{x}_{j}\;i = 1,\ldots, m, k = 1,\ldots, n
\]
\( \mathop{\sum }\limits_{j}{x}_{j} \geq 1 \)
\[
x \in \{ 0,1{\} }^{n}.
\]
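The cuts in (iii) can be checked by brute force on small data (our sketch, not from the text): for every \( x \in \{ 0,1{\} }^{n} \) with at least one open facility, \( \mathop{\max }\limits_{k}\left\lbrack {{c}_{ik} - \mathop{\sum }\limits_{j}{\left( {c}_{ik} - {c}_{ij}\right) }^{ + }{x}_{j}}\right\rbrack \) equals the true assignment cost \( \mathop{\min }\limits_{{j : {x}_{j} = 1}}{c}_{ij} \), so the tightest cut recovers the subproblem value.

```python
import itertools

def min_assignment_cost(c_i, x):
    # true Benders subproblem value: cheapest open facility for client i
    return min(cij for cij, xj in zip(c_i, x) if xj == 1)

def benders_bound(c_i, x):
    # max over k of the cut  c_ik - sum_j (c_ik - c_ij)^+ x_j
    return max(cik - sum(max(cik - cij, 0) * xj for cij, xj in zip(c_i, x))
               for cik in c_i)

c_i = [5, 2, 9, 7, 3]   # made-up assignment costs for one client i
for x in itertools.product([0, 1], repeat=len(c_i)):
    if any(x):
        assert benders_bound(c_i, x) == min_assignment_cost(c_i, x)
print("tightest cut equals the subproblem value for every x")
```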

The authors on a hike
## Chapter 9
## Enumeration
The goal of this chapter is threefold. First we present a polynomial algorithm for integer programming in fixed dimension. This algorithm is based on elegant ideas such as basis reduction and the flatness theorem. Second we revisit branch-and-cut, the most successful approach in practice for a wide range
|
In the setting of Sect. 8.1, the integer program (8.1) can be written equivalently as
\[
{z}_{I} = \max \;{cx}
\]
\[
x - y = 0
\]
\[
{A}_{1}x \leq {b}^{1}
\]
\[
{A}_{2}y \leq {b}^{2}
\]
\[
{x}_{j},{y}_{j} \in \mathbb{Z}\text{ for }j = 1,\ldots, p
\]
\[
x, y \geq 0
\]
Let \( \bar{z} \) be the optimal solution of the Lagrangian dual obtained by dualizing the constraints \( x - y = 0 \) . Prove that
\[
\bar{z} = \max \left\{ {{cx} : x \in \operatorname{conv}\left( {Q}_{1}\right) \cap \operatorname{conv}\left( {Q}_{2}\right) }\right\}
\]
where \( {Q}_{i} \mathrel{\text{:=}} \left\{ {x \in {\mathbb{Z}}_{ + }^{p} \times {\mathbb{R}}_{ + }^{n - p} : {A}_{i}x \leq {b}^{i}}\right\}, i = 1,2 \), assuming that \( \operatorname{conv}\left( {Q}_{1}\right) \cap \operatorname{conv}\left( {Q}_{2}\right) \) is nonempty.
Proposition 2.7
1. If \( \left( {T, S}\right) \) satisfies condition \( \left( \mathrm{C}\right) \), then \( T * S = S * T \) .
2. If \( \left( {{T}_{1},\ldots ,{T}_{n}}\right) \) satisfies \( \left( \mathrm{C}\right) \), then
\[
\operatorname{Supp}\left( {{T}_{1} * \cdots * {T}_{n}}\right) \subset \operatorname{Supp}{T}_{1} + \cdots + \operatorname{Supp}{T}_{n}
\]
3. \( \delta * T = T * \delta = T \) for all \( T \in {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{d}\right) \) .
Proof. The second part of Proposition 2.6 allows us, by passing to the limit, to reduce the problem to the case of distributions with compact support, for which these properties were stated in Proposition 2.2. The reasoning is straightforward for the proof of parts 1 and 3 . We spell it out for part 2 .
If \( \left( {{T}_{1},\ldots ,{T}_{n}}\right) \) satisfies (C), then, by property 6 on page 326, the set \( F = \operatorname{Supp}{T}_{1} + \cdots + \operatorname{Supp}{T}_{n} \) is closed. On the other hand, if \( l > 0 \), we have \( \operatorname{Supp}\left( {{\rho }_{l}{T}_{j}}\right) \subset \operatorname{Supp}{T}_{j} \) for every \( j \in \{ 1,\ldots, n\} \) (in the notation of Proposition 2.6); thus, by Proposition 2.2, \( \operatorname{Supp}\left( {\left( {{\rho }_{l}{T}_{1}}\right) * \cdots * \left( {{\rho }_{l}{T}_{n}}\right) }\right) \subset \) \( F \) . We deduce that, for every \( \varphi \in \mathcal{D}\left( {\mathbb{R}}^{d}\right) \) satisfying \( \operatorname{Supp}\varphi \subset {\mathbb{R}}^{d} \smallsetminus F \) , Proposition 2.6 yields
\[
\left\langle {{T}_{1} * \cdots * {T}_{n},\varphi }\right\rangle = \mathop{\lim }\limits_{{l \rightarrow + \infty }}\left\langle {\left( {{\rho }_{l}{T}_{1}}\right) * \cdots * \left( {{\rho }_{l}{T}_{n}}\right) ,\varphi }\right\rangle = 0.
\]
Therefore \( {\mathbb{R}}^{d} \smallsetminus F \) is a domain of nullity of \( {T}_{1} * \cdots * {T}_{n} \), which proves part 2 of the proposition.
Proposition 2.8 (Continuity) Let \( {\left( {T}_{n}\right) }_{n \in \mathbb{N}} \) be a sequence in \( {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{d}\right) \) , and let \( T, S \) belong to \( {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{d}\right) \) . Suppose that the sequence \( {\left( {T}_{n}\right) }_{n \in \mathbb{N}} \) converges
to \( T \) in \( {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{d}\right) \), that there exists a closed set \( F \) in \( {\mathbb{R}}^{d} \) such that \( \operatorname{Supp}{T}_{n} \subset F \) for all \( n \in \mathbb{N} \), and that \( \left( {F,\operatorname{Supp}S}\right) \) satisfies (C). Then
\[
\mathop{\lim }\limits_{{n \rightarrow + \infty }}{T}_{n} * S = T * S
\]
in \( {\mathcal{D}}^{\prime }\left( {\mathbb{R}}^{d}\right) \) .
Proof. Take \( \varphi \in \mathcal{D}\left( {\mathbb{R}}^{d}\right) \) . As above, write \( \widehat{\varphi }\left( {x, y}\right) = \varphi \left( {x + y}\right) \) . Since the family \( \left( {F,\operatorname{Supp}S}\right) \) satisfies (C), the intersection \( \operatorname{Supp}\widehat{\varphi } \cap \left( {F \times \operatorname{Supp}S}\right) \) is compact. Let \( \rho \in \mathcal{D}\left( {{\mathbb{R}}^{d} \times {\mathbb{R}}^{d}}\right) \) satisfy \( \rho = 1 \) on an open set that contains this compact. Then, by definition,
\[
\left\langle {{T}_{n} * S,\varphi }\right\rangle = \left\langle {{\left( {T}_{n}\right) }_{x},\left\langle {{S}_{y},\rho \left( {x, y}\right) \widehat{\varphi }\left( {x, y}\right) }\right\rangle }\right\rangle .
\]
Since the map \( x \mapsto \left\langle {{S}_{y},\rho \left( {x, y}\right) \widehat{\varphi }\left( {x, y}\right) }\right\rangle \) belongs to \( \mathcal{D}\left( {\mathbb{R}}^{d}\right) \), we deduce that
\[
\mathop{\lim }\limits_{{n \rightarrow + \infty }}\left\langle {{T}_{n} * S,\varphi }\right\rangle = \left\langle {{T}_{x},\left\langle {{S}_{y},\rho \left( {x, y}\right) \widehat{\varphi }\left( {x, y}\right) }\right\rangle }\right\rangle = \langle T * S,\varphi \rangle
\]
which is the desired result.
Obviously, this result extends to families \( \left( {T}_{\lambda }\right) \), with \( \lambda \rightarrow {\lambda }_{0} \) (where \( \lambda \) runs over a subset of \( \mathbb{R} \) and \( \left. {{\lambda }_{0} \in \left\lbrack {-\infty ,\infty }\right\rbrack }\right) \) .
The next proposition explicitly defines the convolution product.
Proposition 2.9 Suppose \( \left( {T, S}\right) \) satisfies property \( \left( \mathrm{C}\right) \) . Then, for every \( \varphi \in \mathcal{D}\left( {\mathbb{R}}^{d}\right) \), the function \( \widetilde{\varphi } \) on \( {\mathbb{R}}^{d} \) defined by
\[
\widetilde{\varphi }\left( x\right) = \left\langle {{S}_{y},\varphi \left( {x + y}\right) }\right\rangle
\]
belongs to \( \mathcal{E}\left( {\mathbb{R}}^{d}\right) \), the intersection \( \operatorname{Supp}\widetilde{\varphi } \cap \operatorname{Supp}T \) is compact, and
\[
\langle T * S,\varphi \rangle = \langle T,\widetilde{\varphi }\rangle = \left\langle {{T}_{x},\left\langle {{S}_{y},\varphi \left( {x + y}\right) }\right\rangle }\right\rangle .
\]
Proof. Put \( K = \{ \left( {x, y}\right) \in \operatorname{Supp}T \times \operatorname{Supp}S : x + y \in \operatorname{Supp}\varphi \} \) . Then the support of \( \widetilde{\varphi } \) is contained in \( \operatorname{Supp}\varphi - \operatorname{Supp}S \) and \( \left( {\operatorname{Supp}\varphi - \operatorname{Supp}S}\right) \cap \operatorname{Supp}T \) is the projection of \( K \) on the first factor. Therefore \( \operatorname{Supp}\widetilde{\varphi } \cap \operatorname{Supp}T \) is compact. At the same time, if \( {\rho }_{l} \in \mathcal{D}\left( {\mathbb{R}}^{d}\right) \) satisfies \( {\rho }_{l} = 1 \) on \( B\left( {0, l}\right) \), the function
\[
{\rho }_{l}\widetilde{\varphi } : x \mapsto \left\langle {{S}_{y},{\rho }_{l}\left( x\right) \varphi \left( {x + y}\right) }\right\rangle
\]
belongs to \( \mathcal{D}\left( {\mathbb{R}}^{d}\right) \), by Theorem 1.1. Therefore \( \widetilde{\varphi } \) is of class \( {C}^{\infty } \) on \( B\left( {0, l}\right) \) for every \( l > 0 \), which is to say that \( \widetilde{\varphi } \in \mathcal{E}\left( {\mathbb{R}}^{d}\right) \) .
At the same time, by Proposition 2.8,
\[
\langle T * S,\varphi \rangle = \mathop{\lim }\limits_{{l \rightarrow + \infty }}\mathop{\lim }\limits_{{{l}^{\prime } \rightarrow + \infty }}\left\langle {{\rho }_{l}T * {\rho }_{{l}^{\prime }}S,\varphi }\right\rangle
\]
\[
= \mathop{\lim }\limits_{{l \rightarrow + \infty }}\mathop{\lim }\limits_{{{l}^{\prime } \rightarrow + \infty }}\left\langle {{T}_{x},{\rho }_{l}\left( x\right) \left\langle {{S}_{y},{\rho }_{{l}^{\prime }}\left( y\right) \varphi \left( {x + y}\right) }\right\rangle }\right\rangle .
\]
Now, if \( B\left( {0,{l}^{\prime }}\right) \supset \operatorname{Supp}\varphi - \operatorname{Supp}{\rho }_{l} \), we have
\[
\operatorname{Supp}\left( {\varphi \left( {x + \cdot }\right) }\right) \subset B\left( {0,{l}^{\prime }}\right) \;\text{ for every }x \in \operatorname{Supp}{\rho }_{l}.
\]
Therefore \( {\rho }_{{l}^{\prime }}\left( y\right) \varphi \left( {x + y}\right) = \varphi \left( {x + y}\right) \) . We deduce that
\[
\langle T * S,\varphi \rangle = \mathop{\lim }\limits_{{l \rightarrow + \infty }}\left\langle {{T}_{x},{\rho }_{l}\left( x\right) \left\langle {{S}_{y},\varphi \left( {x + y}\right) }\right\rangle }\right\rangle .
\]
By definition, if \( B\left( {0, l}\right) \supset \operatorname{Supp}\widetilde{\varphi } \cap \operatorname{Supp}T \), then
\[
\left\langle {{T}_{x},{\rho }_{l}\left( x\right) \left\langle {{S}_{y},\varphi \left( {x + y}\right) }\right\rangle }\right\rangle = \left\langle {{T}_{x},\left\langle {{S}_{y},\varphi \left( {x + y}\right) }\right\rangle }\right\rangle ,
\]
which proves the result.
This result can be extended to the case where \( T \in {\mathcal{D}}^{\prime m}\left( {\mathbb{R}}^{d}\right), S \in \) \( {\mathcal{D}}^{\prime n}\left( {\mathbb{R}}^{d}\right) \), and \( \varphi \in {\mathcal{D}}^{m + n}\left( {\mathbb{R}}^{d}\right) \) ; see Exercise 7 below.
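As a quick illustration of the formula in Proposition 2.9 (a standard computation, not carried out in the text), take two Dirac masses, whose supports are compact so that (C) automatically holds:

```latex
\langle \delta_a * \delta_b, \varphi \rangle
  = \bigl\langle (\delta_a)_x, \langle (\delta_b)_y, \varphi(x+y) \rangle \bigr\rangle
  = \varphi(a+b),
\qquad \text{hence } \delta_a * \delta_b = \delta_{a+b}.
```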
Corollary 2.10 Let \( f \) and \( g \) be elements of \( {L}_{\mathrm{{loc}}}^{1}\left( {\mathbb{R}}^{d}\right) \) whose supports satisfy condition (C). Then \( f \) and \( g \) are convolvable in the sense of the definition on page 171; moreover \( f * g \in {L}_{\mathrm{{loc}}}^{1}\left( {\mathbb{R}}^{d}\right) \) and
\[
\left\lbrack f\right\rbrack * \left\lbrack g\right\rbrack = \left\lbrack {f * g}\right\rbrack
\]
Proof. For every \( \varphi \in \mathcal{D}\left( {\mathbb{R}}^{d}\right) \) ,
\[
\iint \left| {f\left( {x - y}\right) }\right| \left| {g\left( y\right) }\right| \left| {\varphi \left( x\right) }\right| {dxdy} = \iint \left| {f\left( x\right) }\right| \left| {g\left( y\right) }\right| \left| {\varphi \left( {x + y}\right) }\right| {dxdy}
\]
(because Lebesgue measure is invariant under translations); the term on the right is finite because the supports of \( f \) and \( g \) satisfy condition (C). This proves that \( f \) and \( g \) are convolvable and that \( f * g \in {L}_{\text{loc }}^{1}\left( {\mathbb{R}}^{d}\right) \) . Moreover, if \( \varphi \in \mathcal{D}\left( {\mathbb{R}}^{d}\right) \), we have
\[
\langle \left\lbrack {f * g}\right\rbrack ,\varphi \rangle = \int f\left( x\right) \left( {\int g\left( y\right) \varphi \left( {x + y}\right) {dy}}\right) {dx}
\]
by Fubini’s Theorem, and this quantity equals \( \langle \left\lbrack f\right\rbrack * \left\lbrack g\right\rbrack ,\varphi \rangle \) by Proposition 2.9.
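Corollary 2.10 can be made concrete with a pair of unbounded-support functions of my own choosing (an illustrative sketch, not the book's example): \( f = g = \chi_{[0,\infty)} \) lie in \( L^1_{\mathrm{loc}} \), their supports satisfy (C), and the classical convolution is \( (f*g)(x) = x \) for \( x \geq 0 \).

```python
import numpy as np

# Riemann-sum check (illustration only) that f*g(x) = x for x >= 0 when
# f = g = indicator of [0, infinity); the supports satisfy condition (C).
dx = 1e-3
y = np.arange(0.0, 10.0, dx)          # grid covering the relevant support

def f(t):
    return (t >= 0).astype(float)

errors = [abs(np.sum(f(x - y) * f(y)) * dx - x) for x in (0.5, 1.0, 3.0)]
print("max quadrature error:", max(errors))
```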
Proposition 2.11 (Associativity) Let \( \left( {{T}_{1},{T}_{2},{T}_{3}}\right) \) be a family of distributions on \( {\mathbb{R}}^{d} \) satisfying (C). The distributions \( \left( {{T}_{1} * {T}_{2}}\right) * {T}_{3} \) and \( {T}_{1} * \left( {{T}_{2} * {T}_{3}}\right) \) are well-defined and coincide.
Proof. By property 1 on page 326, the distributions \( {T}_{1} * {T}_{2} \) and \( {T}_{2} * {T}_{3} \) are well defined and, by Proposition 2.7,
\( \operatorname{Supp}\left( {{T}_{1} * {T}_{2}}\right) \subset \operatorname{Supp}{T}_{1} + \operatorname{Supp}{T}_{2}\;\text{ and }\;\operatorname{Supp}\left( {{T}_{2} * {T}_{3}}\right) \subset \operatorname{Supp}{T}_{2} + \operatorname{Supp}{T}_{3}. \)
Proposition 5.5.6. Let \( \mathcal{B} \) be a Banach space and \( \left( {X,\mu }\right) \) a \( \sigma \) -finite measure space.
(a) The set \( \left\{ {\mathop{\sum }\limits_{{j = 1}}^{m}{\chi }_{{E}_{j}}{u}_{j} : {u}_{j} \in \mathcal{B},{E}_{j} \subseteq X}\right. \) are pairwise disjoint and \( \left. {\mu \left( {E}_{j}\right) < \infty }\right\} \) is dense in \( {L}^{p}\left( {X,\mathcal{B}}\right) \) whenever \( 0 < p < \infty \) .
(b) The set \( \left\{ {\mathop{\sum }\limits_{{j = 0}}^{\infty }{\chi }_{{E}_{j}}{u}_{j} : {u}_{j} \in \mathcal{B},{E}_{j} \subseteq X\text{are pairwise disjoint and}X = { \cup }_{j = 0}^{\infty }{E}_{j}}\right\} \) is dense in \( {L}^{\infty }\left( {X,\mathcal{B}}\right) \) .
(c) The space \( {\mathcal{C}}_{0}^{\infty } \otimes \mathcal{B} \) of functions of the form \( \mathop{\sum }\limits_{{j = 1}}^{m}{\varphi }_{j}{u}_{j} \), where \( {u}_{j} \in \mathcal{B},{\varphi }_{j} \) are in \( {\mathcal{C}}_{0}^{\infty }\left( {\mathbf{R}}^{n}\right) \), is dense in \( {L}^{p}\left( {{\mathbf{R}}^{n},\mathcal{B}}\right) \) for \( 1 \leq p < \infty \) .
Proof. If \( F \in {L}^{p}\left( {X,\mathcal{B}}\right) \) for \( 0 < p \leq \infty \), then \( F \) is \( \mathcal{B} \) -measurable; thus there exists \( {X}_{0} \subseteq X \) satisfying \( \mu \left( {X \smallsetminus {X}_{0}}\right) = 0 \) and \( F\left\lbrack {X}_{0}\right\rbrack \subseteq {\mathcal{B}}_{0} \), where \( {\mathcal{B}}_{0} \) is some separable subspace of \( \mathcal{B} \) . Choose a countable dense sequence \( {\left\{ {u}_{j}\right\} }_{j = 1}^{\infty } \) of \( {\mathcal{B}}_{0} \) .
(a) First assume that \( p < \infty \) . Since \( X \) is \( \sigma \) -finite, for any \( \varepsilon > 0 \), there exists a measurable subset \( {X}_{1} \) of \( {X}_{0} \) with \( \mu \left( {X}_{1}\right) < \infty \) such that
\[
{\int }_{X \smallsetminus {X}_{1}}\parallel F\left( x\right) {\parallel }_{\mathcal{B}}^{p}{d\mu } < \frac{{\varepsilon }^{p}}{3}.
\]
Setting
\[
\widetilde{B}\left( {{u}_{j},\varepsilon }\right) = \left\{ {u \in {\mathcal{B}}_{0} : {\begin{Vmatrix}u - {u}_{j}\end{Vmatrix}}_{\mathcal{B}} < \varepsilon {\left( 3\mu \left( {X}_{1}\right) \right) }^{-\frac{1}{p}}}\right\} ,
\]
we have \( {\mathcal{B}}_{0} \subseteq \mathop{\bigcup }\limits_{{j = 1}}^{\infty }\widetilde{B}\left( {{u}_{j},\varepsilon }\right) \) . Let \( {A}_{1} = \widetilde{B}\left( {{u}_{1},\varepsilon }\right) \) and \( {A}_{j} = \widetilde{B}\left( {{u}_{j},\varepsilon }\right) \smallsetminus \left( {\mathop{\bigcup }\limits_{{i = 1}}^{{j - 1}}\widetilde{B}\left( {{u}_{i},\varepsilon }\right) }\right) \) for \( j \geq 2 \) . It is easily seen that \( {\left\{ {A}_{j}\right\} }_{j = 1}^{\infty } \) are pairwise disjoint and \( \mathop{\bigcup }\limits_{{j = 1}}^{\infty }{A}_{j} = \) \( \mathop{\bigcup }\limits_{{j = 1}}^{\infty }\widetilde{B}\left( {{u}_{j},\varepsilon }\right) \) . Set \( {E}_{j} = {F}^{-1}\left\lbrack {A}_{j}\right\rbrack \cap {X}_{1} \) . Then \( {X}_{1} = \mathop{\bigcup }\limits_{{j = 1}}^{\infty }{E}_{j} \) and \( {\left\{ {E}_{j}\right\} }_{j = 1}^{\infty } \) are pairwise disjoint. Since \( \mu \left( {X}_{1}\right) = \mathop{\sum }\limits_{{j = 1}}^{\infty }\mu \left( {E}_{j}\right) < \infty \), it follows that \( \mu \left( {E}_{j}\right) < \infty \) and also that for some \( m \in {\mathbf{Z}}^{ + } \) ,
\[
{\int }_{\mathop{\bigcup }\limits_{{j = m + 1}}^{\infty }{E}_{j}}\parallel F\left( x\right) {\parallel }_{\mathcal{B}}^{p}{d\mu } < \frac{{\varepsilon }^{p}}{3}.
\]
(5.5.20)
Moreover, one can easily verify that \( \mathop{\sum }\limits_{{j = 1}}^{m}{\chi }_{{E}_{j}}{u}_{j} \) is \( \mathcal{B} \) -measurable. Notice that \( {\begin{Vmatrix}F\left( x\right) - {u}_{j}\end{Vmatrix}}_{\mathcal{B}} < \varepsilon {\left( 3\mu \left( {X}_{1}\right) \right) }^{-1/p} \) for any \( x \in {E}_{j} \) and \( j \in \{ 1,\ldots, m\} \) . This fact combined with (5.5.20) and the mutual disjointness of \( {\left\{ {E}_{j}\right\} }_{j = 1}^{m} \) yields that
\[
{\int }_{X}{\begin{Vmatrix}F\left( x\right) - \mathop{\sum }\limits_{{j = 1}}^{m}{\chi }_{{E}_{j}}\left( x\right) {u}_{j}\end{Vmatrix}}_{\mathcal{B}}^{p}{d\mu } = {\int }_{X \smallsetminus {X}_{1}}\parallel F\left( x\right) {\parallel }_{\mathcal{B}}^{p}{d\mu } + {\int }_{{ \cup }_{j = m + 1}^{\infty }{E}_{j}}\parallel F\left( x\right) {\parallel }_{\mathcal{B}}^{p}{d\mu }
\]
\[
+ {\int }_{\mathop{\bigcup }\limits_{{j = 1}}^{m}{E}_{j}}{\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{m}{\chi }_{{E}_{j}}\left( x\right) \left\lbrack F\left( x\right) - {u}_{j}\right\rbrack \end{Vmatrix}}_{\mathcal{B}}^{p}{d\mu }
\]
\[
< \frac{{\varepsilon }^{p}}{3} + \frac{{\varepsilon }^{p}}{3} + \frac{{\varepsilon }^{p}}{3} = {\varepsilon }^{p}.
\]
(b) Now consider the case \( p = \infty \) . Obviously we have \( {\mathcal{B}}_{0} \subseteq \mathop{\bigcup }\limits_{{j = 1}}^{\infty }B\left( {{u}_{j},\varepsilon }\right) \), where \( B\left( {{u}_{j},\varepsilon }\right) = \left\{ {u \in {\mathcal{B}}_{0} : {\begin{Vmatrix}u - {u}_{j}\end{Vmatrix}}_{\mathcal{B}} < \varepsilon }\right\} \) . Let \( {A}_{1} = B\left( {{u}_{1},\varepsilon }\right) \) and for \( j \geq 2 \) define sets \( {A}_{j} = B\left( {{u}_{j},\varepsilon }\right) \smallsetminus \left( {\mathop{\bigcup }\limits_{{i = 1}}^{{j - 1}}B\left( {{u}_{i},\varepsilon }\right) }\right) \) . Let \( {E}_{j} = {F}^{-1}\left\lbrack {A}_{j}\right\rbrack \) for \( j \geq 1 \) and \( {E}_{0} = X \smallsetminus \left( {\mathop{\bigcup }\limits_{{j = 1}}^{\infty }{E}_{j}}\right) \) . Then \( \mu \left( {E}_{0}\right) = 0 \) . As in the proof of the case \( p < \infty \), we have that \( {\left\{ {E}_{j}\right\} }_{j = 0}^{\infty } \) are pairwise disjoint and \( {X}_{0} \subseteq \mathop{\bigcup }\limits_{{j = 0}}^{\infty }{E}_{j} \) . Pick \( {u}_{0} = 0 \) . Notice that \( \mathop{\sum }\limits_{{j = 0}}^{\infty }{\chi }_{{E}_{j}}{u}_{j} \) is \( \mathcal{B} \) - measurable. Since \( {\begin{Vmatrix}F\left( x\right) - {u}_{j}\end{Vmatrix}}_{\mathcal{B}} < \varepsilon \) for any \( x \in {E}_{j} \) and \( j \geq 0 \), we have
\[
{\begin{Vmatrix}F - \mathop{\sum }\limits_{{j = 0}}^{\infty }{\chi }_{{E}_{j}}{u}_{j}\end{Vmatrix}}_{{L}^{\infty }\left( {X,\mathcal{B}}\right) } = {\begin{Vmatrix}\mathop{\sum }\limits_{{j = 0}}^{\infty }{\chi }_{{E}_{j}}\left( F - {u}_{j}\right) \end{Vmatrix}}_{{L}^{\infty }\left( {X,\mathcal{B}}\right) } < \varepsilon ,
\]
which completes the proof in the case \( p = \infty \) .
(c) For the last assertion, we fix a smooth function \( \varphi \) supported in the unit ball of \( {\mathbf{R}}^{n} \) with integral one. Let \( {\varphi }_{\delta }\left( x\right) = {\delta }^{-n}\varphi \left( {x/\delta }\right) \) for \( x \in {\mathbf{R}}^{n} \) and \( \delta > 0 \) . Given a function \( \mathop{\sum }\limits_{{j = 1}}^{m}{\chi }_{{E}_{j}}{u}_{j} \) as in part (a) approximating a given \( f \) in \( {L}^{p} \otimes \mathcal{B} \), we consider the function \( \mathop{\sum }\limits_{{j = 1}}^{m}\left( {{\chi }_{{E}_{j}} * {\varphi }_{\delta }}\right) {u}_{j} \), which lies in \( {\mathcal{C}}_{0}^{\infty } \otimes \mathcal{B} \) . Since \( {\begin{Vmatrix}{\chi }_{{E}_{j}} * {\varphi }_{\delta } - {\chi }_{{E}_{j}}\end{Vmatrix}}_{{L}^{p}} \rightarrow 0 \) as \( \delta \rightarrow 0 \) when \( 1 \leq p < \infty \), the function \( \mathop{\sum }\limits_{{j = 1}}^{m}\left( {{\chi }_{{E}_{j}} * {\varphi }_{\delta }}\right) {u}_{j} \) tends to \( \mathop{\sum }\limits_{{j = 1}}^{m}{\chi }_{{E}_{j}}{u}_{j} \) in \( {L}^{p} \otimes \mathcal{B} \) as \( \delta \rightarrow 0 \), and the conclusion follows.
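The mollification step in part (c) can be visualized in one dimension. The sketch below uses an assumed Gaussian mollifier of my own choosing (the text's \( \varphi \) is an arbitrary bump function): convolving \( \chi_{[0,1]} \) with \( \varphi_\delta \) converges to \( \chi_{[0,1]} \) in \( L^1 \), so the error should shrink as \( \delta \) decreases.

```python
import numpy as np

# One-dimensional sketch of part (c) with an assumed Gaussian mollifier:
# the L^1 distance between chi_[0,1] * phi_delta and chi_[0,1] shrinks
# as delta -> 0.
dx = 1e-3
t = np.arange(-2.0, 3.0, dx)
chi = ((t >= 0.0) & (t <= 1.0)).astype(float)
s = np.arange(-0.5, 0.5 + dx, dx)     # symmetric grid for the mollifier

def l1_error(delta):
    phi = np.exp(-s**2 / (2.0 * delta**2))
    phi /= phi.sum() * dx             # normalize to integral one
    smoothed = np.convolve(chi, phi, mode="same") * dx
    return np.sum(np.abs(smoothed - chi)) * dx

err_coarse, err_fine = l1_error(0.1), l1_error(0.02)
print(err_coarse, err_fine)
```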
Let \( \left( {X,\mu }\right) \) be a measure space. If \( F \) is an element of \( {L}^{1}\left( X\right) \otimes \mathcal{B} \) given as in (5.5.17), we define its integral (which is an element of \( \mathcal{B} \) ) by setting
\[
{\int }_{X}F\left( x\right) {d\mu }\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{m}\left( {{\int }_{X}{f}_{j}\left( x\right) {d\mu }\left( x\right) }\right) {u}_{j}.
\]
Observe that for every \( F \in {L}^{1}\left( X\right) \otimes \mathcal{B} \) we have
\[
{\begin{Vmatrix}{\int }_{X}F\left( x\right) d\mu \left( x\right) \end{Vmatrix}}_{\mathcal{B}} = \mathop{\sup }\limits_{{{\begin{Vmatrix}{u}^{ * }\end{Vmatrix}}_{{\mathcal{B}}^{ * }} \leq 1}}\left| \left\langle {{u}^{ * },\mathop{\sum }\limits_{{j = 1}}^{m}\left( {{\int }_{X}{f}_{j}{d\mu }}\right) {u}_{j}}\right\rangle \right|
\]
\[
= \mathop{\sup }\limits_{{{\begin{Vmatrix}{u}^{ * }\end{Vmatrix}}_{{\mathcal{B}}^{ * }} \leq 1}}\left| {{\int }_{X}\left\langle {{u}^{ * },\mathop{\sum }\limits_{{j = 1}}^{m}{f}_{j}{u}_{j}}\right\rangle {d\mu }}\right|
\]
\[
\leq {\int }_{X}\mathop{\sup }\limits_{{{\begin{Vmatrix}{u}^{ * }\end{Vmatrix}}_{{\mathcal{B}}^{ * }} \leq 1}}\left| \left\langle {{u}^{ * },\mathop{\sum }\limits_{{j = 1}}^{m}{f}_{j}{u}_{j}}\right\rangle \right| {d\mu }
\]
\[
= \parallel F{\parallel }_{{L}^{1}\left( {X,\mathcal{B}}\right) }.
\]
Thus the linear operator
\[
F \mapsto {I}_{F} = {\int }_{X}F\left( x\right) {d\mu }\left( x\right)
\]
is bounded from \( {L}^{1}\left( X\right) \otimes \mathcal{B} \) into \( \mathcal{B} \) . Since every element of \( {L}^{1}\left( {X,\mathcal{B}}\right) \) is a (norm) limit (Proposition 5.5.6 (c)) of a sequence of elements in \( {L}^{1}\left( X\right) \otimes \mathcal{B} \), by continuity, the operator \( F \mapsto {I}_{F} \) has a unique extension to \( {L}^{1}\left( {X,\mathcal{B}}\right) \), which we call the Bochner integral of \( F \) and denote by
\[
{\int }_{X}F\left( x\right) {d\mu }\left( x\right)
\]
\( {L}^{1}\left( {X,\mathcal{B}}\right) \) is called the space of all Bochner integrable functions from \( X \) to \( \mathcal{B} \) . Since the Bochner integral is an extension of \( {I}_{F} \), for each \( F \in {L}^{1}\left( {X,\mathcal{B}}\right) \) we have
\[
{\begin{Vmatrix}{\int }_{X}F\left( x\right) dx\end{Vmatrix}}_{\mathcal{B}} \leq {\int }_{X}\parallel F\left( x\right) {\parallel }_{\mathcal{B}}{d\mu }\left( x\right) .
\]
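A finite special case makes this inequality tangible (my illustration, with an assumed choice of space): take \( \mathcal{B} = \mathbf{R}^3 \) with the Euclidean norm and \( X = \{1,\ldots,5\} \) with counting measure, so the Bochner integral is just a sum of vectors and the inequality is the triangle inequality.

```python
import numpy as np

# Finite check (illustration only): with B = R^3 and counting measure on
# five points, || integral of F || <= integral of ||F(x)||.
rng = np.random.default_rng(1)
F = rng.normal(size=(5, 3))             # F(x_k) in R^3 for k = 1,...,5
lhs = np.linalg.norm(F.sum(axis=0))     # norm of the "integral" of F
rhs = np.linalg.norm(F, axis=1).sum()   # "integral" of the norms
print(lhs, rhs)
```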
Example 7.1 The vibrations of a string are modeled by the so-called wave equation
\[
\frac{{\partial }^{2}w}{\partial {x}^{2}} = \frac{1}{{c}^{2}}\frac{{\partial }^{2}w}{\partial {t}^{2}}
\]
where \( w = w\left( {x, t}\right) \) denotes the vertical elongation and \( c \) is the speed of sound in the string. Assuming that the string is clamped at \( x = 0 \) and \( x = 1 \), the boundary conditions \( w\left( {0, t}\right) = w\left( {1, t}\right) = 0 \) must be satisfied for all times \( t \) . Obviously, the time-harmonic wave
\[
w\left( {x, t}\right) = v\left( x\right) {e}^{i\omega t}
\]
with frequency \( \omega \) solves the wave equation, provided that the space-dependent part \( v \) satisfies
\[
- {v}^{\prime \prime } = {\lambda v}\;\text{ on }\left\lbrack {0,1}\right\rbrack
\]
where \( \lambda \mathrel{\text{:=}} {\omega }^{2}/{c}^{2} \) . The boundary conditions \( w\left( {0, t}\right) = w\left( {1, t}\right) = 0 \) are satisfied if \( v \) satisfies the boundary conditions
\[
v\left( 0\right) = v\left( 1\right) = 0.
\]
Hence, introducing the linear space
\( U \mathrel{\text{:=}} \{ v \in C\left\lbrack {0,1}\right\rbrack : v \) is twice continuously differentiable, \( v\left( 0\right) = v\left( 1\right) = 0\} \)
and defining the differential operator \( D : U \rightarrow C\left\lbrack {0,1}\right\rbrack \) by \( D : v \mapsto - {v}^{\prime \prime } \) , we are led to the eigenvalue problem \( {Dv} = {\lambda v} \) . Elementary calculations show that the functions \( {v}_{m}\left( x\right) = \sin {m\pi x} \) are eigenfunctions of \( D \) with the eigenvalues \( {\lambda }_{m} = {m}^{2}{\pi }^{2} \) for \( m = 1,2,\ldots \) . It can be shown that these are the only eigenvalues and eigenfunctions of \( D \) .
For discussing an approximate solution we consider the slightly more general differential equation
\[
- {v}^{\prime \prime } + {pv} = {\lambda v}\;\text{ on }\left\lbrack {0,1}\right\rbrack
\]
with boundary conditions \( v\left( 0\right) = v\left( 1\right) = 0 \), where \( p \in C\left\lbrack {0,1}\right\rbrack \) is a given positive function. We can proceed as in Example 2.1 and choose an equidistant mesh \( {x}_{j} = {jh}, j = 0,\ldots, n + 1 \), with step size \( h = 1/\left( {n + 1}\right) \) and \( n \in \mathbb{N} \) . At the internal grid points \( {x}_{j}, j = 1,\ldots, n \), we replace the differential quotient by the difference quotient
\[
{v}^{\prime \prime }\left( {x}_{j}\right) \approx \frac{1}{{h}^{2}}\left\{ {v\left( {x}_{j + 1}\right) - {2v}\left( {x}_{j}\right) + v\left( {x}_{j - 1}\right) }\right\}
\]
to obtain the system of equations
\[
\frac{1}{{h}^{2}}\left\{ {-{v}_{j - 1} + 2{v}_{j} - {v}_{j + 1}}\right\} + {p}_{j}{v}_{j} = \lambda {v}_{j},\;j = 1,\ldots, n,
\]
for approximate values \( {v}_{j} \) to the exact solution \( v\left( {x}_{j}\right) \) . Here, we have set \( {p}_{j} \mathrel{\text{:=}} p\left( {x}_{j}\right) \) for \( j = 0,\ldots, n + 1 \) . This system has to be complemented by the two boundary conditions \( {v}_{0} = {v}_{n + 1} = 0 \) . For an abbreviated notation we introduce the \( n \times n \) tridiagonal matrix
\[
A = \frac{1}{{h}^{2}}\left( \begin{matrix} 2 + {h}^{2}{p}_{1} & - 1 & & & \\ - 1 & 2 + {h}^{2}{p}_{2} & - 1 & & \\ & \ddots & \ddots & \ddots & \\ & & - 1 & 2 + {h}^{2}{p}_{n - 1} & - 1 \\ & & & - 1 & 2 + {h}^{2}{p}_{n} \end{matrix}\right)
\]
and the vector \( u = {\left( {v}_{1},\ldots ,{v}_{n}\right) }^{T} \) . Then the above system of equations, including the boundary conditions, reads
\[
{Au} = {\lambda u}
\]
i.e., the eigenvalue problem for the differential operator \( D \) is approximated by the eigenvalue problem for the matrix \( A \) .
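For the special case \( p = 0 \) (my choice, to have exact eigenvalues available), \( A \) reduces to the standard second-difference matrix and its smallest eigenvalues can be compared against the exact values \( \lambda_m = m^2\pi^2 \) of \( D \):

```python
import numpy as np

# Sketch (with p = 0): the matrix eigenvalues approximate the exact
# eigenvalues lambda_m = m^2 * pi^2 of -v'' = lambda v, v(0) = v(1) = 0.
n = 200
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
eigs = np.sort(np.linalg.eigvalsh(A))
exact = np.array([(m * np.pi) ** 2 for m in (1, 2, 3)])
rel_err = np.max(np.abs(eigs[:3] - exact) / exact)
print(eigs[:3], "vs", exact)
```

The smallest discrete eigenvalues agree with \( m^2\pi^2 \) to a relative accuracy of order \( h^2 \), consistent with the second-order difference quotient.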
The important question as to how well the matrix eigenvalues approximate the eigenvalues of the differential operator and whether we have convergence of the eigenvalues as \( h \rightarrow 0 \) is beyond the scope of this book (see Problem 7.2). The example is meant only as an illustration of the fact that eigenvalue problems for large matrices arise through the discretization of eigenvalue problems for ordinary differential operators and also for partial differential operators. In the same spirit, eigenvalue problems for integral operators can be approximated by matrix eigenvalue problems, as indicated in the following example.
Example 7.2 Consider the eigenvalue problem
\[
{\int }_{0}^{1}K\left( {x, y}\right) \varphi \left( y\right) {dy} = {\lambda \varphi }\left( x\right) ,\;x \in \left\lbrack {0,1}\right\rbrack
\]
for a linear integral operator with continuous kernel \( K \) . For the numerical approximation we proceed as in Example 2.3 and approximate the integral by the rectangular rule with equidistant quadrature points \( {x}_{k} = k/n \) for \( k = 1,\ldots, n \) . If we require the approximated equation to be satisfied only at the grid points, we arrive at the approximating system of equations
\[
\frac{1}{n}\mathop{\sum }\limits_{{k = 1}}^{n}K\left( {{x}_{j},{x}_{k}}\right) {\varphi }_{k} = \lambda {\varphi }_{j},\;j = 1,\ldots, n
\]
for approximate values \( {\varphi }_{j} \) to the exact solution \( \varphi \left( {x}_{j}\right) \) . Hence, we approximate the eigenvalues of the integral operator by the eigenvalues of the matrix with entries \( K\left( {{x}_{j},{x}_{k}}\right) /n \) . Of course, instead of the rectangular rule any other quadrature rule can be used. A discussion of the convergence of the matrix eigenvalues to the eigenvalues of the integral operator is again beyond the aim of this introduction.
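A separable kernel makes the convergence easy to observe, since the exact spectrum is known. The sketch below uses the assumed kernel \( K(x,y) = xy \) (my choice, not from the text), whose only nonzero eigenvalue is \( \int_0^1 y\,y\,dy = 1/3 \) with eigenfunction \( \varphi(x) = x \).

```python
import numpy as np

# Illustration with the assumed separable kernel K(x, y) = x*y: the only
# nonzero eigenvalue of the integral operator is 1/3 (eigenfunction x),
# and the rectangular-rule matrix should reproduce it approximately.
n = 200
xk = np.arange(1, n + 1) / n            # quadrature points x_k = k/n
K = np.outer(xk, xk) / n                # matrix with entries K(x_j, x_k)/n
lam = np.linalg.eigvalsh(K)[-1]         # largest matrix eigenvalue
print(lam)
```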
## 7.2 Estimates for the Eigenvalues
At this point we urge the reader to recall the basic facts about eigenvalues of matrices, in particular those that were presented in Section 3.4. In the sequel, by \( \left( {\cdot , \cdot }\right) \) we denote the Euclidean scalar product in \( {\mathbb{C}}^{n} \) and by \( \parallel \cdot {\parallel }_{2} \) the corresponding Euclidean norm.
The eigenvalues of Hermitian matrices can be characterized by the following maximum principles. These can be used to get some rough estimates for the eigenvalues. Note that for the eigenvalues of Hermitian matrices the geometric and the algebraic multiplicity coincide (see Problem 7.4).
Theorem 7.3 (Rayleigh) Let \( A \) be a Hermitian \( n \times n \) matrix with eigenvalues
\[
{\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n}
\]
(where multiple eigenvalues occur according to their multiplicity) and corresponding orthonormal eigenvectors \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \) . Then
\[
{\lambda }_{j} = \mathop{\max }\limits_{\substack{{x \in {V}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) },\;j = 1,\ldots, n
\]
where the subspaces \( {V}_{1},\ldots ,{V}_{n} \) are defined by \( {V}_{1} \mathrel{\text{:=}} {\mathbb{C}}^{n} \) and
\[
{V}_{j} \mathrel{\text{:=}} \left\{ {x \in {\mathbb{C}}^{n} : \left( {x,{x}_{k}}\right) = 0, k = 1,\ldots, j - 1}\right\} ,\;j = 2,\ldots, n.
\]
Proof. Let \( x \in {V}_{j} \) with \( x \neq 0 \) . Then
\[
x = \mathop{\sum }\limits_{{k = j}}^{n}\left( {x,{x}_{k}}\right) {x}_{k}\;\text{ and }\;\mathop{\sum }\limits_{{k = j}}^{n}{\left| \left( x,{x}_{k}\right) \right| }^{2} = \left( {x, x}\right) .
\]
Hence
\[
{Ax} = \mathop{\sum }\limits_{{k = j}}^{n}{\lambda }_{k}\left( {x,{x}_{k}}\right) {x}_{k}
\]
and
\[
\left( {{Ax}, x}\right) = \mathop{\sum }\limits_{{k = j}}^{n}{\lambda }_{k}{\left| \left( x,{x}_{k}\right) \right| }^{2} \leq {\lambda }_{j}\mathop{\sum }\limits_{{k = j}}^{n}{\left| \left( x,{x}_{k}\right) \right| }^{2} = {\lambda }_{j}\left( {x, x}\right) .
\]
This implies
\[
\mathop{\sup }\limits_{\substack{{x \in {V}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } \leq {\lambda }_{j}
\]
and the statement follows from \( \left( {A{x}_{j},{x}_{j}}\right) = {\lambda }_{j} \) and \( {x}_{j} \in {V}_{j} \) .
This maximum principle can be used in a simple manner to obtain lower bounds for the largest eigenvalue of Hermitian matrices. For the matrix
\[
A = \left( \begin{array}{lll} 1 & 3 & 2 \\ 3 & 5 & 1 \\ 2 & 1 & 4 \end{array}\right)
\]
by using \( x = {\left( 1,1,1\right) }^{T} \) we find the estimate \( {\lambda }_{1} \geq {7.33} \), as compared to the exact eigenvalue \( {\lambda }_{1} = {7.58}\ldots \) Using \( x = {\left( 1,2,1\right) }^{T} \) leads to the improved estimate \( {\lambda }_{1} \geq {7.50} \).
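These estimates are easy to reproduce numerically. The following sketch (numpy assumed; it is not part of the text) evaluates the Rayleigh quotients for the two trial vectors and compares them with the largest eigenvalue:

```python
import numpy as np

# Reproducing the Rayleigh-quotient estimates from the text.
A = np.array([[1.0, 3.0, 2.0],
              [3.0, 5.0, 1.0],
              [2.0, 1.0, 4.0]])

def rayleigh_quotient(A, x):
    """(Ax, x) / (x, x): a lower bound for the largest eigenvalue lambda_1."""
    x = np.asarray(x, dtype=float)
    return float(x @ A @ x) / float(x @ x)

r1 = rayleigh_quotient(A, [1, 1, 1])   # 22/3, roughly 7.33
r2 = rayleigh_quotient(A, [1, 2, 1])   # 45/6 = 7.5
lam_max = np.linalg.eigvalsh(A)[-1]    # exact largest eigenvalue, 7.58...
assert r1 <= r2 <= lam_max
```

Any nonzero trial vector gives a lower bound; a vector closer to the leading eigenvector gives a sharper one.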
Using Rayleigh's principle to obtain bounds for the smaller eigenvalues requires the knowledge of the eigenvectors for the preceding larger eigenvalues. This problem is circumvented in the following minimum maximum principle.
Theorem 7.4 (Courant) Let \( A \) be a Hermitian \( n \times n \) matrix with eigenvalues
\[
{\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n}
\]
(where multiple eigenvalues occur according to their multiplicity). Then
\[
{\lambda }_{j} = \mathop{\min }\limits_{{{U}_{j} \in {M}_{j}}}\mathop{\max }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) },\;j = 1,\ldots, n
\]
where \( {M}_{j} \) denotes the set of all subspaces \( {U}_{j} \subset {\mathbb{C}}^{n} \) of dimension \( n + 1 - j \) .
Proof. First we note that because of
\[
\mathop{\sup }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } = \mathop{\sup }\limits_{\substack{{x \in {U}_{j}} \\ {\left( {x, x}\right) = 1} }}\left( {{Ax}, x}\right)
\]
and the continuity of the function \( x \mapsto \left( {{Ax}, x}\right) \), the supremum is attained; i.e., the maximum exists.
By \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \) we denote orthonormal eigenvectors corresponding to the eigenvalues \( {\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n} \) . First, we show that for a given subspace \( {U}_{j} \) of dimension \( n + 1 - j \) there exists a vector \( x \in {U}_{j} \) such that
\[
\left( {x,{x}_{k}}\right) = 0,\;k = j + 1,\ldots, n.
|
Example 7.1 The vibrations of a string are modeled by the so-called wave equation
\[
\frac{{\partial }^{2}w}{\partial {x}^{2}} = \frac{1}{{c}^{2}}\frac{{\partial }^{2}w}{\partial {t}^{2}}
\]
where \( w = w\left( {x, t}\right) \) denotes the vertical elongation and \( c \) is the speed of sound in the string. Assuming that the string is clamped at \( x = 0 \) and \( x = 1 \), the boundary conditions \( w\left( {0, t}\right) = w\left( {1, t}\right) = 0 \) must be satisfied for all times \( t \) . Obviously, the time-harmonic wave
\[
w\left( {x, t}\right) = v\left( x\right) {e}^{i\omega t}
\]
with frequency \( \omega \) solves the wave equation, provided that the space-dependent part \( v \) satisfies
\[
- {v}^{\prime \prime } = {\lambda v}\;\text{ on }\left\lbrack {0,1}\right\rbrack
\]
where \( \lambda \mathrel{\text{:=}} {\omega }^{2}/{c}^{2} \) . The boundary conditions \( w\left( {0, t}\right) = w\left( {1, t}\right) = 0 \) are satisfied if \( v \) satisfies the boundary conditions
\[
v\left( 0\right) = v\left( 1\right) = 0.
\]
Hence, introducing the linear space
\( U \mathrel{\text{:=}} \{ v \in C\left\lbrack {0,1}\right\rbrack : v \) is twice continuously differentiable, \( v\left( 0\right) = v\left( 1\right) = 0\} \)
and defining the differential operator \( D : U \rightarrow C\left\lbrack {0,1}\right\rbrack \) by \( D : v \mapsto - {v}^{\prime \prime } \) , we are led to the eigenvalue problem \( {Dv} = {\lambda v} \) . Elementary calculations show that the functions \( {v}_{m}\left( x\right) = \sin {m\pi x} \) are eigenfunctions of \( D \) with the eigenvalues \( {\lambda }_{m} = {m}^{2}{\pi }^{2} \) for \( m = 1,2,\ldots \) . It can be shown that these are the only eigenvalues and eigenfunctions of \( D \) .
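A numerical sketch of this eigenvalue problem (numpy assumed; not part of the text): discretizing \( Dv = -v'' \) with \( v(0)=v(1)=0 \) by central differences produces a tridiagonal matrix whose smallest eigenvalues approach \( \lambda_m = m^2\pi^2 \) as the mesh is refined.

```python
import numpy as np

# Discretize D v = -v'' on [0,1], v(0) = v(1) = 0, with N interior grid points.
N = 200
h = 1.0 / (N + 1)
D = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

eigs = np.sort(np.linalg.eigvalsh(D))
exact = np.array([(m * np.pi)**2 for m in (1, 2, 3)])
# The discrete eigenvalues are close to pi^2, 4 pi^2, 9 pi^2.
assert np.allclose(eigs[:3], exact, rtol=1e-3)
```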
|
To prove that the functions \( {v}_{m}\left( x\right) = \sin {m\pi x} \) are eigenfunctions of the differential operator \( D \) with eigenvalues \( {\lambda }_{m} = {m}^{2}{\pi }^{2} \), we start by substituting \( {v}_{m}\left( x\right) = \sin {m\pi x} \) into the eigenvalue problem equation:
\[ -{v}^{\prime \prime } = {\lambda v}. \]
First, we compute the second derivative of \( {v}_{m}(x) \):
\[ v_{m}(x) = \sin(m \pi x), \]
\[ v_{m}'(x) = m \pi \cos(m \pi x), \]
\[ v_{m}^{\prime \prime}(x) = - (m \pi)^2 \sin(m \pi x). \]
Thus,
\[ -v_{m}^{\prime \prime}(x) = (m \pi)^2 \sin(m \pi x) = (m \pi)^2 \, v_{m}(x), \]
so \( v_{m} \) satisfies \( -v_{m}^{\prime \prime } = {\lambda }_{m}{v}_{m} \) with \( {\lambda }_{m} = {m}^{2}{\pi }^{2} \), as claimed.
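As a quick numerical cross-check of this computation (not part of the text), a central-difference approximation of \( -v_m'' \) should match \( (m\pi)^2 v_m(x) \) at any interior point:

```python
import math

# Central-difference check that -v_m'' = (m*pi)^2 v_m for v_m(x) = sin(m*pi*x).
def neg_second_derivative(f, x, h=1e-4):
    return -(f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

for m in (1, 2, 3):
    v = lambda x, m=m: math.sin(m * math.pi * x)
    for x in (0.1, 0.3, 0.7):
        assert abs(neg_second_derivative(v, x) - (m * math.pi)**2 * v(x)) < 1e-3
```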
|
Exercise 7.4.11 Show that if \( q \) is prime, then
\[
\frac{\varphi \left( {q - 1}\right) }{q - 1}\mathop{\sum }\limits_{{d \mid q - 1}}\frac{\mu \left( d\right) }{\varphi \left( d\right) }\mathop{\sum }\limits_{{o\left( \chi \right) = d}}\chi \left( a\right) = \left\{ \begin{array}{ll} 1 & \text{ if }a\text{ has order }q - 1 \\ 0 & \text{ otherwise,} \end{array}\right.
\]
where the inner sum is over characters \( \chi {\;\operatorname{mod}\;q} \) whose order is \( d \) .
Exercise 7.4.12 Let \( q \) be prime and assume the generalized Riemann hypothesis. For \( q \) sufficiently large, show that there is always a prime \( p < q \) such that \( p \) is a primitive root \( \left( {\;\operatorname{mod}\;q}\right) \).
Exercise 7.4.13 Let \( q \) be a prime. Show that the smallest primitive root \( {\;\operatorname{mod}\;q} \) is \( O\left( {{2}^{\nu \left( {q - 1}\right) }{q}^{1/2}\log q}\right) \), where \( \nu \left( {q - 1}\right) \) is the number of distinct prime factors of \( q - 1 \) .
Exercise 7.4.14 Let \( q \) be a prime and assume the generalized Riemann hypothesis. Show that there is always a prime-power primitive root satisfying the bound \( O\left( {{4}^{\nu \left( {q - 1}\right) }{\log }^{4}q}\right) \) .
Exercise 7.4.15 Let \( q \) be prime and assume the generalized Riemann hypothesis. Show that the least quadratic nonresidue \( \left( {\;\operatorname{mod}\;q}\right) \) is \( O\left( {{\log }^{4}q}\right) \) .
Exercise 7.4.16 Let \( q \) be prime and assume the generalized Riemann hypothesis. Show that the least prime quadratic residue \( \left( {\;\operatorname{mod}\;q}\right) \) is \( O\left( {{\log }^{4}q}\right) \) .
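For small primes one can watch the least quadratic nonresidue directly; the brute-force sketch below (an illustration only, proving nothing about the conditional \( O(\log^4 q) \) bound) finds it via Euler's criterion \( n^{(q-1)/2} \equiv -1 \pmod q \):

```python
import math

# Least quadratic nonresidue mod an odd prime q, via Euler's criterion.
def least_nonresidue(q):
    for n in range(2, q):
        if pow(n, (q - 1) // 2, q) == q - 1:
            return n
    return None

for q in (7, 11, 101, 1009, 10007):
    n = least_nonresidue(q)
    # Sanity check: the returned n really is a nonresidue,
    # and it is tiny compared with log(q)^4.
    assert n is not None and pow(n, (q - 1) // 2, q) == q - 1
    print(q, n, round(math.log(q)**4, 1))
```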
Exercise 7.4.17 Prove that for \( n > 1 \) ,
\[
\mathop{\lim }\limits_{{T \rightarrow \infty }}\frac{1}{T}\mathop{\sum }\limits_{{\left| \gamma \right| \leq T}}{n}^{\rho } = - \frac{\Lambda \left( n\right) }{\pi }
\]
where the summation is over zeros \( \rho = \beta + {i\gamma },\beta \in \mathbb{R} \), of the Riemann zeta function.
## The Selberg Class
The Selberg class \( \mathcal{S} \) consists of functions \( F\left( s\right) \) of a complex variable \( s \) satisfying the following properties:
1. (Dirichlet series): For \( \operatorname{Re}\left( s\right) > 1 \) ,
\[
F\left( s\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{{a}_{n}}{{n}^{s}}
\]
where \( {a}_{1} = 1 \) . (We will write \( {a}_{n}\left( F\right) = {a}_{n} \) for the coefficients of the Dirichlet series of \( F \) .)
2. (Analytic continuation): For some integer \( m \geq 0 \), \( {\left( s - 1\right) }^{m}F\left( s\right) \) extends to an entire function of finite order.
3. (Functional equation): There are numbers \( Q > 0,{\alpha }_{i} > 0,{r}_{i} \in \) \( \mathbb{C} \) with \( \operatorname{Re}\left( {r}_{i}\right) \geq 0 \) such that
\[
\Phi \left( s\right) = {Q}^{s}\mathop{\prod }\limits_{{i = 1}}^{d}\Gamma \left( {{\alpha }_{i}s + {r}_{i}}\right) F\left( s\right)
\]
satisfies the functional equation
\[
\Phi \left( s\right) = w\bar{\Phi }\left( {1 - s}\right)
\]
where \( w \) is a complex number with \( \left| w\right| = 1 \) and \( \bar{\Phi }\left( s\right) = \overline{\Phi \left( \bar{s}\right) } \) .
4. (Euler product): For \( \operatorname{Re}\left( s\right) > 1 \) ,
\[
F\left( s\right) = \mathop{\prod }\limits_{p}{F}_{p}\left( s\right)
\]
where
\[
{F}_{p}\left( s\right) = \exp \left( {\mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{{b}_{{p}^{k}}}{{p}^{ks}}}\right)
\]
and \( {b}_{{p}^{k}} = O\left( {p}^{k\theta }\right) \) for some \( \theta < 1/2 \), and \( p \) denotes a prime number here. We shall write \( {b}_{p}\left( F\right) = {b}_{p} \) .
5. (Ramanujan hypothesis): For any fixed \( \epsilon > 0 \) ,
\[
{a}_{n} = O\left( {n}^{\epsilon }\right)
\]
where the implied constant may depend upon \( \epsilon \) .
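Axioms (1) and (4) can be seen at work numerically for the simplest member of \( \mathcal{S} \): the sketch below (pure Python; not part of the text) compares a truncated Dirichlet series for \( \zeta(2) \) with a truncated Euler product over primes.

```python
import math

# Truncated Dirichlet series vs truncated Euler product for zeta(s) at s = 2.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for k in range(p * p, n + 1, p):
                sieve[k] = False
    return [p for p, ok in enumerate(sieve) if ok]

s = 2.0
dirichlet = sum(n**-s for n in range(1, 100000))
euler = 1.0
for p in primes_up_to(1000):
    euler *= 1.0 / (1.0 - p**-s)
# Both truncations are close to zeta(2) = pi^2/6.
print(dirichlet, euler, math.pi**2 / 6)
```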
A prototypical example of an element of \( \mathcal{S} \) is, of course, the Riemann zeta function. But more exemplary is the Ramanujan zeta function
\[
{L}_{\Delta }\left( s\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{{\tau }_{n}}{{n}^{s}}
\]
where \( {\tau }_{n} = \tau \left( n\right) /{n}^{{11}/2} \) and \( \tau \) is defined by the infinite product
\[
\mathop{\sum }\limits_{{n = 1}}^{\infty }\tau \left( n\right) {q}^{n} = q\mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 - {q}^{n}\right) }^{24}.
\]
Ramanujan established properties (1), (2), and (3) and conjectured (4) and (5). Property (4) was proved by Mordell and (5) by Deligne.
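The coefficients \( \tau(n) \) are easy to generate from the defining product by truncated power-series multiplication. The helper name below is our own; the sketch also checks the multiplicativity \( \tau(6) = \tau(2)\tau(3) \), an instance of property (4) proved by Mordell.

```python
# First Ramanujan tau values from q * prod_{n>=1} (1 - q^n)^24, mod q^(N+1).
def tau_coeffs(N):
    poly = [0] * (N + 1)          # coefficients of prod (1 - q^n)^24 up to q^N
    poly[0] = 1
    for n in range(1, N + 1):
        for _ in range(24):
            # Multiply in place by (1 - q^n); descending k keeps old values.
            for k in range(N, n - 1, -1):
                poly[k] -= poly[k - n]
    # Multiplying by q shifts indices: tau(m) is the coefficient of q^(m-1).
    return {m: poly[m - 1] for m in range(1, N + 2)}

tau = tau_coeffs(10)
print([tau[m] for m in range(1, 6)])   # 1, -24, 252, -1472, 4830
assert tau[6] == tau[2] * tau[3]       # multiplicativity (Mordell)
```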
## 8.1 The Phragmén - Lindelöf Theorem
We discuss an important theorem that allows us to estimate the growth of a function in the region \( a \leq \operatorname{Re}\left( s\right) \leq b \) from its behaviour on \( \operatorname{Re}\left( s\right) = a \) and \( \operatorname{Re}\left( s\right) = b \) . We first recall the maximum modulus principle.
Exercise 8.1.1 Let \( f\left( z\right) \) be an analytic function, regular in a region \( R \) and on the boundary \( \partial R \), which we assume to be a simple closed contour. If \( \left| {f\left( z\right) }\right| \leq M \) on \( \partial R \), show that \( \left| {f\left( z\right) }\right| \leq M \) for all \( z \in R \) .
Exercise 8.1.2 (The maximum modulus principle) If \( f \) is as in the previous exercise, show that \( \left| {f\left( z\right) }\right| < M \) for all interior points \( z \in R \) , unless \( f \) is constant.
Theorem 8.1.3 (Phragmén - Lindelöf) Suppose that \( f\left( s\right) \) is entire in the region
\[
S\left( {a, b}\right) = \{ s \in \mathbb{C} : a \leq \operatorname{Re}\left( s\right) \leq b\}
\]
and that as \( \left| t\right| \rightarrow \infty \) ,
\[
\left| {f\left( s\right) }\right| = O\left( {e}^{{\left| t\right| }^{\alpha }}\right)
\]
for some \( \alpha \geq 1 \) . If \( f\left( s\right) \) is bounded on the two vertical lines \( \operatorname{Re}\left( s\right) = a \) and \( \operatorname{Re}\left( s\right) = b \), then \( f\left( s\right) \) is bounded in \( S\left( {a, b}\right) \) .
Proof. We first select an integer \( m > \alpha, m \equiv 2\left( {\;\operatorname{mod}\;4}\right) \) . Since arg \( s \rightarrow \) \( \pi /2 \) as \( t \rightarrow \infty \), we can choose \( {T}_{1} \) sufficiently large so that
\[
\left| {\arg s - \pi /2}\right| < \pi /{4m}\;\text{ for all }s \in S\left( {a, b}\right) \text{ with }\left| {\operatorname{Im}\left( s\right) }\right| \geq {T}_{1}.
\]
Then for \( \left| {\operatorname{Im}\left( s\right) }\right| \geq {T}_{1} \), we find that \( \arg s = \pi /2 - \delta = \theta \) (say) satisfies
\[
\cos {m\theta } = - \cos {m\delta } < - 1/\sqrt{2}.
\]
Therefore, if we consider
\[
{g}_{\epsilon }\left( s\right) = {e}^{\epsilon {s}^{m}}f\left( s\right)
\]
then
\[
\left| {{g}_{\epsilon }\left( s\right) }\right| \leq K{e}^{{\left| t\right| }^{\alpha }}{e}^{-\epsilon {\left| s\right| }^{m}/\sqrt{2}}.
\]
Thus, \( \left| {{g}_{\epsilon }\left( s\right) }\right| \rightarrow 0 \) as \( \left| t\right| \rightarrow \infty \) . Let \( B \) be the maximum of \( \left| {f\left( s\right) }\right| \) in the region
\[
a \leq \operatorname{Re}\left( s\right) \leq b,\;0 \leq \left| {\operatorname{Im}\left( s\right) }\right| \leq {T}_{1}
\]
Let \( {T}_{2} \) be chosen such that
\[
\left| {{g}_{\epsilon }\left( s\right) }\right| \leq B
\]
for \( \left| {\operatorname{Im}\left( s\right) }\right| \geq {T}_{2} \) . Thus,
\[
\left| {f\left( s\right) }\right| \leq B{e}^{-\epsilon {\left| s\right| }^{m}\cos \left( {m\arg s}\right) } \leq B{e}^{\epsilon {\left| s\right| }^{m}}
\]
for \( \left| {\operatorname{Im}\left( s\right) }\right| \geq {T}_{2} \) . Applying the maximum modulus principle to the region
\[
a \leq \operatorname{Re}\left( s\right) \leq b,\;0 \leq \left| {\operatorname{Im}\left( s\right) }\right| \leq {T}_{2},
\]
we find that \( \left| {f\left( s\right) }\right| \leq B{e}^{\epsilon {\left| s\right| }^{m}} \) . This estimate holds for all \( s \) in \( S\left( {a, b}\right) \) . Letting \( \epsilon \rightarrow 0 \) yields the result.
Corollary 8.1.4 Suppose that \( f\left( s\right) \) is entire in \( S\left( {a, b}\right) \) and that \( \left| {f\left( s\right) }\right| \) \( = O\left( {e}^{{\left| t\right| }^{\alpha }}\right) \) for some \( \alpha \geq 1 \) as \( \left| t\right| \rightarrow \infty \) . If \( f\left( s\right) \) is \( O\left( {\left| t\right| }^{A}\right) \) on the two vertical lines \( \operatorname{Re}\left( s\right) = a \) and \( \operatorname{Re}\left( s\right) = b \), then \( f\left( s\right) = O\left( {\left| t\right| }^{A}\right) \) in \( S\left( {a, b}\right) \) .
Proof. We apply the theorem to the function \( g\left( s\right) = f\left( s\right) /{\left( s - u\right) }^{A} \), where \( u > b \) . Then \( g \) is bounded on the two vertical lines, and the result follows.
Exercise 8.1.5 Show that for any entire function \( F \in \mathcal{S} \), we have
\[
F\left( s\right) = O\left( {\left| t\right| }^{A}\right)
\]
for some \( A > 0 \), in the region \( 0 \leq \operatorname{Re}\left( s\right) \leq 1 \) .
## 8.2 Basic Properties
We begin by stating the following theorem of Selberg:
Theorem 8.2.1 (Selberg) For any \( F \in \mathcal{S} \), let \( {N}_{F}\left( T\right) \) be the number of zeros \( \rho \) of \( F\left( s\right) \) satisfying \( 0 \leq \operatorname{Im}\left( \rho \right) \leq T \), counted with multiplicity. Then
\[
{N}_{F}\left( T\right) \sim \left( {2\mathop{\sum }\limits_{{i = 1}}^{d}{\alpha }_{i}}\right) \frac{T\log T}{2\pi }
\]
as \( T \rightarrow \infty \) .
Proof. This is easily derived by the method used to count zeros of \( \zeta \left( s\right) \) and \( L\left( {s,\chi }\right) \) as in Theorem 7.1.7 and Exercise 7.4.4.
Clearly, the functional equation for \( F \in \mathcal{S} \) is not unique, by virtue of Legendre's duplication formula. However, the above theorem shows that the sum of the \( {\alpha }_{i} \) ’s is well-defined. Accordingly, we define the degree of \( F \) by
\[
\deg F \mathrel{\text{:=}} 2\mathop{\sum }\limits_{{i = 1}}^{d}{\alpha }_{i}
\]
Lemma 8.2.2 (Conrey and Ghosh) If \( F \in \mathcal{S} \) and \( \deg F = 0 \), then \( F = 1 \) .
Proof.
|
Corollary 3. Reduction \( {\;\operatorname{mod}\;.}\mathfrak{m} \) defines an isomorphism from \( {\mathrm{P}}_{\mathrm{A}}\left( \mathrm{G}\right) \) onto \( {\mathrm{P}}_{k}\left( \mathrm{G}\right) \) ; this isomorphism maps \( {\mathrm{P}}_{\mathrm{A}}^{ + }\left( \mathrm{G}\right) \) onto \( {\mathrm{P}}_{k}^{ + }\left( \mathrm{G}\right) \) .
As a result we may identify \( {\mathrm{P}}_{\mathrm{A}}\left( \mathrm{G}\right) \) and \( {\mathrm{P}}_{k}\left( \mathrm{G}\right) \) .
For a general exposition of projective envelopes in "proartinian" categories, see Demazure-Gabriel [23].
## Exercises
14.2. Let \( \Lambda \) be a commutative ring, and let \( \mathrm{P} \) be a \( \Lambda \left\lbrack \mathrm{G}\right\rbrack \) -module which is projective over \( \Lambda \) . Prove the equivalence of the following properties:
(i) \( \mathrm{P} \) is a projective \( \Lambda \left\lbrack \mathrm{G}\right\rbrack \) -module.
(ii) For each maximal ideal \( \mathfrak{p} \) of \( \Lambda \), the \( \left( {\Lambda /\mathfrak{p}}\right) \left\lbrack \mathrm{G}\right\rbrack \) -module \( \mathrm{P}/\mathfrak{p}\mathrm{P} \) is projective.
14.3. (a) Let B be an A-algebra which is free of finite rank over A, and let \( \bar{u} \) be an idempotent of \( \overline{\mathrm{B}} = \mathrm{B}/\mathfrak{m}\mathrm{B} \) . Show the existence of an idempotent of \( \mathrm{B} \) whose reduction \( {\;\operatorname{mod}\;.}\mathfrak{m}\mathrm{B} \) is equal to \( \bar{u} \) .
(b) Let \( \mathrm{P} \) be a projective \( \mathrm{A}\left\lbrack \mathrm{G}\right\rbrack \) -module, and let \( \mathrm{B} = {\operatorname{End}}^{\mathrm{G}}\left( \mathrm{P}\right) \) . Show that \( \mathrm{B} \) is A-free, and that \( \overline{\mathrm{B}} \) can be identified with the algebra of G-endomorphisms of \( \overline{\mathbf{P}} = \mathbf{P}/m\mathbf{P} \) . Deduce from this, and (a), that each decomposition of \( \overline{\mathbf{P}} \) into a direct sum of \( k\left\lbrack \mathrm{G}\right\rbrack \) -modules lifts to a corresponding decomposition of \( \mathbf{P} \) .
(c) Use (b) to give another proof of existence in Prop. 42(b). [Write F as a direct factor of a free module \( \overline{\mathrm{P}} \), lift \( \overline{\mathrm{P}} \) to a free module, and apply (b).]
## 14.5 Dualities
Duality between \( {\mathrm{R}}_{\mathrm{K}}\left( \mathrm{G}\right) \) and \( {\mathrm{R}}_{\mathrm{K}}\left( \mathrm{G}\right) \)
Let \( \mathrm{E} \) and \( \mathrm{F} \) be \( \mathrm{K}\left\lbrack \mathrm{G}\right\rbrack \) -modules, and put
\[
\langle \mathrm{E},\mathrm{F}\rangle = \dim {\operatorname{Hom}}^{\mathrm{G}}\left( {\mathrm{E},\mathrm{F}}\right) ,\;\text{ cf. }{7.1}.
\]
The map \( \left( {\mathrm{E},\mathrm{F}}\right) \mapsto \langle \mathrm{E},\mathrm{F}\rangle \) is "bilinear" (with respect to exact sequences), and so defines a bilinear form
\[
{\mathrm{R}}_{\mathrm{K}}\left( \mathrm{G}\right) \times {\mathrm{R}}_{\mathrm{K}}\left( \mathrm{G}\right) \rightarrow \mathbf{Z},
\]
which we denote by \( \langle e, f\rangle \) or \( \langle e, f{\rangle }_{\mathrm{K}} \) . The classes [E] of simple modules \( \mathrm{E} \in {\mathrm{S}}_{\mathrm{K}} \) are mutually orthogonal, and \( \langle \mathrm{E},\mathrm{E}\rangle \) is equal to the dimension \( {d}_{\mathrm{E}} \) of the field \( {\operatorname{End}}^{\mathrm{G}}\left( \mathrm{E}\right) \) of endomorphisms of \( \mathrm{E} \) ; hence \( {d}_{\mathrm{E}} \geq 1 \), and equality holds if and only if \( \mathrm{E} \) is absolutely simple (i.e., if the corresponding representation is absolutely irreducible), cf. 12.1.
When \( \mathrm{K} \) is sufficiently large, it follows from th. 24 that every simple \( \mathrm{K}\left\lbrack \mathrm{G}\right\rbrack \) -module is absolutely simple. Consequently the above bilinear form is nondegenerate over \( \mathbf{Z} \), in the sense that it defines an isomorphism of \( {\mathrm{R}}_{\mathrm{K}}\left( \mathrm{G}\right) \) onto its dual.
Duality between \( {\mathrm{R}}_{k}\left( \mathrm{G}\right) \) and \( {\mathrm{P}}_{k}\left( \mathrm{G}\right) \)
If \( \mathrm{E} \) is a projective \( k\left\lbrack \mathrm{G}\right\rbrack \) -module and \( \mathrm{F} \) an arbitrary \( k\left\lbrack \mathrm{G}\right\rbrack \) -module, put
\[
\langle \mathrm{E},\mathrm{F}\rangle = \dim {\operatorname{Hom}}^{\mathrm{G}}\left( {\mathrm{E},\mathrm{F}}\right) .
\]
We thus obtain a bilinear function of \( \mathrm{E} \) and \( \mathrm{F} \) (thanks to the assumption that \( \mathrm{E} \) is projective), hence a bilinear form
\[
{\mathrm{P}}_{k}\left( \mathrm{G}\right) \times {\mathrm{R}}_{k}\left( \mathrm{G}\right) \rightarrow \mathbf{Z}
\]
denoted \( \langle e, f\rangle \) or \( \langle e, f{\rangle }_{k} \) . If \( \mathrm{E},{\mathrm{E}}^{\prime } \in {\mathrm{S}}_{k} \), we have
\[
{\operatorname{Hom}}^{\mathrm{G}}\left( {{\mathrm{P}}_{\mathrm{E}},{\mathrm{E}}^{\prime }}\right) = {\operatorname{Hom}}^{\mathrm{G}}\left( {\mathrm{E},{\mathrm{E}}^{\prime }}\right)
\]
where \( {P}_{E} \) denotes the projective envelope of \( E \) . If \( E \neq {E}^{\prime } \) we see that \( \left\lbrack {P}_{E}\right\rbrack \) and \( \left\lbrack {\mathrm{E}}^{\prime }\right\rbrack \) are orthogonal; for \( \mathrm{E} = {\mathrm{E}}^{\prime } \) we have
\[
\left\langle {{\mathrm{P}}_{\mathrm{E}},\mathrm{E}}\right\rangle = \dim {\operatorname{End}}^{\mathrm{G}}\left( \mathrm{E}\right) .
\]
As before, \( {d}_{\mathrm{E}} = 1 \) if and only if \( \mathrm{E} \) is absolutely simple.
Suppose that \( \mathrm{K} \) is sufficiently large, so that \( k \) contains the \( m \) th roots of unity. We then have \( {d}_{\mathrm{E}} = 1 \) for each \( \mathrm{E} \in {\mathrm{S}}_{k} \) (see below). Consequently the bilinear form \( \langle \cdot , \cdot {\rangle }_{k} \) is nondegenerate over \( \mathbf{Z} \), and the bases \( \left\lbrack \mathrm{E}\right\rbrack \) and \( \left\lbrack {\mathrm{P}}_{\mathrm{E}}\right\rbrack \) \( \left( {\mathrm{E} \in {\mathrm{S}}_{k}}\right) \) are dual to each other with respect to this form.
## Remark
The fact that \( {d}_{\mathrm{E}} = 1 \) if \( \mathrm{K} \) is sufficiently large can be proved in various ways:
(1) We can obtain this from th. 24 by "reduction mod. m" once we know that the homomorphism \( d : {\mathrm{R}}_{\mathrm{K}}\left( \mathrm{G}\right) \rightarrow {\mathrm{R}}_{k}\left( \mathrm{G}\right) \) is surjective (cf. Ch. 16, th. 33).
## Chapter 14: The groups \( {R}_{K}\left( G\right) ,{R}_{k}\left( G\right) \), and \( {P}_{k}\left( G\right) \)
(2) We could also use the fact that Schur indices over \( k \) are equal to 1 (cf. 14.6). This reduces the proof to showing that characters of representations of \( \mathrm{G} \) (over an extension of \( k \) ) always have values in \( k \), and this follows from the fact that they are sums of \( m \) th roots of unity.
## Exercises
14.4. If \( \mathrm{E} \) is a \( k\left\lbrack \mathrm{G}\right\rbrack \) -module, we let \( {\mathrm{E}}^{\prime } \) denote its dual. We define \( {\mathrm{H}}^{0}\left( {\mathrm{G},\mathrm{E}}\right) \) as the subspace of \( \mathrm{E} \) consisting of the elements fixed by \( \mathrm{G} \), and \( {\mathrm{H}}_{0}\left( {\mathrm{G},\mathrm{E}}\right) \) as the quotient of \( \mathrm{E} \) by the subspace generated by the \( {sx} - x \), with \( x \in \mathrm{E} \) and \( s \in \mathbf{G} \) .
(a) Show that, if \( \mathrm{E} \) is projective, the map \( x \mapsto \mathop{\sum }\limits_{{s \in \mathrm{G}}}{sx} \) defines, by passing to quotients, an isomorphism of \( {\mathrm{H}}_{0}\left( {\mathrm{G},\mathrm{E}}\right) \) onto \( {\mathrm{H}}^{0}\left( {\mathrm{G},\mathrm{E}}\right) \) .
(b) Show that \( {\mathrm{H}}^{0}\left( {\mathrm{G},\mathrm{E}}\right) \) is the dual of \( {\mathrm{H}}_{0}\left( {\mathrm{G},{\mathrm{E}}^{\prime }}\right) \) . Conclude that \( {\mathrm{H}}^{0}\left( {\mathrm{G},\mathrm{E}}\right) \) and \( {\mathrm{H}}^{0}\left( {\mathrm{G},{\mathrm{E}}^{\prime }}\right) \) have the same dimension if \( \mathrm{E} \) is projective.
14.5. Let \( \mathrm{E} \) and \( \mathrm{F} \) be two \( k\left\lbrack \mathrm{G}\right\rbrack \) -modules, with \( \mathrm{E} \) projective. Show that
\[
\dim {\operatorname{Hom}}^{\mathrm{G}}\left( {\mathrm{E},\mathrm{F}}\right) = \dim {\operatorname{Hom}}^{\mathrm{G}}\left( {\mathrm{F},\mathrm{E}}\right) .
\]
[Apply part (b) of exercise 14.4 to the projective \( k\left\lbrack \mathrm{G}\right\rbrack \) -module \( \operatorname{Hom}\left( {\mathrm{E},\mathrm{F}}\right) \) , and observe that its dual is isomorphic to \( \operatorname{Hom}\left( {\mathrm{F},\mathrm{E}}\right) \) .]
14.6. Let \( \mathrm{S} \) be a simple \( k\left\lbrack \mathrm{G}\right\rbrack \) -module and let \( {\mathrm{P}}_{\mathrm{S}} \) be its projective envelope. Show that \( {P}_{S} \) contains a submodule isomorphic to \( S \) . [Apply exercise 14.5 with \( \left. {\mathrm{E} = {\mathrm{P}}_{\mathrm{S}},\mathrm{F} = \mathrm{S}\text{.}}\right\rbrack \) Conclude that \( {\mathrm{P}}_{\mathrm{S}} \) is isomorphic to the injective envelope of \( \mathrm{S} \), cf. exercise 14.1. In particular, if \( \mathrm{S} \) is not projective, then \( \mathrm{S} \) appears at least twice in a composition series of \( {\mathrm{P}}_{\mathrm{S}} \) .
14.7. Let \( \mathrm{E} \) be a semisimple \( k\left\lbrack \mathrm{G}\right\rbrack \) -module, and let \( {\mathrm{P}}_{\mathrm{E}} \) be its projective envelope. Show that the projective envelope of the dual of \( \mathrm{E} \) is isomorphic to the dual of \( {\mathrm{P}}_{\mathrm{E}} \) [reduce to the case of a simple module and use exercise 14.6].
## 14.6 Scalar extensions
If \( {\mathrm{K}}^{\prime } \) is an extension of \( \mathrm{K} \), each \( \mathrm{K}\left\lbrack \mathrm{G}\right\rbrack \) -module \( \mathrm{E} \) defines by scalar extension a \( {\mathrm{K}}^{\prime }\left\lbrack \mathrm{G}\right\rbrack \) -mo
|
Corollary 9.25. For all \( s \in \mathbb{C}, s \notin \mathbb{Z} \) ,
\[
\Gamma \left( s\right) \Gamma \left( {1 - s}\right) = \frac{\pi }{\sin \left( {\pi s}\right) }.
\]
Proof. By Theorem 9.20,
\[
\Gamma \left( s\right) \Gamma \left( {-s}\right) = - \frac{1}{{s}^{2}}\mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 + \frac{s}{n}\right) }^{-1}{e}^{s/n}\mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 - \frac{s}{n}\right) }^{-1}{e}^{-s/n}
\]
\[
= - \frac{1}{{s}^{2}}\mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 - \frac{{s}^{2}}{{n}^{2}}\right) }^{-1} = - \frac{\pi }{s\sin \left( {\pi s}\right) }
\]
using the classical formula
\[
\sin \left( {\pi s}\right) = {\pi s}\mathop{\prod }\limits_{{n = 1}}^{\infty }\left( {1 - \frac{{s}^{2}}{{n}^{2}}}\right)
\]
(9.25)
The corollary follows because \( - {s\Gamma }\left( {-s}\right) = \Gamma \left( {1 - s}\right) \) .
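The reflection formula is easy to corroborate numerically (a quick sketch, not part of the text) at a few non-integer real points:

```python
import math

# Numerical check of Gamma(s) Gamma(1-s) = pi / sin(pi s) for s not an integer.
for s in (0.25, 0.5, 0.9, 1.7, -0.3):
    lhs = math.gamma(s) * math.gamma(1 - s)
    rhs = math.pi / math.sin(math.pi * s)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```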
Equation (9.25) is another example of an analog of the Fundamental Theorem of Arithmetic in a function-theory context. We know that \( \sin \left( {\pi s}\right) \) vanishes at each integer, so we might hope to factorize it in the form
\[
{cs}\mathop{\prod }\limits_{{n = 1}}^{\infty }\left( {{n}^{2} - {s}^{2}}\right)
\]
Of course, this does not converge, and attempting to get the terms to converge to 1 fast enough to guarantee convergence of the infinite product plausibly leads one to conjecture Equation (9.25).
Exercise 9.6. Prove the identity (9.25).
Exercise 9.7. Justify the steps in the following argument. The Taylor expansion of the sine function gives
\[
\sin \left( {\pi s}\right) = {\pi s} - \frac{{\left( \pi s\right) }^{3}}{6} + \cdots .
\]
(9.26)
By Equation (9.25), this is equal to
\[
{\pi s}\left( {1 - {s}^{2}\left( {\frac{1}{1} + \frac{1}{4} + \frac{1}{9} + \cdots }\right) + \cdots }\right)
\]
\[
= {\pi s} - \pi {s}^{3}\mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{1}{{n}^{2}} + \cdots
\]
Comparing the coefficient of \( {s}^{3} \) with that of Equation (9.26) gives
\[
\mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{1}{{n}^{2}} = \frac{{\pi }^{2}}{6}
\]
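The Basel value can be corroborated numerically (a sketch, not part of the text); since the tail satisfies \( \sum_{n>N} 1/n^2 \approx 1/N \), adding \( 1/N \) to the partial sum gives high accuracy cheaply:

```python
import math

# Partial sum of 1/n^2 plus the tail estimate 1/N, compared with pi^2/6.
N = 100000
partial = sum(1.0 / n**2 for n in range(1, N + 1))
assert abs(partial + 1.0 / N - math.pi**2 / 6) < 1e-8
```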
Exercise 9.8. Prove that \( \zeta \left( {2k}\right) \) is a rational multiple of \( {\pi }^{2k} \) for any \( k \geq 1 \) .
Much less is known about the values \( \zeta \left( 3\right) ,\zeta \left( 5\right) ,\ldots \) Apéry proved in 1978 that \( \zeta \left( 3\right) \notin \mathbb{Q} \), and there are some very deep results on the irrationality and linear independence of various values of \( \zeta \) at odd integers.
Exercise 9.9. This exercise is a more explicit version of the previous one. (a) Replace \( s \) by iz in Equation (9.25) to deduce that
\[
\sinh \left( {\pi z}\right) = {\pi z}\mathop{\prod }\limits_{{n = 1}}^{\infty }\left( {1 + \frac{{z}^{2}}{{n}^{2}}}\right)
\]
(9.27)
(b) Use logarithmic differentiation to prove
\[
\frac{\pi z}{{e}^{\pi z} - 1} + \frac{\pi z}{2} = 1 + \mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{{\left( -1\right) }^{k + 1}}{{2}^{{2k} - 1}}\zeta \left( {2k}\right) {z}^{2k}.
\]
(9.28)
(c) Deduce that
\[
\zeta \left( {2k}\right) = {\left( -1\right) }^{k}{\pi }^{2k}\frac{{2}^{{2k} - 1}}{\left( {{2k} - 1}\right) !}\left( {-\frac{{B}_{2k}}{2k}}\right)
\]
(9.29)
where \( {B}_{n} \) denotes the \( n \) th Bernoulli number defined by
\[
\frac{z}{{e}^{z} - 1} = \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{{B}_{n}{z}^{n}}{n!}
\]
(9.30)
Exercise 9.10. (a) Use Theorem 9.5 and Equation (9.29) to prove that \( \zeta \) takes rational values at negative odd integers.
(b)Use Equation (9.30) to show that \( {B}_{n} = 0 \) for odd integers \( n > 1 \) . (c)Deduce that
\[
\zeta \left( {-n}\right) = - \frac{{B}_{n + 1}}{n + 1}
\]
(9.31)
for all \( n > 0 \) .
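Equations (9.29)–(9.31) can be checked concretely. The sketch below (our own helper names, not from the text) computes Bernoulli numbers from the recurrence obtained by multiplying Equation (9.30) through by \( (e^z-1)/z \), namely \( \sum_{j=0}^{m} \binom{m+1}{j} B_j = 0 \) for \( m \geq 1 \):

```python
from fractions import Fraction
from math import comb, factorial, pi

# Bernoulli numbers B_n via the recurrence implied by Equation (9.30).
def bernoulli(n):
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

B = bernoulli(8)
assert B[1] == Fraction(-1, 2) and B[2] == Fraction(1, 6) and B[3] == 0

# Equation (9.29): zeta(2k) = (-1)^k pi^(2k) 2^(2k-1)/(2k-1)! * (-B_{2k}/(2k)).
def zeta_even(k):
    return ((-1)**k * pi**(2 * k) * 2**(2 * k - 1) / factorial(2 * k - 1)
            * float(-B[2 * k] / (2 * k)))

assert abs(zeta_even(1) - pi**2 / 6) < 1e-12
assert abs(zeta_even(2) - pi**4 / 90) < 1e-12
# Equation (9.31): zeta(-1) = -B_2/2 = -1/12.
assert -B[2] / 2 == Fraction(-1, 12)
```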
The neatness of Equation (9.31) suggests there might be a more elegant way to prove it. Hurwitz found a beautiful proof using complex analysis.
Exercise 9.11. Use the functional equation together with Equations (9.24) and (8.24) to prove that
\[
\frac{{\zeta }^{\prime }\left( 0\right) }{\zeta \left( 0\right) } = \log \left( {2\pi }\right)
\]
(9.32)
Prove that \( \zeta \left( 0\right) = - \frac{1}{2} \) and deduce the value of \( {\zeta }^{\prime }\left( 0\right) \) .
Exercise 9.12. *Prove that \( \mathop{\sum }\limits_{{n = - \infty }}^{\infty }\frac{1}{{\left( 4n + 1\right) }^{k}} \) is a rational multiple of \( {\pi }^{k} \) for any \( k \geq 2 \) .
There are many deep results on the location and distribution of the zeros of the Riemann zeta function, all far beyond our scope.
Theorem 9.26. Define \( N\left( T\right) \) to be the number of zeros of the Riemann zeta function in the critical strip up to height \( T \) ,
\[
N\left( T\right) = \left| {\{ s \in \mathbb{C} : 0 \leq \Re \left( s\right) \leq 1,\zeta \left( s\right) = 0,0 < \Im \left( s\right) < T\} }\right| .
\]
Then there is an asymptotic formula,
\[
N\left( T\right) = \frac{T}{2\pi }\log \left( \frac{T}{2\pi }\right) - \frac{T}{2\pi } + \mathrm{O}\left( {\log T}\right)
\]
The proof makes use of Stirling's Formula extended to the complex plane,
\[
\log \Gamma \left( s\right) = - s + \left( {s - \frac{1}{2}}\right) \log s + \mathrm{O}\left( 1\right)
\]
provided \( \left| {\operatorname{Arg}\left( s\right) }\right| < \pi - \delta \) .
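To get a feel for the asymptotic formula, the main term can be evaluated at a modest height (a sketch, not part of the text); at \( T = 100 \) it gives roughly 28.1, close to the tabulated zero count \( N(100) = 29 \).

```python
import math

# Main term of the Riemann-von Mangoldt formula at T = 100.
T = 100.0
main = (T / (2 * math.pi)) * math.log(T / (2 * math.pi)) - T / (2 * math.pi)
print(round(main, 2))
```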
Exercise 9.13. Define a function \( \nu \) by \( \nu \left( 1\right) = 0 \), and \( \nu \left( n\right) \) is the number of distinct prime divisors of \( n \) for \( n > 1 \) .
(a) Prove that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{\nu \left( n\right) }{{n}^{s}} = \zeta \left( s\right) \mathop{\sum }\limits_{{p \in \mathbb{P}}}\frac{1}{{p}^{s}} \) .
(b) Prove that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{{2}^{\nu \left( n\right) }}{{n}^{s}} = \frac{{\zeta }^{2}\left( s\right) }{\zeta \left( {2s}\right) } \) .
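Part (b) can be tested numerically at \( s = 2 \), where \( \zeta(2)^2/\zeta(4) = (\pi^2/6)^2/(\pi^4/90) = 5/2 \). The sketch below (not part of the text) sieves \( \nu(n) \) and sums the series:

```python
# Partial sum of 2^nu(n)/n^2 vs zeta(2)^2/zeta(4) = 5/2.
N = 200000
nu = [0] * (N + 1)
for p in range(2, N + 1):
    if nu[p] == 0:                 # p is prime: no smaller prime divides it
        for k in range(p, N + 1, p):
            nu[k] += 1
total = sum(2.0**nu[n] / n**2 for n in range(1, N + 1))
assert abs(total - 2.5) < 1e-3
```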
At the start of this chapter, the idea of "factorizing" functions in the way that polynomials are factorized was discussed. Quite apart from the convergence issues that pervade this topic, infinite products may behave in quite surprising ways, as shown by the next exercise.
Exercise 9.14. Using Exercise 8.11, show that, for any \( x \) with \( \left| x\right| < 1 \) ,
\[
{e}^{x} = \mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 - {x}^{n}\right) }^{-\mu \left( n\right) /n}.
\]
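This surprising identity can be checked numerically (a sketch, not part of the text); since \( |x| < 1 \), the factors tend to 1 so rapidly that a short truncation already agrees with \( e^x \) to machine precision:

```python
import math

# Check e^x = prod (1 - x^n)^(-mu(n)/n), with mu computed by a linear sieve.
def moebius_up_to(N):
    mu = [1] * (N + 1)
    is_comp = [False] * (N + 1)
    primes = []
    for i in range(2, N + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p
                break
            mu[i * p] = -mu[i]
    return mu

x = 0.3
mu = moebius_up_to(60)
log_prod = sum(-(mu[n] / n) * math.log(1 - x**n) for n in range(1, 61))
assert abs(math.exp(log_prod) - math.exp(x)) < 1e-12
```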
The functional equations we have considered in this chapter are analytic properties of known classical functions. The next exercise is (relatively) light relief and is a functional equation in another sense: The unknown solution sought is a function.
Exercise 9.15. *Find the solutions to the functional equation
\[
f\left( {{xz} - y}\right) f\left( x\right) f\left( y\right) + {3f}\left( 0\right) = 1 + {2f}\left( 0\right) f\left( 0\right) + f\left( x\right) f\left( y\right) \text{ for all }x, y, z \in \mathbb{R}.
\]
Does the solution change if the identity is only required to hold for all \( x, y, z \) in \( \mathbb{Z} \) ?
## 9.6.1 Factorizing the Riemann Zeta Function
Several times in this chapter, we have seen a function factorize in a meaningful way into an infinite product of "irreducible" terms corresponding to zeros, a function-theoretic version of the Fundamental Theorem of Arithmetic. The Riemann Hypothesis itself can be understood in these terms, except that the location of the zeros is not known.
Theorem 9.27. [HADAMARD] Let \( \Xi \) denote the set of zeros of the Riemann zeta function in the critical strip \( \{ z \mid 0 < \Re \left( z\right) < 1\} \) . Then
\[
\zeta \left( s\right) = \frac{{e}^{bs}}{2\left( {s - 1}\right) \Gamma \left( {\frac{s}{2} + 1}\right) }\mathop{\prod }\limits_{{\xi \in \Xi }}\left( {1 - \frac{s}{\xi }}\right) {e}^{s/\xi },
\]
where \( b = \log \left( {2\pi }\right) - 1 - \frac{\gamma }{2} \) .
In this theorem, the zeros of the zeta function outside the critical strip are accounted for by the poles of \( \Gamma \left( {\frac{s}{2} + 1}\right) \) .
Exercise 9.16. Assuming the statement of Theorem 9.27 for some constant \( b \) , show that it must have the stated value by using Exercise 9.11.
Notes to Chapter 9: For a very interesting discussion of both the mathematics and the history of the type of analysis used in this chapter, and in particular to gain some insight into how Euler came close to the functional equation, see Hardy's monograph [74]. An elegant guide to classical Fourier analysis may be found in Katznelson’s book [87]. Apéry’s proof that \( \zeta \left( 3\right) \) is irrational appeared in his paper [3]; an accessible account is provided by van der Poorten [118]. More recent results on values of the zeta function at odd integers appear in works by Ball and Rivoal [9] or Rivoal [130] and references therein. The disproof of Mertens's conjecture mentioned on p. 186 appears in the paper of Odlyzko and te Riele [114]. A comprehensive guide to many of the analytic arguments here, including Exercises 9.4 and 9.6, is the classic text of Whittaker and Watson [160]. Artin's book [6] is an exceptionally clear account of the main properties of the Gamma function. Deeper properties of the zeta function, emphasizing the role of Poisson summation, may be found in Patterson's book [115]. Several different approaches to the functional equation for the Riemann zeta function appear in the book of Titchmarsh [153]. For a recent overview of the Riemann Hypothesis written by a worker in the field, consult the survey of Conrey [33]. Exercise 9.12 is classical; a proof requiring little background appears in a paper of Beukers, Kolk and Calabi [13] and is discussed in a paper of Elkies [50]. Exercise 9.14 is taken from a paper of Brent [19]. Exercise 9.15 is taken from a paper of Šunik [148].
## Primes in an Arithmetic Progression
We begin with two elementary results and then give more sophisticated proofs of them, suggesting a general method. The algebraic part of this method concerns characters of Abelian groups; the analytic part is a nonvanishing statement about \( L \) -functions. The culmination is Dirichlet's Theorem.
Corollary 9.25. For all \( s \in \mathbb{C}, s \notin \mathbb{Z} \) ,
\[
\Gamma \left( s\right) \Gamma \left( {1 - s}\right) = \frac{\pi }{\sin \left( {\pi s}\right) }.
\]
Proof. By Theorem 9.20,
\[
\Gamma \left( s\right) \Gamma \left( {-s}\right) = - \frac{1}{{s}^{2}}\mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 + \frac{s}{n}\right) }^{-1}{e}^{s/n}\mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 - \frac{s}{n}\right) }^{-1}{e}^{-s/n}
\]
\[
= - \frac{1}{{s}^{2}}\mathop{\prod }\limits_{{n = 1}}^{\infty }{\left( 1 - \frac{{s}^{2}}{{n}^{2}}\right) }^{-1} = - \frac{\pi }{s\sin \left( {\pi s}\right) }
\]
using the classical formula
\[
\sin \left( {\pi s}\right) = {\pi s}\mathop{\prod }\limits_{{n = 1}}^{\infty }\left( {1 - \frac{{s}^{2}}{{n}^{2}}}\right)
\]
The corollary follows because \( - {s\Gamma }\left( {-s}\right) = \Gamma \left( {1 - s}\right) \) .
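The reflection formula is also easy to verify numerically at real non-integer points. A minimal sketch (an illustration, not from the text), using only Python's standard library:

```python
import math

# Check Gamma(s) * Gamma(1 - s) = pi / sin(pi * s) at real non-integer points,
# including a negative one, where both sides are negative.
for s in (0.3, 0.5, 1.7, -2.4):
    lhs = math.gamma(s) * math.gamma(1 - s)
    rhs = math.pi / math.sin(math.pi * s)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```

At \( s = \tfrac{1}{2} \) this recovers the familiar value \( \Gamma(\tfrac{1}{2})^2 = \pi \).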