Spaces of test functions and distributions
In mathematical analysis, the spaces of test functions and distributions are topological vector spaces (TVSs) that are used in the definition and application of distributions. Test functions are usually infinitely differentiable complex-valued (or sometimes real-valued) functions on a non-empty open subset $U\subseteq \mathbb {R} ^{n}$ that have compact support. The space of all test functions, denoted by $C_{c}^{\infty }(U),$ is endowed with a certain topology, called the canonical LF-topology, that makes $C_{c}^{\infty }(U)$ into a complete Hausdorff locally convex TVS. The strong dual space of $C_{c}^{\infty }(U)$ is called the space of distributions on $U$ and is denoted by ${\mathcal {D}}^{\prime }(U):=\left(C_{c}^{\infty }(U)\right)_{b}^{\prime },$ where the "$b$" subscript indicates that the continuous dual space of $C_{c}^{\infty }(U),$ denoted by $\left(C_{c}^{\infty }(U)\right)^{\prime },$ is endowed with the strong dual topology.
This article is about the topological vector spaces used to define and use Schwartz distributions. For more basic information about distributions and operations on them, see Distribution (mathematics).
There are other possible choices for the space of test functions, which lead to other different spaces of distributions. If $U=\mathbb {R} ^{n}$ then the use of Schwartz functions[note 1] as test functions gives rise to a certain subspace of ${\mathcal {D}}^{\prime }(U)$ whose elements are called tempered distributions. These are important because they allow the Fourier transform to be extended from "standard functions" to tempered distributions. The set of tempered distributions forms a vector subspace of the space of distributions ${\mathcal {D}}^{\prime }(U)$ and is thus one example of a space of distributions; there are many other spaces of distributions.
There also exist other major classes of test functions that are not subsets of $C_{c}^{\infty }(U),$ such as spaces of analytic test functions, which produce very different classes of distributions. The theory of such distributions has a different character from the previous one because there are no analytic functions with non-empty compact support.[note 2] Use of analytic test functions leads to Sato's theory of hyperfunctions.
Notation
The following notation will be used throughout this article:
• $n$ is a fixed positive integer and $U$ is a fixed non-empty open subset of Euclidean space $\mathbb {R} ^{n}.$
• $\mathbb {N} =\{0,1,2,\ldots \}$ denotes the natural numbers.
• $k$ will denote a non-negative integer or $\infty .$
• If $f$ is a function then $\operatorname {Dom} (f)$ will denote its domain and the support of $f,$ denoted by $\operatorname {supp} (f),$ is defined to be the closure of the set $\{x\in \operatorname {Dom} (f):f(x)\neq 0\}$ in $\operatorname {Dom} (f).$
• For two functions $f,g:U\to \mathbb {C} $, the following notation defines a canonical pairing:
$\langle f,g\rangle :=\int _{U}f(x)g(x)\,dx.$
• A multi-index of size $n$ is an element in $\mathbb {N} ^{n}$ (given that $n$ is fixed, if the size of multi-indices is omitted then the size should be assumed to be $n$). The length of a multi-index $\alpha =(\alpha _{1},\ldots ,\alpha _{n})\in \mathbb {N} ^{n}$ is defined as $\alpha _{1}+\cdots +\alpha _{n}$ and denoted by $|\alpha |.$ Multi-indices are particularly useful when dealing with functions of several variables; in particular, we introduce the following notations for a given multi-index $\alpha =(\alpha _{1},\ldots ,\alpha _{n})\in \mathbb {N} ^{n}$:
${\begin{aligned}x^{\alpha }&=x_{1}^{\alpha _{1}}\cdots x_{n}^{\alpha _{n}}\\\partial ^{\alpha }&={\frac {\partial ^{|\alpha |}}{\partial x_{1}^{\alpha _{1}}\cdots \partial x_{n}^{\alpha _{n}}}}\end{aligned}}$
We also introduce a partial order on the set of all multi-indices by $\beta \geq \alpha $ if and only if $\beta _{i}\geq \alpha _{i}$ for all $1\leq i\leq n.$ When $\beta \geq \alpha $ we define their multi-index binomial coefficient as:
${\binom {\beta }{\alpha }}:={\binom {\beta _{1}}{\alpha _{1}}}\cdots {\binom {\beta _{n}}{\alpha _{n}}}.$
• $\mathbb {K} $ will denote a certain non-empty collection of compact subsets of $U$ (described in detail below).
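These multi-index conventions are easy to check computationally. The following sketch (with ad hoc helper names `multi_pow`, `length`, and `multi_binom`, which are not standard notation) illustrates them for $n=2$:

```python
import math

def multi_pow(x, alpha):
    """x^alpha = x_1^{a_1} * ... * x_n^{a_n} for a point x and multi-index alpha."""
    return math.prod(xi ** ai for xi, ai in zip(x, alpha))

def length(alpha):
    """|alpha| = a_1 + ... + a_n."""
    return sum(alpha)

def multi_binom(beta, alpha):
    """Multi-index binomial coefficient; defined when beta >= alpha componentwise."""
    assert all(b >= a for b, a in zip(beta, alpha)), "requires beta >= alpha"
    return math.prod(math.comb(b, a) for b, a in zip(beta, alpha))

# Example with n = 2:
print(multi_pow((2.0, 3.0), (1, 2)))  # 2^1 * 3^2 = 18.0
print(length((1, 2)))                 # |alpha| = 3
print(multi_binom((3, 2), (1, 1)))    # C(3,1) * C(2,1) = 6
```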
Definitions of test functions and distributions
In this section, we will formally define real-valued distributions on U. With minor modifications, one can also define complex-valued distributions, and one can replace $\mathbb {R} ^{n}$ with any (paracompact) smooth manifold.
Notation:
1. Let $k\in \{0,1,2,\ldots ,\infty \}.$
2. Let $C^{k}(U)$ denote the vector space of all k-times continuously differentiable real or complex-valued functions on U.
3. For any compact subset $K\subseteq U,$ let $C^{k}(K)$ and $C^{k}(K;U)$ both denote the vector space of all those functions $f\in C^{k}(U)$ such that $\operatorname {supp} (f)\subseteq K.$
• If $f\in C^{k}(K)$ then the domain of $f$ is U and not K. So although $C^{k}(K)$ depends on both K and U, only K is typically indicated. The justification for this common practice is detailed below. The notation $C^{k}(K;U)$ will only be used when the notation $C^{k}(K)$ risks being ambiguous.
• Every $C^{k}(K)$ contains the constant 0 map, even if $K=\varnothing .$
4. Let $C_{c}^{k}(U)$ denote the set of all $f\in C^{k}(U)$ such that $f\in C^{k}(K)$ for some compact subset K of U.
• Equivalently, $C_{c}^{k}(U)$ is the set of all $f\in C^{k}(U)$ such that $f$ has compact support.
• $C_{c}^{k}(U)$ is equal to the union of all $C^{k}(K)$ as $K$ ranges over $\mathbb {K} .$
• If $f$ is a real-valued function on U, then $f$ is an element of $C_{c}^{k}(U)$ if and only if $f$ is a $C^{k}$ bump function. Every real-valued test function on $U$ is always also a complex-valued test function on $U.$
Note that for all $j,k\in \{0,1,2,\ldots ,\infty \}$ and any compact subsets K and L of U, we have:
${\begin{aligned}C^{k}(K)&\subseteq C_{c}^{k}(U)\subseteq C^{k}(U)\\C^{k}(K)&\subseteq C^{k}(L)&&{\text{ if }}K\subseteq L\\C^{k}(K)&\subseteq C^{j}(K)&&{\text{ if }}j\leq k\\C_{c}^{k}(U)&\subseteq C_{c}^{j}(U)&&{\text{ if }}j\leq k\\C^{k}(U)&\subseteq C^{j}(U)&&{\text{ if }}j\leq k\\\end{aligned}}$
Definition: Elements of $C_{c}^{\infty }(U)$ are called test functions on U and $C_{c}^{\infty }(U)$ is called the space of test functions on U. We will use both ${\mathcal {D}}(U)$ and $C_{c}^{\infty }(U)$ to denote this space.
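A standard concrete example of a test function is the bump $x\mapsto e^{-1/(1-|x|^{2})}$ on the open unit ball, extended by $0$ elsewhere; it is infinitely differentiable with support equal to the closed unit ball. A minimal numerical sketch (evaluating the function only, without verifying smoothness):

```python
import math

def bump(x):
    """Standard bump: exp(-1/(1 - |x|^2)) for |x| < 1, else 0.
    Smooth on R^n, with supp(bump) = the closed unit ball."""
    r2 = sum(xi * xi for xi in x)
    if r2 >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - r2))

print(bump((0.0,)))      # e^{-1} ≈ 0.3679 at the origin
print(bump((1.0,)))      # 0.0 on the boundary and outside
print(bump((0.5, 0.5)))  # e^{-2} ≈ 0.1353 inside the ball in R^2
```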
Distributions on U are defined to be the continuous linear functionals on $C_{c}^{\infty }(U)$ when this vector space is endowed with a particular topology called the canonical LF-topology. This topology is unfortunately not easy to define, but it is nevertheless possible to characterize distributions in a way that makes no mention of the canonical LF-topology.
Proposition: If T is a linear functional on $C_{c}^{\infty }(U)$ then T is a distribution if and only if any of the following equivalent conditions is satisfied:
1. For every compact subset $K\subseteq U$ there exist constants $C>0$ and $N\in \mathbb {N} $ (dependent on $K$) such that for all $f\in C^{\infty }(K),$[1]
$|T(f)|\leq C\sup\{|\partial ^{\alpha }f(x)|:x\in U,|\alpha |\leq N\}.$
2. For every compact subset $K\subseteq U$ there exist constants $C>0$ and $N\in \mathbb {N} $ such that for all $f\in C_{c}^{\infty }(U)$ with support contained in $K,$[2]
$|T(f)|\leq C\sup\{|\partial ^{\alpha }f(x)|:x\in K,|\alpha |\leq N\}.$
3. For any compact subset $K\subseteq U$ and any sequence $\{f_{i}\}_{i=1}^{\infty }$ in $C^{\infty }(K),$ if $\{\partial ^{\alpha }f_{i}\}_{i=1}^{\infty }$ converges uniformly to zero on $K$ for all multi-indices $\alpha $, then $T(f_{i})\to 0.$
The above characterizations can be used to determine whether or not a linear functional is a distribution, but more advanced uses of distributions and test functions (such as applications to differential equations) are limited if no topologies are placed on $C_{c}^{\infty }(U)$ and ${\mathcal {D}}(U).$ To define the space of distributions we must first define the canonical LF-topology, which in turn requires that several other locally convex topological vector spaces (TVSs) be defined first. First, a (non-normable) topology on $C^{\infty }(U)$ will be defined, then every $C^{\infty }(K)$ will be endowed with the subspace topology induced on it by $C^{\infty }(U),$ and finally the (non-metrizable) canonical LF-topology on $C_{c}^{\infty }(U)$ will be defined. The space of distributions, being defined as the continuous dual space of $C_{c}^{\infty }(U),$ is then endowed with the (non-metrizable) strong dual topology induced by $C_{c}^{\infty }(U)$ and the canonical LF-topology (this topology is a generalization of the usual operator norm induced topology that is placed on the continuous dual spaces of normed spaces). This finally permits consideration of more advanced notions such as convergence of distributions (both sequences and nets), various (sub)spaces of distributions, and operations on distributions, including extending differential equations to distributions.
Choice of compact sets K
Throughout, $\mathbb {K} $ will be any collection of compact subsets of $U$ such that (1) $ U=\bigcup _{K\in \mathbb {K} }K,$ and (2) for any compact $K\subseteq U$ there exists some $K_{2}\in \mathbb {K} $ such that $K\subseteq K_{2}.$ The most common choices for $\mathbb {K} $ are:
• The set of all compact subsets of $U,$ or
• A set $\left\{{\overline {U}}_{1},{\overline {U}}_{2},\ldots \right\}$ where $ U=\bigcup _{i=1}^{\infty }U_{i},$ and for all i, ${\overline {U}}_{i}\subseteq U_{i+1}$ and $U_{i}$ is a relatively compact non-empty open subset of $U$ (here, "relatively compact" means that the closure of $U_{i},$ in either U or $\mathbb {R} ^{n},$ is compact).
We make $\mathbb {K} $ into a directed set by defining $K_{1}\leq K_{2}$ if and only if $K_{1}\subseteq K_{2}.$ Note that although the definitions of the subsequently defined topologies explicitly reference $\mathbb {K} ,$ in reality they do not depend on the choice of $\mathbb {K} ;$ that is, if $\mathbb {K} _{1}$ and $\mathbb {K} _{2}$ are any two such collections of compact subsets of $U,$ then the topologies defined on $C^{k}(U)$ and $C_{c}^{k}(U)$ by using $\mathbb {K} _{1}$ in place of $\mathbb {K} $ are the same as those defined by using $\mathbb {K} _{2}$ in place of $\mathbb {K} .$
Topology on Ck(U)
We now introduce the seminorms that will define the topology on $C^{k}(U).$ Different authors sometimes use different families of seminorms so we list the most common families below. However, the resulting topology is the same no matter which family is used.
Suppose $k\in \{0,1,2,\ldots ,\infty \}$ and $K$ is an arbitrary compact subset of $U.$ Suppose $i$ is an integer such that $0\leq i\leq k$[note 3] and $p$ is a multi-index with length $|p|\leq k.$ For $K\neq \varnothing ,$ define:
${\begin{alignedat}{4}{\text{ (1) }}\ &s_{p,K}(f)&&:=\sup _{x_{0}\in K}\left|\partial ^{p}f(x_{0})\right|\\[4pt]{\text{ (2) }}\ &q_{i,K}(f)&&:=\sup _{|p|\leq i}\left(\sup _{x_{0}\in K}\left|\partial ^{p}f(x_{0})\right|\right)=\sup _{|p|\leq i}\left(s_{p,K}(f)\right)\\[4pt]{\text{ (3) }}\ &r_{i,K}(f)&&:=\sup _{\stackrel {|p|\leq i}{x_{0}\in K}}\left|\partial ^{p}f(x_{0})\right|\\[4pt]{\text{ (4) }}\ &t_{i,K}(f)&&:=\sup _{x_{0}\in K}\left(\sum _{|p|\leq i}\left|\partial ^{p}f(x_{0})\right|\right)\end{alignedat}}$
while for $K=\varnothing ,$ define all the functions above to be the constant 0 map.
All of the functions above are non-negative $\mathbb {R} $-valued[note 4] seminorms on $C^{k}(U).$ As explained in this article, every set of seminorms on a vector space induces a locally convex vector topology.
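A small numerical sketch of the seminorms $s_{p,K}$ and $q_{i,K}$ on $\mathbb {R} $ (derivatives are approximated by central differences and suprema by sampling; both are assumptions of this illustration, not part of the definitions):

```python
import math

def deriv(f, x, h=1e-5):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def s_pK(f, p, K, samples=1001):
    """Approximate s_{p,K}(f) = sup_{x in K} |d^p f(x)| for p in {0, 1} on K = [a, b]."""
    a, b = K
    g = f if p == 0 else (lambda x: deriv(f, x))
    return max(abs(g(a + (b - a) * i / (samples - 1))) for i in range(samples))

def q_iK(f, i, K):
    """q_{i,K}(f) = sup_{|p| <= i} s_{p,K}(f)."""
    return max(s_pK(f, p, K) for p in range(i + 1))

K = (0.0, math.pi)
print(s_pK(math.sin, 0, K))  # sup |sin| on [0, pi], i.e. 1
print(q_iK(math.sin, 1, K))  # max(sup |sin|, sup |cos|) on [0, pi], i.e. 1
```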
Each of the following sets of seminorms
${\begin{alignedat}{4}A~:=\quad &\{q_{i,K}&&:\;K{\text{ compact and }}\;&&i\in \mathbb {N} {\text{ satisfies }}\;&&0\leq i\leq k\}\\B~:=\quad &\{r_{i,K}&&:\;K{\text{ compact and }}\;&&i\in \mathbb {N} {\text{ satisfies }}\;&&0\leq i\leq k\}\\C~:=\quad &\{t_{i,K}&&:\;K{\text{ compact and }}\;&&i\in \mathbb {N} {\text{ satisfies }}\;&&0\leq i\leq k\}\\D~:=\quad &\{s_{p,K}&&:\;K{\text{ compact and }}\;&&p\in \mathbb {N} ^{n}{\text{ satisfies }}\;&&|p|\leq k\}\end{alignedat}}$
generate the same locally convex vector topology on $C^{k}(U)$ (so for example, the topology generated by the seminorms in $A$ is equal to the topology generated by those in $C$).
The vector space $C^{k}(U)$ is endowed with the locally convex topology induced by any one of the four families $A,B,C,D$ of seminorms described above. This topology is also equal to the vector topology induced by all of the seminorms in $A\cup B\cup C\cup D.$
With this topology, $C^{k}(U)$ becomes a locally convex Fréchet space that is not normable. Every element of $A\cup B\cup C\cup D$ is a continuous seminorm on $C^{k}(U).$ Under this topology, a net $(f_{i})_{i\in I}$ in $C^{k}(U)$ converges to $f\in C^{k}(U)$ if and only if for every multi-index $p$ with $|p|<k+1$ and every compact $K,$ the net of partial derivatives $\left(\partial ^{p}f_{i}\right)_{i\in I}$ converges uniformly to $\partial ^{p}f$ on $K.$[3] For any $k\in \{0,1,2,\ldots ,\infty \},$ any (von Neumann) bounded subset of $C^{k+1}(U)$ is a relatively compact subset of $C^{k}(U).$[4] In particular, a subset of $C^{\infty }(U)$ is bounded if and only if it is bounded in $C^{i}(U)$ for all $i\in \mathbb {N} .$[4] The space $C^{k}(U)$ is a Montel space if and only if $k=\infty .$[5]
The topology on $C^{\infty }(U)$ is the superior limit of the subspace topologies induced on $C^{\infty }(U)$ by the TVSs $C^{i}(U)$ as i ranges over the non-negative integers.[3] A subset $W$ of $C^{\infty }(U)$ is open in this topology if and only if there exists $i\in \mathbb {N} $ such that $W$ is open when $C^{\infty }(U)$ is endowed with the subspace topology induced on it by $C^{i}(U).$
Metric defining the topology
If the family of compact sets $\mathbb {K} =\left\{{\overline {U}}_{1},{\overline {U}}_{2},\ldots \right\}$ satisfies $ U=\bigcup _{j=1}^{\infty }U_{j}$ and ${\overline {U}}_{i}\subseteq U_{i+1}$ for all $i,$ then a complete translation-invariant metric on $C^{\infty }(U)$ can be obtained by taking a suitable countable Fréchet combination of any one of the above defining families of seminorms (A through D). For example, using the seminorms $(r_{i,K_{i}})_{i=1}^{\infty }$ results in the metric
$d(f,g):=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {r_{i,{\overline {U}}_{i}}(f-g)}{1+r_{i,{\overline {U}}_{i}}(f-g)}}=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {\sup _{|p|\leq i,x\in {\overline {U}}_{i}}\left|\partial ^{p}(f-g)(x)\right|}{\left[1+\sup _{|p|\leq i,x\in {\overline {U}}_{i}}\left|\partial ^{p}(f-g)(x)\right|\right]}}.$
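The essential point is that each summand is bounded by $2^{-i},$ so the series converges for every pair $f,g,$ and $d(f,g)=0$ exactly when every seminorm of $f-g$ vanishes. A sketch operating on precomputed (hypothetical) seminorm values $r_{i}=r_{i,{\overline {U}}_{i}}(f-g)$:

```python
def frechet_metric(r):
    """d = sum_{i>=1} 2^{-i} * r_i / (1 + r_i), given a list of
    precomputed seminorm values r_i = r_{i, cl(U_i)}(f - g) >= 0."""
    return sum((0.5 ** i) * ri / (1.0 + ri) for i, ri in enumerate(r, start=1))

# Each term stays below 2^{-i}, no matter how large the seminorms are:
print(frechet_metric([1.0] * 50))  # just under 0.5
print(frechet_metric([1e9] * 50))  # still bounded above by 1
print(frechet_metric([0.0] * 50))  # distance 0 when all seminorms vanish
```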
Often, it is easier to just consider seminorms (avoiding any metric) and use the tools of functional analysis.
Topology on Ck(K)
As before, fix $k\in \{0,1,2,\ldots ,\infty \}.$ Recall that if $K$ is any compact subset of $U$ then $C^{k}(K)\subseteq C^{k}(U).$
Assumption: For any compact subset $K\subseteq U,$ we will henceforth assume that $C^{k}(K)$ is endowed with the subspace topology it inherits from the Fréchet space $C^{k}(U).$
For any compact subset $K\subseteq U,$ $C^{k}(K)$ is a closed subspace of the Fréchet space $C^{k}(U)$ and is thus also a Fréchet space. For all compact $K,L\subseteq U$ satisfying $K\subseteq L,$ denote the inclusion map by $\operatorname {In} _{K}^{L}:C^{k}(K)\to C^{k}(L).$ Then this map is a linear embedding of TVSs (that is, it is a linear map that is also a topological embedding) whose image (or "range") is closed in its codomain; said differently, the topology on $C^{k}(K)$ is identical to the subspace topology it inherits from $C^{k}(L),$ and also $C^{k}(K)$ is a closed subset of $C^{k}(L).$ The interior of $C^{\infty }(K)$ relative to $C^{\infty }(U)$ is empty.[6]
If $k$ is finite then $C^{k}(K)$ is a Banach space[7] with a topology that can be defined by the norm
$r_{K}(f):=\sup _{|p|\leq k}\left(\sup _{x_{0}\in K}\left|\partial ^{p}f(x_{0})\right|\right).$
When $k=2,$ the space $C^{k}(K)$ is even a Hilbert space.[7] The space $C^{\infty }(K)$ is a distinguished Schwartz Montel space, so if $C^{\infty }(K)\neq \{0\}$ then it is not normable and thus not a Banach space (although, like all the other spaces $C^{k}(K),$ it is a Fréchet space).
Trivial extensions and independence of Ck(K)'s topology from U
The definition of $C^{k}(K)$ depends on U so we will let $C^{k}(K;U)$ denote the topological space $C^{k}(K),$ which by definition is a topological subspace of $C^{k}(U).$ Suppose $V$ is an open subset of $\mathbb {R} ^{n}$ containing $U$ and for any compact subset $K\subseteq V,$ let $C^{k}(K;V)$ denote the vector subspace of $C^{k}(V)$ consisting of maps with support contained in $K.$ Given $f\in C_{c}^{k}(U),$ its trivial extension to V is, by definition, the function $I(f):=F:V\to \mathbb {C} $ defined by:
$F(x)={\begin{cases}f(x)&x\in U,\\0&{\text{otherwise}},\end{cases}}$
so that $F\in C^{k}(V).$ Let $I:C_{c}^{k}(U)\to C^{k}(V)$ denote the map that sends a function in $C_{c}^{k}(U)$ to its trivial extension on V. This map is a linear injection and for every compact subset $K\subseteq U$ (where $K$ is also a compact subset of $V$ since $K\subseteq U\subseteq V$) we have
${\begin{alignedat}{4}I\left(C^{k}(K;U)\right)&~=~C^{k}(K;V)\qquad {\text{ and thus }}\\I\left(C_{c}^{k}(U)\right)&~\subseteq ~C_{c}^{k}(V)\end{alignedat}}$
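A minimal sketch of the trivial-extension map $I$, shown for $k=0$ with a continuous compactly supported function (the membership test for $U$ and the particular $f$ are assumptions of this illustration):

```python
def trivially_extend(f, U_contains):
    """Given f in C_c^k(U) and a membership test for U, return F = I(f),
    which agrees with f on U and is 0 on the rest of the larger set V."""
    def F(x):
        return f(x) if U_contains(x) else 0.0
    return F

# f is continuous (k = 0) with supp(f) = [0.25, 0.75] inside U = (0, 1);
# F is its trivial extension to V = R:
f = lambda x: max(0.0, 0.25 - abs(x - 0.5))
F = trivially_extend(f, lambda x: 0.0 < x < 1.0)
print(F(0.5))  # 0.25, agreeing with f on U
print(F(2.0))  # 0.0 outside U
```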
If I is restricted to $C^{k}(K;U)$ then the following induced linear map is a homeomorphism (and thus a TVS-isomorphism):
${\begin{alignedat}{4}\,&C^{k}(K;U)&&\to \,&&C^{k}(K;V)\\&f&&\mapsto \,&&I(f)\\\end{alignedat}}$
and thus the next two maps (which like the previous map are defined by $f\mapsto I(f)$) are topological embeddings:
$C^{k}(K;U)\to C^{k}(V),\qquad {\text{ and }}\qquad C^{k}(K;U)\to C_{c}^{k}(V),$
(the topology on $C_{c}^{k}(V)$ is the canonical LF topology, which is defined later). Using the injection
$I:C_{c}^{k}(U)\to C^{k}(V)$
the vector space $C_{c}^{k}(U)$ is canonically identified with its image in $C_{c}^{k}(V)\subseteq C^{k}(V)$ (however, if $U\neq V$ then $I:C_{c}^{\infty }(U)\to C_{c}^{\infty }(V)$ is not a topological embedding when these spaces are endowed with their canonical LF topologies, although it is continuous).[8] Because $C^{k}(K;U)\subseteq C_{c}^{k}(U),$ through this identification, $C^{k}(K;U)$ can also be considered as a subset of $C^{k}(V).$ Importantly, the subspace topology $C^{k}(K;U)$ inherits from $C^{k}(U)$ (when it is viewed as a subset of $C^{k}(U)$) is identical to the subspace topology that it inherits from $C^{k}(V)$ (when $C^{k}(K;U)$ is viewed instead as a subset of $C^{k}(V)$ via the identification). Thus the topology on $C^{k}(K;U)$ is independent of the open subset U of $\mathbb {R} ^{n}$ that contains K.[6] This justifies the practice of writing $C^{k}(K)$ instead of $C^{k}(K;U).$
Canonical LF topology
See also: LF-space and Topology of uniform convergence
Recall that $C_{c}^{k}(U)$ denotes the set of all functions in $C^{k}(U)$ that have compact support in $U,$ and that $C_{c}^{k}(U)$ is the union of all $C^{k}(K)$ as K ranges over $\mathbb {K} .$ Moreover, for every k, $C_{c}^{k}(U)$ is a dense subset of $C^{k}(U).$ The special case when $k=\infty $ gives us the space of test functions.
$C_{c}^{\infty }(U)$ is called the space of test functions on $U$ and it may also be denoted by ${\mathcal {D}}(U).$
This section defines the canonical LF topology as a direct limit. It is also possible to define this topology in terms of its neighborhoods of the origin, which is described afterwards.
Topology defined by direct limits
For any two sets K and L, we declare that $K\leq L$ if and only if $K\subseteq L,$ which in particular makes the collection $\mathbb {K} $ of compact subsets of U into a directed set (we say that such a collection is directed by subset inclusion). For all compact $K,L\subseteq U$ satisfying $K\subseteq L,$ there are inclusion maps
$\operatorname {In} _{K}^{L}:C^{k}(K)\to C^{k}(L)\quad {\text{and}}\quad \operatorname {In} _{K}^{U}:C^{k}(K)\to C_{c}^{k}(U).$
Recall from above that the map $\operatorname {In} _{K}^{L}:C^{k}(K)\to C^{k}(L)$ is a topological embedding. The collection of maps
$\left\{\operatorname {In} _{K}^{L}\;:\;K,L\in \mathbb {K} \;{\text{ and }}\;K\subseteq L\right\}$
forms a direct system in the category of locally convex topological vector spaces that is directed by $\mathbb {K} $ (under subset inclusion). This system's direct limit (in the category of locally convex TVSs) is the pair $(C_{c}^{k}(U),\operatorname {In} _{\bullet }^{U})$ where $\operatorname {In} _{\bullet }^{U}:=\left(\operatorname {In} _{K}^{U}\right)_{K\in \mathbb {K} }$ are the natural inclusions and where $C_{c}^{k}(U)$ is now endowed with the (unique) strongest locally convex topology making all of the inclusion maps $\operatorname {In} _{\bullet }^{U}=(\operatorname {In} _{K}^{U})_{K\in \mathbb {K} }$ continuous.
The canonical LF topology on $C_{c}^{k}(U)$ is the finest locally convex topology on $C_{c}^{k}(U)$ making all of the inclusion maps $\operatorname {In} _{K}^{U}:C^{k}(K)\to C_{c}^{k}(U)$ continuous (where K ranges over $\mathbb {K} $).
As is common in mathematics literature, the space $C_{c}^{k}(U)$ is henceforth assumed to be endowed with its canonical LF topology (unless explicitly stated otherwise).
Topology defined by neighborhoods of the origin
If $W$ is a convex subset of $C_{c}^{k}(U),$ then $W$ is a neighborhood of the origin in the canonical LF topology if and only if it satisfies the following condition:
For all $K\in \mathbb {K} ,$ $W\cap C^{k}(K)$ is a neighborhood of the origin in $C^{k}(K).$
(CN)
Note that any convex set satisfying this condition is necessarily absorbing in $C_{c}^{k}(U).$ Since the topology of any topological vector space is translation-invariant, any TVS-topology is completely determined by its set of neighborhoods of the origin. This means that one could actually define the canonical LF topology by declaring that a convex balanced subset $W$ is a neighborhood of the origin if and only if it satisfies condition (CN).
Topology defined via differential operators
A linear differential operator in U with smooth coefficients is a sum
$P:=\sum _{\alpha \in \mathbb {N} ^{n}}c_{\alpha }\partial ^{\alpha }$
where $c_{\alpha }\in C^{\infty }(U)$ and all but finitely many of $c_{\alpha }$ are identically 0. The integer $\sup\{|\alpha |:c_{\alpha }\neq 0\}$ is called the order of the differential operator $P.$ If $P$ is a linear differential operator of order k then it induces a canonical linear map $C^{k}(U)\to C^{0}(U)$ defined by $\phi \mapsto P\phi ,$ where we shall reuse notation and also denote this map by $P.$[9]
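A sketch of the induced map $\phi \mapsto P\phi $ for a first-order operator on $\mathbb {R} $ (the derivative is approximated by central differences, an assumption of this illustration, not part of the definition):

```python
import math

def deriv(f, x, h=1e-5):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def make_operator(c0, c1):
    """P = c0(x)*id + c1(x)*d/dx, a linear differential operator of order 1
    with smooth coefficient functions c0, c1."""
    def P(f):
        return lambda x: c0(x) * f(x) + c1(x) * deriv(f, x)
    return P

# P(f) = f + x f'; applied to f = exp this gives (1 + x) e^x:
P = make_operator(lambda x: 1.0, lambda x: x)
Pf = P(math.exp)
print(Pf(1.0))  # (1 + 1) e^1 = 2e ≈ 5.4366
```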
For any $1\leq k\leq \infty ,$ the canonical LF topology on $C_{c}^{k}(U)$ is the weakest locally convex TVS topology making all linear differential operators in $U$ of order $\,<k+1$ into continuous maps from $C_{c}^{k}(U)$ into $C_{c}^{0}(U).$[9]
Canonical LF topology's independence from K
One benefit of defining the canonical LF topology as the direct limit of a direct system is that we may immediately use the universal property of direct limits. Another benefit is that we can use well-known results from category theory to deduce that the canonical LF topology is actually independent of the particular choice of the directed collection $\mathbb {K} $ of compact sets. And by considering different collections $\mathbb {K} $ (in particular, those $\mathbb {K} $ mentioned at the beginning of this article), we may deduce different properties of this topology. In particular, we may deduce that the canonical LF topology makes $C_{c}^{k}(U)$ into a Hausdorff locally convex strict LF-space (and also a strict LB-space if $k\neq \infty $), which of course is the reason why this topology is called "the canonical LF topology" (see this footnote for more details).[note 5]
Universal property
From the universal property of direct limits, we know that if $u:C_{c}^{k}(U)\to Y$ is a linear map into a locally convex space Y (not necessarily Hausdorff), then u is continuous if and only if u is bounded if and only if for every $K\in \mathbb {K} ,$ the restriction of u to $C^{k}(K)$ is continuous (or bounded).[10][11]
Dependence of the canonical LF topology on U
Suppose V is an open subset of $\mathbb {R} ^{n}$ containing $U.$ Let $I:C_{c}^{k}(U)\to C_{c}^{k}(V)$ denote the map that sends a function in $C_{c}^{k}(U)$ to its trivial extension on V (which was defined above). This map is a continuous linear map.[8] If (and only if) $U\neq V$ then $I\left(C_{c}^{\infty }(U)\right)$ is not a dense subset of $C_{c}^{\infty }(V)$ and $I:C_{c}^{\infty }(U)\to C_{c}^{\infty }(V)$ is not a topological embedding.[8] Consequently, if $U\neq V$ then the transpose of $I:C_{c}^{\infty }(U)\to C_{c}^{\infty }(V)$ is neither one-to-one nor onto.[8]
Bounded subsets
A subset $B\subseteq C_{c}^{k}(U)$ is bounded in $C_{c}^{k}(U)$ if and only if there exists some $K\in \mathbb {K} $ such that $B\subseteq C^{k}(K)$ and $B$ is a bounded subset of $C^{k}(K).$[11] Moreover, if $K\subseteq U$ is compact and $S\subseteq C^{k}(K)$ then $S$ is bounded in $C^{k}(K)$ if and only if it is bounded in $C^{k}(U).$ For any $0\leq k\leq \infty ,$ any bounded subset of $C_{c}^{k+1}(U)$ (resp. $C^{k+1}(U)$) is a relatively compact subset of $C_{c}^{k}(U)$ (resp. $C^{k}(U)$), where $\infty +1=\infty .$[11]
Non-metrizability
For all compact $K\subseteq U,$ the interior of $C^{k}(K)$ in $C_{c}^{k}(U)$ is empty so that $C_{c}^{k}(U)$ is of the first category in itself. It follows from Baire's theorem that $C_{c}^{k}(U)$ is not metrizable and thus also not normable (see this footnote[note 6] for an explanation of how the non-metrizable space $C_{c}^{k}(U)$ can be complete even though it does not admit a metric). The fact that $C_{c}^{\infty }(U)$ is a nuclear Montel space makes up for the non-metrizability of $C_{c}^{\infty }(U)$ (see this footnote for a more detailed explanation).[note 7]
Relationships between spaces
Using the universal property of direct limits and the fact that the natural inclusions $\operatorname {In} _{K}^{L}:C^{k}(K)\to C^{k}(L)$ are all topological embeddings, one may show that all of the maps $\operatorname {In} _{K}^{U}:C^{k}(K)\to C_{c}^{k}(U)$ are also topological embeddings. Said differently, the topology on $C^{k}(K)$ is identical to the subspace topology that it inherits from $C_{c}^{k}(U),$ where recall that $C^{k}(K)$'s topology was defined to be the subspace topology induced on it by $C^{k}(U).$ In particular, both $C_{c}^{k}(U)$ and $C^{k}(U)$ induce the same subspace topology on $C^{k}(K).$ However, this does not imply that the canonical LF topology on $C_{c}^{k}(U)$ is equal to the subspace topology induced on $C_{c}^{k}(U)$ by $C^{k}(U)$; these two topologies on $C_{c}^{k}(U)$ are in fact never equal to each other, since the canonical LF topology is never metrizable while the subspace topology induced by $C^{k}(U)$ is metrizable (recall that $C^{k}(U)$ is metrizable). The canonical LF topology on $C_{c}^{k}(U)$ is actually strictly finer than the subspace topology that it inherits from $C^{k}(U)$ (thus the natural inclusion $C_{c}^{k}(U)\to C^{k}(U)$ is continuous but not a topological embedding).[7]
Indeed, the canonical LF topology is so fine that if $C_{c}^{\infty }(U)\to X$ denotes some linear map that is a "natural inclusion" (such as $C_{c}^{\infty }(U)\to C^{k}(U),$ or $C_{c}^{\infty }(U)\to L^{p}(U),$ or other maps discussed below) then this map will typically be continuous, which (as is explained below) is ultimately the reason why locally integrable functions, Radon measures, etc. all induce distributions (via the transpose of such a "natural inclusion"). Said differently, the reason why there are so many different ways of defining distributions from other spaces ultimately stems from how very fine the canonical LF topology is. Moreover, since distributions are just continuous linear functionals on $C_{c}^{\infty }(U),$ the fine nature of the canonical LF topology means that more linear functionals on $C_{c}^{\infty }(U)$ end up being continuous than would be under a coarser topology, such as the subspace topology induced by some $C^{k}(U).$ Although that coarser topology would make $C_{c}^{\infty }(U)$ metrizable, it would also result in fewer continuous linear functionals (and thus fewer distributions), and it has the additional disadvantage of not making $C_{c}^{\infty }(U)$ into a complete TVS.[12]
Other properties
• The differentiation map $C_{c}^{\infty }(U)\to C_{c}^{\infty }(U)$ is a surjective continuous linear operator.[13]
• The bilinear multiplication map $C^{\infty }(\mathbb {R} ^{m})\times C_{c}^{\infty }(\mathbb {R} ^{n})\to C_{c}^{\infty }(\mathbb {R} ^{m+n})$ given by $(f,g)\mapsto fg$ is not continuous; it is however, hypocontinuous.[14]
Distributions
See also: Continuous linear functional
As discussed earlier, continuous linear functionals on $C_{c}^{\infty }(U)$ are known as distributions on U. Thus the set of all distributions on U is the continuous dual space of $C_{c}^{\infty }(U),$ which, when endowed with the strong dual topology, is denoted by ${\mathcal {D}}^{\prime }(U).$
By definition, a distribution on U is defined to be a continuous linear functional on $C_{c}^{\infty }(U).$ Said differently, a distribution on U is an element of the continuous dual space of $C_{c}^{\infty }(U)$ when $C_{c}^{\infty }(U)$ is endowed with its canonical LF topology.
We have the canonical duality pairing between a distribution T on U and a test function $f\in C_{c}^{\infty }(U),$ which is denoted using angle brackets by
${\begin{cases}{\mathcal {D}}^{\prime }(U)\times C_{c}^{\infty }(U)\to \mathbb {R} \\(T,f)\mapsto \langle T,f\rangle :=T(f)\end{cases}}$
One interprets this notation as the distribution T acting on the test function $f$ to give a scalar, or symmetrically as the test function $f$ acting on the distribution T.
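For instance, any locally integrable function $g$ induces a distribution $T_{g}(f):=\int _{U}g(x)f(x)\,dx,$ and this pairing can be approximated numerically. A sketch using a midpoint rule (the quadrature and the particular interval are assumptions of this illustration):

```python
import math

def regular_distribution(g, a, b, n=20000):
    """T_g(f) ~= integral_a^b g(x) f(x) dx via the midpoint rule,
    for a locally integrable g and a test function f supported in (a, b)."""
    def T(f):
        h = (b - a) / n
        return h * sum(g(a + (i + 0.5) * h) * f(a + (i + 0.5) * h) for i in range(n))
    return T

def bump(x):
    """A test function: the standard bump supported in [-1, 1]."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

T = regular_distribution(lambda x: 1.0, -2.0, 2.0)  # T_1(f) = integral of f
print(T(bump))  # the integral of the standard bump, roughly 0.444
```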
Characterizations of distributions
Proposition. If T is a linear functional on $C_{c}^{\infty }(U)$ then the following are equivalent:
1. T is a distribution;
2. T is continuous.
3. T is continuous at the origin.
4. T is uniformly continuous.
5. T is a bounded operator.
6. T is sequentially continuous.
• explicitly, for every sequence $\left(f_{i}\right)_{i=1}^{\infty }$ in $C_{c}^{\infty }(U)$ that converges in $C_{c}^{\infty }(U)$ to some $f\in C_{c}^{\infty }(U),$ $\lim _{i\to \infty }T\left(f_{i}\right)=T(f);$[note 8]
7. T is sequentially continuous at the origin; in other words, T maps null sequences[note 9] to null sequences.
• explicitly, for every sequence $\left(f_{i}\right)_{i=1}^{\infty }$ in $C_{c}^{\infty }(U)$ that converges in $C_{c}^{\infty }(U)$ to the origin (such a sequence is called a null sequence), $\lim _{i\to \infty }T\left(f_{i}\right)=0.$
• a null sequence is by definition a sequence that converges to the origin.
8. T maps null sequences to bounded subsets.
• explicitly, for every sequence $\left(f_{i}\right)_{i=1}^{\infty }$ in $C_{c}^{\infty }(U)$ that converges in $C_{c}^{\infty }(U)$ to the origin, the sequence $\left(T\left(f_{i}\right)\right)_{i=1}^{\infty }$ is bounded.
9. T maps Mackey convergent null sequences[note 10] to bounded subsets;
• explicitly, for every Mackey convergent null sequence $\left(f_{i}\right)_{i=1}^{\infty }$ in $C_{c}^{\infty }(U),$ the sequence $\left(T\left(f_{i}\right)\right)_{i=1}^{\infty }$ is bounded.
• a sequence $f_{\bullet }=\left(f_{i}\right)_{i=1}^{\infty }$ is said to be Mackey convergent to 0 if there exists a divergent sequence $r_{\bullet }=\left(r_{i}\right)_{i=1}^{\infty }\to \infty $ of positive real numbers such that the sequence $\left(r_{i}f_{i}\right)_{i=1}^{\infty }$ is bounded; every sequence that is Mackey convergent to 0 necessarily converges to the origin (in the usual sense).
10. The kernel of T is a closed subspace of $C_{c}^{\infty }(U).$
11. The graph of T is closed.
12. There exists a continuous seminorm $g$ on $C_{c}^{\infty }(U)$ such that $|T|\leq g.$
13. There exists a constant $C>0,$ a collection of continuous seminorms, ${\mathcal {P}},$ that defines the canonical LF topology of $C_{c}^{\infty }(U),$ and a finite subset $\left\{g_{1},\ldots ,g_{m}\right\}\subseteq {\mathcal {P}}$ such that $|T|\leq C(g_{1}+\cdots +g_{m});$[note 11]
14. For every compact subset $K\subseteq U$ there exist constants $C>0$ and $N\in \mathbb {N} $ such that for all $f\in C^{\infty }(K),$[1]
$|T(f)|\leq C\sup\{|\partial ^{\alpha }f(x)|:x\in U,|\alpha |\leq N\}.$
15. For every compact subset $K\subseteq U$ there exist constants $C_{K}>0$ and $N_{K}\in \mathbb {N} $ such that for all $f\in C_{c}^{\infty }(U)$ with support contained in $K,$[2]
$|T(f)|\leq C_{K}\sup\{\left|\partial ^{\alpha }f(x)\right|:x\in K,|\alpha |\leq N_{K}\}.$
16. For any compact subset $K\subseteq U$ and any sequence $\{f_{i}\}_{i=1}^{\infty }$ in $C^{\infty }(K),$ if $\{\partial ^{p}f_{i}\}_{i=1}^{\infty }$ converges uniformly to zero for all multi-indices $p,$ then $T(f_{i})\to 0.$
17. Any of the three statements immediately above (that is, statements 14, 15, and 16) but with the additional requirement that compact set $K$ belongs to $\mathbb {K} .$
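Condition 15 can be illustrated numerically. The sketch below (illustrative only; the helper names are hypothetical) takes $T=\delta _{0}',$ the distributional derivative of the Dirac delta, so that $T(f)=-f'(0)$; the estimate then holds with $C_{K}=1$ and $N_{K}=1$ for every $f$ supported in $K=[-1,1].$

```python
import math

# Numerical sketch of characterization 15 for T = delta', i.e.
# T(f) = -f'(0).  The estimate |T(f)| <= C_K * sup{|d^a f(x)| : x in K, |a| <= N_K}
# holds with C_K = 1 and N_K = 1 for every f supported in K = [-1, 1].

def bump(x):
    """Smooth bump supported in [-1, 1]."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def dbump(x, h=1e-5):
    """Central finite-difference approximation of bump'."""
    return (bump(x + h) - bump(x - h)) / (2 * h)

T_f = -dbump(0.0)                          # T(f) = -f'(0)

grid = [i / 1000.0 for i in range(-1000, 1001)]
sup_f = max(abs(bump(x)) for x in grid)    # sup |f|  on K
sup_df = max(abs(dbump(x)) for x in grid)  # sup |f'| on K

C_K, bound = 1.0, max(sup_f, sup_df)
print(abs(T_f) <= C_K * bound)             # the seminorm estimate holds
```

Since the bump is even, $f'(0)=0$ here; replacing `bump` by any other smooth function supported in $[-1,1]$ leaves the inequality intact.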
Topology on the space of distributions
See also: Strong dual space, Polar topology, Dual topology, and Dual system
Definition and notation: The space of distributions on U, denoted by ${\mathcal {D}}^{\prime }(U),$ is the continuous dual space of $C_{c}^{\infty }(U)$ endowed with the topology of uniform convergence on bounded subsets of $C_{c}^{\infty }(U).$[7] More succinctly, the space of distributions on U is ${\mathcal {D}}^{\prime }(U):=\left(C_{c}^{\infty }(U)\right)_{b}^{\prime }.$
The topology of uniform convergence on bounded subsets is also called the strong dual topology.[note 12] This topology is chosen because it is with this topology that ${\mathcal {D}}^{\prime }(U)$ becomes a nuclear Montel space and it is with this topology that the Schwartz kernel theorem holds.[15] No matter what dual topology is placed on ${\mathcal {D}}^{\prime }(U),$[note 13] a sequence of distributions converges in this topology if and only if it converges pointwise (although this need not be true of a net). No matter which topology is chosen, ${\mathcal {D}}^{\prime }(U)$ will be a non-metrizable, locally convex topological vector space. The space ${\mathcal {D}}^{\prime }(U)$ is separable[16] and has the strong Pytkeev property[17] but it is neither a k-space[17] nor a sequential space,[16] which in particular implies that it is not metrizable and also that its topology cannot be defined using only sequences.
Topological vector space categories
The canonical LF topology makes $C_{c}^{k}(U)$ into a complete distinguished strict LF-space (and a strict LB-space if and only if $k\neq \infty $[18]), which implies that $C_{c}^{k}(U)$ is a meager subset of itself.[19] Furthermore, $C_{c}^{k}(U),$ as well as its strong dual space, is a complete Hausdorff locally convex barrelled bornological Mackey space. The strong dual of $C_{c}^{k}(U)$ is a Fréchet space if and only if $k\neq \infty $ so in particular, the strong dual of $C_{c}^{\infty }(U),$ which is the space ${\mathcal {D}}^{\prime }(U)$ of distributions on U, is not metrizable (note that the weak-* topology on ${\mathcal {D}}^{\prime }(U)$ also is not metrizable and moreover, it further lacks almost all of the nice properties that the strong dual topology gives ${\mathcal {D}}^{\prime }(U)$).
The three spaces $C_{c}^{\infty }(U),$ $C^{\infty }(U),$ and the Schwartz space ${\mathcal {S}}(\mathbb {R} ^{n}),$ as well as the strong duals of each of these three spaces, are complete nuclear[20] Montel[21] bornological spaces, which implies that all six of these locally convex spaces are also paracompact[22] reflexive barrelled Mackey spaces. The spaces $C^{\infty }(U)$ and ${\mathcal {S}}(\mathbb {R} ^{n})$ are both distinguished Fréchet spaces. Moreover, both $C_{c}^{\infty }(U)$ and ${\mathcal {S}}(\mathbb {R} ^{n})$ are Schwartz TVSs.
Convergent sequences and their insufficiency to describe topologies
The strong dual spaces of $C^{\infty }(U)$ and ${\mathcal {S}}(\mathbb {R} ^{n})$ are sequential spaces but not Fréchet-Urysohn spaces.[16] Moreover, neither the space of test functions $C_{c}^{\infty }(U)$ nor its strong dual ${\mathcal {D}}^{\prime }(U)$ is a sequential space (not even an Ascoli space),[16][23] which in particular implies that their topologies can not be defined entirely in terms of convergent sequences.
A sequence $\left(f_{i}\right)_{i=1}^{\infty }$ in $C_{c}^{k}(U)$ converges in $C_{c}^{k}(U)$ if and only if there exists some $K\in \mathbb {K} $ such that $C^{k}(K)$ contains this sequence and this sequence converges in $C^{k}(K)$; equivalently, it converges to some $f\in C_{c}^{k}(U)$ if and only if the following two conditions hold:[24]
1. There is a compact set $K\subseteq U$ containing the supports of all $f_{i}.$
2. For each multi-index $\alpha ,$ the sequence of partial derivatives $\partial ^{\alpha }f_{i}$ tends uniformly to $\partial ^{\alpha }f.$
Since neither the space $C_{c}^{\infty }(U)$ nor its strong dual ${\mathcal {D}}^{\prime }(U)$ is a sequential space,[16][23] the above characterization of when a sequence converges is not enough to define the canonical LF topology on $C_{c}^{\infty }(U).$ The same can be said of the strong dual topology on ${\mathcal {D}}^{\prime }(U).$
What sequences do characterize
Nevertheless, sequences do characterize many important properties, as we now discuss. It is known that in the dual space of any Montel space, a sequence converges in the strong dual topology if and only if it converges in the weak* topology,[25] which in particular, is the reason why a sequence of distributions converges (in the strong dual topology) if and only if it converges pointwise (this leads many authors to use pointwise convergence to actually define the convergence of a sequence of distributions; this is fine for sequences but it does not extend to the convergence of nets of distributions since a net may converge pointwise but fail to converge in the strong dual topology).
Sequences characterize continuity of linear maps valued in locally convex space. Suppose X is a locally convex bornological space (such as any of the six TVSs mentioned earlier). Then a linear map $F:X\to Y$ into a locally convex space Y is continuous if and only if it maps null sequences[note 9] in X to bounded subsets of Y.[note 14] More generally, such a linear map $F:X\to Y$ is continuous if and only if it maps Mackey convergent null sequences[note 10] to bounded subsets of $Y.$ So in particular, if a linear map $F:X\to Y$ into a locally convex space is sequentially continuous at the origin then it is continuous.[26] However, this does not necessarily extend to non-linear maps and/or to maps valued in topological spaces that are not locally convex TVSs.
For every $k\in \{0,1,\ldots ,\infty \},C_{c}^{\infty }(U)$ is sequentially dense in $C_{c}^{k}(U).$[27] Furthermore, $\{D_{\phi }:\phi \in C_{c}^{\infty }(U)\}$ is a sequentially dense subset of ${\mathcal {D}}^{\prime }(U)$ (with its strong dual topology)[28] and also a sequentially dense subset of the strong dual space of $C^{\infty }(U).$[28]
Sequences of distributions
Main article: Limit of distributions
A sequence of distributions $(T_{i})_{i=1}^{\infty }$ converges with respect to the weak-* topology on ${\mathcal {D}}^{\prime }(U)$ to a distribution T if and only if
$\langle T_{i},f\rangle \to \langle T,f\rangle $
for every test function $f\in {\mathcal {D}}(U).$ For example, if $f_{m}:\mathbb {R} \to \mathbb {R} $ is the function
$f_{m}(x)={\begin{cases}m&{\text{ if }}x\in [0,{\frac {1}{m}}]\\0&{\text{ otherwise }}\end{cases}}$
and $T_{m}$ is the distribution corresponding to $f_{m},$ then
$\langle T_{m},f\rangle =m\int _{0}^{\frac {1}{m}}f(x)\,dx\to f(0)=\langle \delta ,f\rangle $
as $m\to \infty ,$ so $T_{m}\to \delta $ in ${\mathcal {D}}^{\prime }(\mathbb {R} ).$ Thus, for large $m,$ the function $f_{m}$ can be regarded as an approximation of the Dirac delta distribution.
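The convergence above can be checked in closed form for a test function that agrees with $e^{x}$ near the origin (the exponential itself is not compactly supported, but only its values on $[0,1/m]$ enter the pairing). A short sketch:

```python
import math

# Illustration (not a proof): <T_m, f> = m * integral of f over [0, 1/m]
# for a test function agreeing with f(x) = e^x near 0, so the pairing has
# the closed form m*(e^{1/m} - 1), which should approach f(0) = 1.
def pairing(m):
    return m * (math.exp(1.0 / m) - 1.0)

for m in (1, 10, 100, 1000):
    print(m, pairing(m))
# pairing(m) decreases toward 1 = f(0) = <delta, f> as m grows.
```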
Other properties
• The strong dual space of ${\mathcal {D}}^{\prime }(U)$ is TVS isomorphic to $C_{c}^{\infty }(U)$ via the canonical TVS-isomorphism $C_{c}^{\infty }(U)\to ({\mathcal {D}}^{\prime }(U))'_{b}$ defined by sending $f\in C_{c}^{\infty }(U)$ to evaluation at $f$ (that is, to the linear functional on ${\mathcal {D}}^{\prime }(U)$ defined by sending $d\in {\mathcal {D}}^{\prime }(U)$ to $d(f)$);
• On any bounded subset of ${\mathcal {D}}^{\prime }(U),$ the weak and strong subspace topologies coincide; the same is true for $C_{c}^{\infty }(U)$;
• Every weakly convergent sequence in ${\mathcal {D}}^{\prime }(U)$ is strongly convergent (although this does not extend to nets).
Localization of distributions
Preliminaries: Transpose of a linear operator
Main article: Transpose of a linear map
Operations on distributions and spaces of distributions are often defined by means of the transpose of a linear operator. This is because the transpose allows for a unified presentation of the many definitions in the theory of distributions and also because its properties are well known in functional analysis.[29] For instance, the well-known Hermitian adjoint of a linear operator between Hilbert spaces is just the operator's transpose (but with the Riesz representation theorem used to identify each Hilbert space with its continuous dual space). In general the transpose of a continuous linear map $A:X\to Y$ is the linear map
${}^{t}A:Y'\to X'\qquad {\text{ defined by }}\qquad {}^{t}A(y'):=y'\circ A,$
or equivalently, it is the unique map satisfying $\langle y',A(x)\rangle =\left\langle {}^{t}A(y'),x\right\rangle $ for all $x\in X$ and all $y'\in Y'$ (the prime symbol in $y'$ does not denote a derivative of any kind; it merely indicates that $y'$ is an element of the continuous dual space $Y'$). Since $A$ is continuous, the transpose ${}^{t}A:Y'\to X'$ is also continuous when both duals are endowed with their respective strong dual topologies; it is also continuous when both duals are endowed with their respective weak* topologies (see the articles polar topology and dual system for more details).
In the context of distributions, the characterization of the transpose can be refined slightly. Let $A:{\mathcal {D}}(U)\to {\mathcal {D}}(U)$ be a continuous linear map. Then by definition, the transpose of $A$ is the unique linear operator ${}^{t}A:{\mathcal {D}}'(U)\to {\mathcal {D}}'(U)$ that satisfies:
$\langle {}^{t}A(T),\phi \rangle =\langle T,A(\phi )\rangle \quad {\text{ for all }}\phi \in {\mathcal {D}}(U){\text{ and all }}T\in {\mathcal {D}}'(U).$
Since ${\mathcal {D}}(U)$ is dense in ${\mathcal {D}}'(U)$ (here, ${\mathcal {D}}(U)$ actually refers to the set of distributions $\left\{D_{\psi }:\psi \in {\mathcal {D}}(U)\right\}$) it is sufficient that the defining equality hold for all distributions of the form $T=D_{\psi }$ where $\psi \in {\mathcal {D}}(U).$ Explicitly, this means that a continuous linear map $B:{\mathcal {D}}'(U)\to {\mathcal {D}}'(U)$ is equal to ${}^{t}A$ if and only if the condition below holds:
$\langle B(D_{\psi }),\phi \rangle =\langle {}^{t}A(D_{\psi }),\phi \rangle \quad {\text{ for all }}\phi ,\psi \in {\mathcal {D}}(U)$
where the right hand side equals $\langle {}^{t}A(D_{\psi }),\phi \rangle =\langle D_{\psi },A(\phi )\rangle =\langle \psi ,A(\phi )\rangle =\int _{U}\psi \cdot A(\phi )\,dx.$
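In finite dimensions, where every linear map is continuous and each dual is identified with the space itself, the defining identity $\langle y',A(x)\rangle =\left\langle {}^{t}A(y'),x\right\rangle $ reduces to the familiar matrix transpose. A minimal sketch with arbitrary illustrative numbers:

```python
# Finite-dimensional sketch of the transpose identity
# <y', A x> = <tA y', x>, where tA is the matrix transpose.
A = [[1.0, 2.0, 0.0],
     [3.0, -1.0, 4.0]]           # A : R^3 -> R^2
x = [1.0, -2.0, 0.5]             # x in R^3
yp = [2.0, 1.0]                  # y' in (R^2)', identified with R^2

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

tA = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

lhs = dot(yp, matvec(A, x))      # <y', A x>
rhs = dot(matvec(tA, yp), x)     # <tA y', x>
print(lhs, rhs)                  # both sides agree
```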
Extensions and restrictions to an open subset
Let $V\subseteq U$ be open subsets of $\mathbb {R} ^{n}.$ Every function $f\in {\mathcal {D}}(V)$ can be extended by zero from its domain $V$ to a function on $U$ by setting it equal to $0$ on the complement $U\setminus V.$ This extension is a smooth compactly supported function called the trivial extension of $f$ to $U$ and it will be denoted by $E_{VU}(f).$ This assignment $f\mapsto E_{VU}(f)$ defines the trivial extension operator $E_{VU}:{\mathcal {D}}(V)\to {\mathcal {D}}(U),$ which is a continuous injective linear map. It is used to canonically identify ${\mathcal {D}}(V)$ as a vector subspace of ${\mathcal {D}}(U)$ (although not as a topological subspace). Its transpose (explained here)
$\rho _{VU}:={}^{t}E_{VU}:{\mathcal {D}}'(U)\to {\mathcal {D}}'(V),$
is called the restriction to $V$ of distributions in $U$[8] and as the name suggests, the image $\rho _{VU}(T)$ of a distribution $T\in {\mathcal {D}}'(U)$ under this map is a distribution on $V$ called the restriction of $T$ to $V.$ The defining condition of the restriction $\rho _{VU}(T)$ is:
$\langle \rho _{VU}T,\phi \rangle =\langle T,E_{VU}\phi \rangle \quad {\text{ for all }}\phi \in {\mathcal {D}}(V).$
If $V\neq U$ then the (continuous injective linear) trivial extension map $E_{VU}:{\mathcal {D}}(V)\to {\mathcal {D}}(U)$ is not a topological embedding (in other words, if this linear injection were used to identify ${\mathcal {D}}(V)$ as a subset of ${\mathcal {D}}(U)$ then ${\mathcal {D}}(V)$'s topology would be strictly finer than the subspace topology that ${\mathcal {D}}(U)$ induces on it; importantly, it would not be a topological subspace since that requires equality of topologies) and its range is also not dense in its codomain ${\mathcal {D}}(U).$[8] Consequently, if $V\neq U$ then the restriction mapping is neither injective nor surjective.[8] A distribution $S\in {\mathcal {D}}'(V)$ is said to be extendible to $U$ if it belongs to the range of the transpose of $E_{VU},$ and it is called extendible if it is extendible to $\mathbb {R} ^{n}.$[8]
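The defining condition $\langle \rho _{VU}T,\phi \rangle =\langle T,E_{VU}\phi \rangle $ can be sketched numerically. In the sketch below, $V=(0,1)$ sits inside $U=(-2,2),$ $T$ is integration against $|x|,$ and the quadrature routine is an illustrative stand-in, not part of the theory:

```python
import math

# Sketch of restriction via the transpose of trivial extension:
# V = (0, 1) inside U = (-2, 2); T = integration against |x| on U.
# <rho T, phi> = <T, E phi>, and E phi is phi extended by zero,
# so both sides reduce to the integral of |x| phi(x) over V.

def phi(x):
    """Smooth bump supported in [0.2, 0.8], a compact subset of V."""
    t = (x - 0.5) / 0.3
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def E_phi(x):
    """Trivial extension of phi to U = (-2, 2)."""
    return phi(x) if 0.0 < x < 1.0 else 0.0

def riemann(f, a, b, n=100_000):
    """Midpoint-rule quadrature (illustrative helper)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

T_on_U = riemann(lambda x: abs(x) * E_phi(x), -2.0, 2.0)   # <T, E phi>
rhoT_on_V = riemann(lambda x: abs(x) * phi(x), 0.0, 1.0)   # <rho T, phi>
print(abs(T_on_U - rhoT_on_V) < 1e-6)                      # the two pairings agree
```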
Spaces of distributions
For all $0<k<\infty $ and all $1<p<\infty ,$ all of the following canonical injections are continuous and have an image/range that is a dense subset of their codomain:[30][31]
${\begin{matrix}C_{c}^{\infty }(U)&\to &C_{c}^{k}(U)&\to &C_{c}^{0}(U)&\to &L_{c}^{\infty }(U)&\to &L_{c}^{p+1}(U)&\to &L_{c}^{p}(U)&\to &L_{c}^{1}(U)\\\downarrow &&\downarrow &&\downarrow &&&&&&&&&&\\C^{\infty }(U)&\to &C^{k}(U)&\to &C^{0}(U)&&&&&&&&&&\end{matrix}}$
where the topologies on the LB-spaces $L_{c}^{p}(U)$ are the canonical LF topologies as defined below (so in particular, they are not the usual norm topologies). The range of each of the maps above (and of any composition of the maps above) is dense in the codomain. Indeed, $C_{c}^{\infty }(U)$ is even sequentially dense in every $C_{c}^{k}(U).$[27] For every $1\leq p\leq \infty ,$ the canonical inclusion $C_{c}^{\infty }(U)\to L^{p}(U)$ into the normed space $L^{p}(U)$ (here $L^{p}(U)$ has its usual norm topology) is a continuous linear injection and the range of this injection is dense in its codomain if and only if $p\neq \infty $ .[31]
Suppose that $X$ is one of the LF-spaces $C_{c}^{k}(U)$ (for $k\in \{0,1,\ldots ,\infty \}$) or LB-spaces $L_{c}^{p}(U)$ (for $1\leq p\leq \infty $) or normed spaces $L^{p}(U)$ (for $1\leq p<\infty $).[31] Because the canonical injection $\operatorname {In} _{X}:C_{c}^{\infty }(U)\to X$ is a continuous injection whose image is dense in the codomain, this map's transpose ${}^{t}\operatorname {In} _{X}:X'_{b}\to {\mathcal {D}}'(U)=\left(C_{c}^{\infty }(U)\right)'_{b}$ is a continuous injection. This injective transpose map thus allows the continuous dual space $X'$ of $X$ to be identified with a certain vector subspace of the space ${\mathcal {D}}'(U)$ of all distributions (specifically, it is identified with the image of this transpose map). This continuous transpose map is not necessarily a TVS-embedding so the topology that this map transfers from its domain to the image $\operatorname {Im} \left({}^{t}\operatorname {In} _{X}\right)$ is finer than the subspace topology that this space inherits from ${\mathcal {D}}^{\prime }(U).$ A linear subspace of ${\mathcal {D}}^{\prime }(U)$ carrying a locally convex topology that is finer than the subspace topology induced by ${\mathcal {D}}^{\prime }(U)=\left(C_{c}^{\infty }(U)\right)_{b}^{\prime }$ is called a space of distributions.[32] Almost all of the spaces of distributions mentioned in this article arise in this way (e.g. tempered distribution, restrictions, distributions of order $\leq $ some integer, distributions induced by a positive Radon measure, distributions induced by an $L^{p}$-function, etc.) and any representation theorem about the dual space of X may, through the transpose ${}^{t}\operatorname {In} _{X}:X'_{b}\to {\mathcal {D}}^{\prime }(U),$ be transferred directly to elements of the space $\operatorname {Im} \left({}^{t}\operatorname {In} _{X}\right).$
Compactly supported Lp-spaces
Given $1\leq p\leq \infty ,$ the vector space $L_{c}^{p}(U)$ of compactly supported $L^{p}$ functions on $U$ and its topology are defined as direct limits of the spaces $L_{c}^{p}(K)$ in a manner analogous to how the canonical LF-topologies on $C_{c}^{k}(U)$ were defined. For any compact $K\subseteq U,$ let $L^{p}(K)$ denote the set of all elements in $L^{p}(U)$ (which recall are equivalence classes of Lebesgue measurable $L^{p}$ functions on $U$) having a representative $f$ whose support (which recall is the closure of $\{x\in U:f(x)\neq 0\}$ in $U$) is a subset of $K$ (such an $f$ is defined almost everywhere in $K$). The set $L^{p}(K)$ is a closed vector subspace of $L^{p}(U)$ and is thus a Banach space and, when $p=2,$ even a Hilbert space.[30] Let $L_{c}^{p}(U)$ be the union of all $L^{p}(K)$ as $K\subseteq U$ ranges over all compact subsets of $U.$ The set $L_{c}^{p}(U)$ is a vector subspace of $L^{p}(U)$ whose elements are the (equivalence classes of) compactly supported $L^{p}$ functions defined on $U$ (or almost everywhere on $U$). Endow $L_{c}^{p}(U)$ with the final topology (direct limit topology) induced by the inclusion maps $L^{p}(K)\to L_{c}^{p}(U)$ as $K\subseteq U$ ranges over all compact subsets of $U.$ This topology is called the canonical LF topology and it is equal to the final topology induced by any countable set of inclusion maps $L^{p}(K_{n})\to L_{c}^{p}(U)$ ($n=1,2,\ldots $) where $K_{1}\subseteq K_{2}\subseteq \cdots $ are any compact sets with union equal to $U.$[30] This topology makes $L_{c}^{p}(U)$ into an LB-space (and thus also an LF-space) with a topology that is strictly finer than the norm (subspace) topology that $L^{p}(U)$ induces on it.
Radon measures
The inclusion map $\operatorname {In} :C_{c}^{\infty }(U)\to C_{c}^{0}(U)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^{t}\operatorname {In} :\left(C_{c}^{0}(U)\right)_{b}^{\prime }\to {\mathcal {D}}^{\prime }(U)=\left(C_{c}^{\infty }(U)\right)_{b}^{\prime }$ is also a continuous injection.
Note that the continuous dual space $\left(C_{c}^{0}(U)\right)_{b}^{\prime }$ can be identified as the space of Radon measures, where there is a one-to-one correspondence between continuous linear functionals $T\in \left(C_{c}^{0}(U)\right)_{b}^{\prime }$ and integration against a Radon measure; that is,
• if $T\in \left(C_{c}^{0}(U)\right)_{b}^{\prime }$ then there exists a Radon measure $\mu $ on U such that for all $f\in C_{c}^{0}(U),T(f)=\textstyle \int _{U}f\,d\mu ,$ and
• if $\mu $ is a Radon measure on U then the linear functional on $C_{c}^{0}(U)$ defined by $C_{c}^{0}(U)\ni f\mapsto \textstyle \int _{U}f\,d\mu $ is continuous.
Through the injection ${}^{t}\operatorname {In} :\left(C_{c}^{0}(U)\right)_{b}^{\prime }\to {\mathcal {D}}^{\prime }(U),$ every Radon measure becomes a distribution on U. If $f$ is a locally integrable function on U then the distribution $\phi \mapsto \textstyle \int _{U}f(x)\phi (x)\,dx$ is a Radon measure; so Radon measures form a large and important space of distributions.
The following structure theorem shows that every Radon measure (regarded as a distribution) can be written as a sum of derivatives of locally $L^{\infty }$ functions on U:
Theorem.[33] — Suppose $T\in {\mathcal {D}}'(U)$ is a Radon measure, where $U\subseteq \mathbb {R} ^{n},$ let $V\subseteq U$ be a neighborhood of the support of $T,$ and let $I=\{p\in \mathbb {N} ^{n}:|p|\leq n\}.$ There exists a family $f=(f_{p})_{p\in I}$ of locally $L^{\infty }$ functions on U such that $\operatorname {supp} f_{p}\subseteq V$ for every $p\in I,$ and
$T=\sum _{p\in I}\partial ^{p}f_{p}.$
Furthermore, $T$ is also equal to a finite sum of derivatives of continuous functions on $U,$ where each derivative has order $\leq 2n.$
Positive Radon measures
A linear functional T on a space of functions is called positive if whenever a function $f$ that belongs to the domain of T is non-negative (meaning that $f$ is real-valued and $f\geq 0$) then $T(f)\geq 0.$ One may show that every positive linear functional on $C_{c}^{0}(U)$ is necessarily continuous (that is, necessarily a Radon measure).[34] Lebesgue measure is an example of a positive Radon measure.
Locally integrable functions as distributions
One particularly important class of Radon measures are those that are induced by locally integrable functions. A function $f:U\to \mathbb {R} $ is called locally integrable if it is Lebesgue integrable over every compact subset K of U.[note 15] This is a large class of functions that includes all continuous functions and all $L^{p}$ functions. The topology on ${\mathcal {D}}(U)$ is defined in such a fashion that any locally integrable function $f$ yields a continuous linear functional on ${\mathcal {D}}(U)$ – that is, an element of ${\mathcal {D}}^{\prime }(U)$ – denoted here by $T_{f},$ whose value on the test function $\phi $ is given by the Lebesgue integral:
$\langle T_{f},\phi \rangle =\int _{U}f\phi \,dx.$
Conventionally, one abuses notation by identifying $T_{f}$ with $f,$ provided no confusion can arise, and thus the pairing between $T_{f}$ and $\phi $ is often written
$\langle f,\phi \rangle =\langle T_{f},\phi \rangle .$
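This pairing can be sketched numerically even for a discontinuous (but locally integrable) function such as the Heaviside step. The sketch below (the quadrature helper is illustrative only) checks that $\langle T_{H},\phi \rangle =\int _{0}^{\infty }\phi \,dx$ for a bump function $\phi $ supported in $[-1,1]$:

```python
import math

# Sketch: the locally integrable Heaviside step H(x) = 1 for x >= 0
# defines a distribution via <T_H, phi> = integral of H*phi over R,
# which equals the integral of phi over [0, infinity).

def H(x):
    return 1.0 if x >= 0.0 else 0.0

def phi(x):
    """Smooth bump supported in [-1, 1]."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def riemann(f, a, b, n=100_000):
    """Midpoint-rule quadrature (illustrative helper)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lhs = riemann(lambda x: H(x) * phi(x), -1.0, 1.0)   # <T_H, phi>
rhs = riemann(phi, 0.0, 1.0)                        # integral of phi over [0, 1]
print(abs(lhs - rhs) < 1e-6)                        # the two computations agree
```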
If $f$ and $g$ are two locally integrable functions, then the associated distributions $T_{f}$ and $T_{g}$ are equal as elements of ${\mathcal {D}}^{\prime }(U)$ if and only if $f$ and $g$ are equal almost everywhere (see, for instance, Hörmander (1983, Theorem 1.2.5)). In a similar manner, every Radon measure $\mu $ on U defines an element of ${\mathcal {D}}^{\prime }(U)$ whose value on the test function $\phi $ is $\textstyle \int \phi \,d\mu .$ As above, it is conventional to abuse notation and write the pairing between a Radon measure $\mu $ and a test function $\phi $ as $\langle \mu ,\phi \rangle .$ Conversely, as shown in a theorem by Schwartz (similar to the Riesz representation theorem), every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure.
Test functions as distributions
The test functions are themselves locally integrable, and so define distributions. The space of test functions $C_{c}^{\infty }(U)$ is sequentially dense in ${\mathcal {D}}^{\prime }(U)$ with respect to the strong topology on ${\mathcal {D}}^{\prime }(U).$[28] This means that for any $T\in {\mathcal {D}}^{\prime }(U),$ there is a sequence of test functions, $(\phi _{i})_{i=1}^{\infty },$ that converges to $T\in {\mathcal {D}}^{\prime }(U)$ (in its strong dual topology) when considered as a sequence of distributions. Or equivalently,
$\langle \phi _{i},\psi \rangle \to \langle T,\psi \rangle \qquad {\text{ for all }}\psi \in {\mathcal {D}}(U).$
Furthermore, $C_{c}^{\infty }(U)$ is also sequentially dense in the strong dual space of $C^{\infty }(U).$[28]
Distributions with compact support
The inclusion map $\operatorname {In} :C_{c}^{\infty }(U)\to C^{\infty }(U)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^{t}\operatorname {In} :\left(C^{\infty }(U)\right)_{b}^{\prime }\to {\mathcal {D}}^{\prime }(U)=\left(C_{c}^{\infty }(U)\right)_{b}^{\prime }$ is also a continuous injection. Thus the image of the transpose, denoted by ${\mathcal {E}}^{\prime }(U),$ forms a space of distributions when it is endowed with the strong dual topology of $\left(C^{\infty }(U)\right)_{b}^{\prime }$ (transferred to it via the transpose map ${}^{t}\operatorname {In} :\left(C^{\infty }(U)\right)_{b}^{\prime }\to {\mathcal {E}}^{\prime }(U),$ so the topology of ${\mathcal {E}}^{\prime }(U)$ is finer than the subspace topology that this set inherits from ${\mathcal {D}}^{\prime }(U)$).[35]
The elements of ${\mathcal {E}}^{\prime }(U)=\left(C^{\infty }(U)\right)_{b}^{\prime }$ can be identified as the space of distributions with compact support.[35] Explicitly, if T is a distribution on U then the following are equivalent,
• $T\in {\mathcal {E}}^{\prime }(U)$;
• the support of T is compact;
• the restriction of $T$ to $C_{c}^{\infty }(U),$ when that space is equipped with the subspace topology inherited from $C^{\infty }(U)$ (a coarser topology than the canonical LF topology), is continuous;[35]
• there is a compact subset K of U such that for every test function $\phi $ whose support is completely outside of K, we have $T(\phi )=0.$
Compactly supported distributions define continuous linear functionals on the space $C^{\infty }(U)$; recall that the topology on $C^{\infty }(U)$ is defined such that a sequence of test functions $\phi _{k}$ converges to 0 if and only if all derivatives of $\phi _{k}$ converge uniformly to 0 on every compact subset of U. Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support. Thus compactly supported distributions can be identified with those distributions that can be extended from $C_{c}^{\infty }(U)$ to $C^{\infty }(U).$
Distributions of finite order
Let $k\in \mathbb {N} .$ The inclusion map $\operatorname {In} :C_{c}^{\infty }(U)\to C_{c}^{k}(U)$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^{t}\operatorname {In} :\left(C_{c}^{k}(U)\right)_{b}^{\prime }\to {\mathcal {D}}^{\prime }(U)=\left(C_{c}^{\infty }(U)\right)_{b}^{\prime }$ is also a continuous injection. Consequently, the image of ${}^{t}\operatorname {In} ,$ denoted by ${\mathcal {D}}'^{k}(U),$ forms a space of distributions when it is endowed with the strong dual topology of $\left(C_{c}^{k}(U)\right)_{b}^{\prime }$ (transferred to it via the transpose map ${}^{t}\operatorname {In} :\left(C_{c}^{k}(U)\right)_{b}^{\prime }\to {\mathcal {D}}'^{k}(U),$ so ${\mathcal {D}}'^{k}(U)$'s topology is finer than the subspace topology that this set inherits from ${\mathcal {D}}^{\prime }(U)$). The elements of ${\mathcal {D}}'^{k}(U)$ are the distributions of order $\,\leq k.$[36] The distributions of order $\,\leq 0,$ which are also called distributions of order $0,$ are exactly the distributions that are Radon measures (described above).
For $0\neq k\in \mathbb {N} ,$ a distribution of order $k$ is a distribution of order $\,\leq k$ that is not a distribution of order $\,\leq k-1.$[36]
A distribution is said to be of finite order if there is some integer $k$ such that it is a distribution of order $\,\leq k,$ and the set of distributions of finite order is denoted by ${\mathcal {D}}'^{F}(U).$ Note that if $k\leq l$ then ${\mathcal {D}}'^{k}(U)\subseteq {\mathcal {D}}'^{l}(U),$ so that ${\mathcal {D}}'^{F}(U)$ is a vector subspace of ${\mathcal {D}}^{\prime }(U).$[36]
Structure of distributions of finite order
Every distribution with compact support in U is a distribution of finite order.[36] Indeed, every distribution in U is locally a distribution of finite order, in the following sense:[36] If V is an open and relatively compact subset of U and if $\rho _{VU}$ is the restriction mapping from U to V, then the image of ${\mathcal {D}}^{\prime }(U)$ under $\rho _{VU}$ is contained in ${\mathcal {D}}'^{F}(V).$
The following structure theorem shows that every distribution of finite order can be written as a sum of derivatives of Radon measures:
Theorem[36] — Suppose $T\in {\mathcal {D}}^{\prime }(U)$ has order $\,\leq k$ and let $I=\{p\in \mathbb {N} ^{n}:|p|\leq k\}.$ Given any open subset V of U containing the support of T, there is a family of Radon measures in U, $(\mu _{p})_{p\in I},$ such that for every $p\in I,\operatorname {supp} (\mu _{p})\subseteq V$ and
$T=\sum _{|p|\leq k}\partial ^{p}\mu _{p}.$
Example. (Distributions of infinite order) Let $U:=(0,\infty )$ and for every test function $f,$ let
$Sf:=\sum _{m=1}^{\infty }(\partial ^{m}f)\left({\frac {1}{m}}\right).$
Then S is a distribution of infinite order on U. Moreover, S cannot be extended to a distribution on $\mathbb {R} $; that is, there exists no distribution T on $\mathbb {R} $ such that the restriction of T to U is equal to S.[37]
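Although the defining sum is formally infinite, $S$ is well defined because each test function meets only finitely many of the points $1/m$: if $\operatorname {supp} f\subseteq [a,b]\subset (0,\infty )$ then $(\partial ^{m}f)\left({\tfrac {1}{m}}\right)=0$ whenever $1/m\notin [a,b].$ A small sketch (the helper name is hypothetical) that lists the contributing indices:

```python
import math

# Why S f = sum over m of (d^m f)(1/m) is a finite sum for each test
# function f on U = (0, infinity): if supp f is contained in [a, b] with
# a > 0, then (d^m f)(1/m) can be nonzero only when 1/m lies in [a, b],
# i.e. for at most finitely many m <= 1/a.
def contributing_terms(a, b):
    assert 0 < a <= b
    return [m for m in range(1, math.ceil(1.0 / a) + 1) if a <= 1.0 / m <= b]

print(contributing_terms(0.3, 0.7))   # supp f in [0.3, 0.7] -> [2, 3]
```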
Tempered distributions and Fourier transform
Defined below are the tempered distributions, which form a subspace of ${\mathcal {D}}^{\prime }(\mathbb {R} ^{n}),$ the space of distributions on $\mathbb {R} ^{n}.$ This is a proper subspace: while every tempered distribution is a distribution and an element of ${\mathcal {D}}^{\prime }(\mathbb {R} ^{n}),$ the converse is not true. Tempered distributions are useful if one studies the Fourier transform since all tempered distributions have a Fourier transform, which is not true for an arbitrary distribution in ${\mathcal {D}}^{\prime }(\mathbb {R} ^{n}).$
Schwartz space
The Schwartz space, ${\mathcal {S}}(\mathbb {R} ^{n}),$ is the space of all smooth functions that are rapidly decreasing at infinity along with all partial derivatives. Thus $\phi :\mathbb {R} ^{n}\to \mathbb {R} $ is in the Schwartz space provided that any derivative of $\phi ,$ multiplied with any power of $|x|,$ converges to 0 as $|x|\to \infty .$ These functions form a complete TVS with a suitably defined family of seminorms. More precisely, for any multi-indices $\alpha $ and $\beta $ define:
$p_{\alpha ,\beta }(\phi )~=~\sup _{x\in \mathbb {R} ^{n}}\left|x^{\alpha }\partial ^{\beta }\phi (x)\right|.$
Then $\phi $ is in the Schwartz space if all the values satisfy:
$p_{\alpha ,\beta }(\phi )<\infty .$
The family of seminorms $p_{\alpha ,\beta }$ defines a locally convex topology on the Schwartz space. For $n=1,$ the seminorms are, in fact, norms on the Schwartz space. One can also use the following family of seminorms to define the topology:[38]
$|f|_{m,k}=\sup _{|p|\leq m}\left(\sup _{x\in \mathbb {R} ^{n}}\left\{(1+|x|)^{k}\left|(\partial ^{p}f)(x)\right|\right\}\right),\qquad k,m\in \mathbb {N} .$
Otherwise, one can define a norm on ${\mathcal {S}}(\mathbb {R} ^{n})$ via
$\|\phi \|_{k}~=~\max _{|\alpha |+|\beta |\leq k}\sup _{x\in \mathbb {R} ^{n}}\left|x^{\alpha }\partial ^{\beta }\phi (x)\right|,\qquad k\geq 1.$
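For a concrete Schwartz function such as the Gaussian $\phi (x)=e^{-x^{2}}$ on $\mathbb {R} ,$ the seminorms with $\beta =0$ can be evaluated directly; elementary calculus gives $p_{\alpha ,0}(\phi )=(\alpha /2)^{\alpha /2}e^{-\alpha /2},$ which the following grid-based sketch (illustrative only) reproduces:

```python
import math

# Sketch: seminorm p_{alpha,0}(phi) = sup_x |x^alpha * phi(x)| for the
# Gaussian phi(x) = exp(-x^2), approximated on a grid.  Calculus gives
# the exact value (alpha/2)^{alpha/2} * e^{-alpha/2}, so every p_{alpha,0}
# is finite, consistent with phi being a Schwartz function.
def p_alpha0(alpha, n=200_001, R=6.0):
    grid = (-R + 2 * R * i / (n - 1) for i in range(n))
    return max(abs(x ** alpha * math.exp(-x * x)) for x in grid)

for alpha in range(5):
    exact = (alpha / 2) ** (alpha / 2) * math.exp(-alpha / 2)
    print(alpha, p_alpha0(alpha), exact)
```

The cutoff $R=6$ is harmless here because $x^{\alpha }e^{-x^{2}}$ is negligible beyond it for the small $\alpha $ shown.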
The Schwartz space is a Fréchet space (i.e. a complete metrizable locally convex space). Because the Fourier transform changes $\partial ^{\alpha }$ into multiplication by $x^{\alpha }$ and vice versa, this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function.
A sequence $\{f_{i}\}$ in ${\mathcal {S}}(\mathbb {R} ^{n})$ converges to 0 in ${\mathcal {S}}(\mathbb {R} ^{n})$ if and only if the functions $(1+|x|)^{k}(\partial ^{p}f_{i})(x)$ converge to 0 uniformly on the whole of $\mathbb {R} ^{n}$ for every $k\in \mathbb {N} $ and every multi-index $p,$ which implies that such a sequence must converge to zero in $C^{\infty }(\mathbb {R} ^{n}).$[38]
${\mathcal {D}}(\mathbb {R} ^{n})$ is dense in ${\mathcal {S}}(\mathbb {R} ^{n}).$ The subset of all analytic Schwartz functions is dense in ${\mathcal {S}}(\mathbb {R} ^{n})$ as well.[39]
The Schwartz space is nuclear, and the tensor product of two maps induces a canonical surjective TVS-isomorphism
${\mathcal {S}}(\mathbb {R} ^{m})\ {\widehat {\otimes }}\ {\mathcal {S}}(\mathbb {R} ^{n})\to {\mathcal {S}}(\mathbb {R} ^{m+n}),$
where ${\widehat {\otimes }}$ represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product).[40]
Tempered distributions
The inclusion map $\operatorname {In} :{\mathcal {D}}(\mathbb {R} ^{n})\to {\mathcal {S}}(\mathbb {R} ^{n})$ is a continuous injection whose image is dense in its codomain, so the transpose ${}^{t}\operatorname {In} :({\mathcal {S}}(\mathbb {R} ^{n}))'_{b}\to {\mathcal {D}}^{\prime }(\mathbb {R} ^{n})$ is also a continuous injection. Thus, the image of the transpose map, denoted by ${\mathcal {S}}^{\prime }(\mathbb {R} ^{n}),$ forms a space of distributions when it is endowed with the strong dual topology of $({\mathcal {S}}(\mathbb {R} ^{n}))'_{b}$ (transferred to it via the transpose map ${}^{t}\operatorname {In} ,$ so the topology of ${\mathcal {S}}^{\prime }(\mathbb {R} ^{n})$ is finer than the subspace topology that this set inherits from ${\mathcal {D}}^{\prime }(\mathbb {R} ^{n})$).
The space ${\mathcal {S}}^{\prime }(\mathbb {R} ^{n})$ is called the space of tempered distributions. It is the continuous dual of the Schwartz space. Equivalently, a distribution T is a tempered distribution if and only if
$\left({\text{ for all }}\alpha ,\beta \in \mathbb {N} ^{n}:\lim _{m\to \infty }p_{\alpha ,\beta }(\phi _{m})=0\right)\Longrightarrow \lim _{m\to \infty }T(\phi _{m})=0.$
The derivative of a tempered distribution is again a tempered distribution. Tempered distributions generalize the bounded (or slow-growing) locally integrable functions; all distributions with compact support and all square-integrable functions are tempered distributions. More generally, all functions that are products of polynomials with elements of Lp space $L^{p}(\mathbb {R} ^{n})$ for $p\geq 1$ are tempered distributions.
The tempered distributions can also be characterized as slowly growing, meaning that each derivative of T grows at most as fast as some polynomial. This characterization is dual to the rapidly falling behaviour of the derivatives of a function in the Schwartz space, where each derivative of $\phi $ decays faster than every inverse power of $|x|.$ An example of a rapidly falling function is $|x|^{n}\exp(-\lambda |x|^{\beta })$ for any positive $n,\lambda ,\beta .$
Fourier transform
To study the Fourier transform, it is best to consider complex-valued test functions and complex-linear distributions. The ordinary continuous Fourier transform $F:{\mathcal {S}}(\mathbb {R} ^{n})\to {\mathcal {S}}(\mathbb {R} ^{n})$ is a TVS-automorphism of the Schwartz space, and the Fourier transform is defined to be its transpose ${}^{t}F:{\mathcal {S}}^{\prime }(\mathbb {R} ^{n})\to {\mathcal {S}}^{\prime }(\mathbb {R} ^{n}),$ which (abusing notation) will again be denoted by F. So the Fourier transform of the tempered distribution T is defined by $(FT)(\psi )=T(F\psi )$ for every Schwartz function $\psi .$ $FT$ is thus again a tempered distribution. The Fourier transform is a TVS isomorphism from the space of tempered distributions onto itself. This operation is compatible with differentiation in the sense that
$F{\dfrac {dT}{dx}}=ixFT$
and also with convolution: if T is a tempered distribution and $\psi $ is a slowly increasing smooth function on $\mathbb {R} ^{n},$ then $\psi T$ is again a tempered distribution and
$F(\psi T)=F\psi *FT$
is the convolution of $FT$ and $F\psi $. In particular, the Fourier transform of the constant function equal to 1 is the $\delta $ distribution.
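At the level of Schwartz functions, the compatibility with differentiation is the classical integration-by-parts identity $(F\phi ')(\xi )=i\xi (F\phi )(\xi );$ the distributional statement is then obtained by transposition. A numerical sketch (illustrative; NumPy and the convention $(F\phi )(\xi )=\int \phi (x)e^{-i\xi x}\,dx$ are assumed):

```python
import numpy as np

# Illustrative sketch: check (F phi')(xi) = i * xi * (F phi)(xi) for the
# Schwartz function phi(x) = exp(-x^2/2); the distributional identity
# F(dT/dx) = i x F(T) follows by transposing this one.
x = np.linspace(-30.0, 30.0, 600001)
h = x[1] - x[0]
phi = np.exp(-x**2 / 2)
dphi = -x * np.exp(-x**2 / 2)              # phi'(x), computed analytically

def ft(f, xi):
    return h * np.sum(f * np.exp(-1j * xi * x))

for xi in (0.5, 1.7):
    assert abs(ft(dphi, xi) - 1j * xi * ft(phi, xi)) < 1e-8
```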
Expressing tempered distributions as sums of derivatives
If $T\in {\mathcal {S}}^{\prime }(\mathbb {R} ^{n})$ is a tempered distribution, then there exists a constant $C>0,$ and positive integers M and N such that for all Schwartz functions $\phi \in {\mathcal {S}}(\mathbb {R} ^{n})$
$|\langle T,\phi \rangle |\leq C\sum \nolimits _{|\alpha |\leq N,|\beta |\leq M}\sup _{x\in \mathbb {R} ^{n}}\left|x^{\alpha }\partial ^{\beta }\phi (x)\right|=C\sum \nolimits _{|\alpha |\leq N,|\beta |\leq M}p_{\alpha ,\beta }(\phi ).$
This estimate along with some techniques from functional analysis can be used to show that there is a continuous slowly increasing function F and a multi-index $\alpha $ such that
$T=\partial ^{\alpha }F.$
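As a concrete one-dimensional illustration (the choice of $F$ here is ours, not from the text): the Dirac delta is the second distributional derivative of the continuous, slowly increasing ramp $F(x)=\max(x,0),$ since two integrations by parts give $\langle F'',\phi \rangle =\int _{0}^{\infty }x\,\phi ''(x)\,dx=\phi (0).$ This can be checked numerically:

```python
import numpy as np

# Illustrative sketch (one dimension): the Dirac delta equals the second
# distributional derivative of the slowly increasing continuous ramp
# F(x) = max(x, 0), because <F'', phi> = <F, phi''> = \int_0^oo x phi''(x) dx
# = phi(0) after two integrations by parts.
# Checked for the Schwartz function phi(x) = exp(-x^2), where phi(0) = 1.
x = np.linspace(0.0, 20.0, 200001)
h = x[1] - x[0]
phi_dd = (4 * x**2 - 2) * np.exp(-x**2)         # phi''(x), analytically
integrand = x * phi_dd                          # F(x) * phi''(x) on [0, oo)
pairing = h * (np.sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
assert abs(pairing - 1.0) < 1e-6                # pairing equals phi(0) = 1
```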
Restriction of distributions to compact sets
If $T\in {\mathcal {D}}^{\prime }(\mathbb {R} ^{n}),$ then for any compact set $K\subseteq \mathbb {R} ^{n},$ there exists a continuous function F compactly supported in $\mathbb {R} ^{n}$ (possibly on a larger set than K itself) and a multi-index $\alpha $ such that $T=\partial ^{\alpha }F$ on $C_{c}^{\infty }(K).$
Tensor product of distributions
Let $U\subseteq \mathbb {R} ^{m}$ and $V\subseteq \mathbb {R} ^{n}$ be open sets. Assume all vector spaces to be over the field $\mathbb {F} ,$ where $\mathbb {F} =\mathbb {R} $ or $\mathbb {C} .$ For $f\in {\mathcal {D}}(U\times V)$ define for every $u\in U$ and every $v\in V$ the following functions:
${\begin{alignedat}{9}f_{u}:\,&V&&\to \,&&\mathbb {F} &&\quad {\text{ and }}\quad &&f^{v}:\,&&U&&\to \,&&\mathbb {F} \\&y&&\mapsto \,&&f(u,y)&&&&&&x&&\mapsto \,&&f(x,v)\\\end{alignedat}}$
Given $S\in {\mathcal {D}}^{\prime }(U)$ and $T\in {\mathcal {D}}^{\prime }(V),$ define the following functions:
${\begin{alignedat}{9}\langle S,f^{\bullet }\rangle :\,&V&&\to \,&&\mathbb {F} &&\quad {\text{ and }}\quad &&\langle T,f_{\bullet }\rangle :\,&&U&&\to \,&&\mathbb {F} \\&v&&\mapsto \,&&\langle S,f^{v}\rangle &&&&&&u&&\mapsto \,&&\langle T,f_{u}\rangle \\\end{alignedat}}$
where $\langle T,f_{\bullet }\rangle \in {\mathcal {D}}(U)$ and $\langle S,f^{\bullet }\rangle \in {\mathcal {D}}(V).$ These definitions associate every $S\in {\mathcal {D}}'(U)$ and $T\in {\mathcal {D}}'(V)$ with the (respective) continuous linear map:
${\begin{alignedat}{9}\,&{\mathcal {D}}(U\times V)&&\to \,&&{\mathcal {D}}(V)&&\quad {\text{ and }}\quad &&\,&&{\mathcal {D}}(U\times V)&&\to \,&&{\mathcal {D}}(U)\\&f&&\mapsto \,&&\langle S,f^{\bullet }\rangle &&&&&&f&&\mapsto \,&&\langle T,f_{\bullet }\rangle \\\end{alignedat}}$
Moreover, if either $S$ (resp. $T$) has compact support then it also induces a continuous linear map of $C^{\infty }(U\times V)\to C^{\infty }(V)$ (resp. $C^{\infty }(U\times V)\to C^{\infty }(U)$).[41]
Fubini's theorem for distributions[41] — Let $S\in {\mathcal {D}}'(U)$ and $T\in {\mathcal {D}}'(V).$ If $f\in {\mathcal {D}}(U\times V)$ then
$\langle S,\langle T,f_{\bullet }\rangle \rangle =\langle T,\langle S,f^{\bullet }\rangle \rangle .$
The tensor product of $S\in {\mathcal {D}}'(U)$ and $T\in {\mathcal {D}}'(V),$ denoted by $S\otimes T$ or $T\otimes S,$ is the distribution in $U\times V$ defined by:[41]
$(S\otimes T)(f):=\langle S,\langle T,f_{\bullet }\rangle \rangle =\langle T,\langle S,f^{\bullet }\rangle \rangle .$
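When $S$ and $T$ are given by integrable densities $g$ and $h,$ the tensor product acts by $(S\otimes T)(f)=\iint g(x)h(y)f(x,y)\,dx\,dy,$ and Fubini's theorem for distributions asserts that the two iterated pairings agree. The sketch below is illustrative (the densities and test function are invented, and NumPy is assumed):

```python
import numpy as np

# Illustrative sketch: for distributions given by densities S = g(x) dx and
# T = h(y) dy, the tensor product acts by (S x T)(f) = \int\int g h f dx dy,
# and the two iterated pairings <S, <T, f_.>> and <T, <S, f^.>> coincide.
x = np.linspace(-8.0, 8.0, 2001)
y = np.linspace(-8.0, 8.0, 2001)
hx, hy = x[1] - x[0], y[1] - y[0]
g = np.exp(-x**2)                           # density of S on U
h = np.exp(-2 * y**2)                       # density of T on V
f = np.exp(-(x[:, None] - y[None, :])**2)   # test function f(x, y)

inner_T = hy * (f @ h)                      # u |-> <T, f_u>
order1 = hx * np.dot(g, inner_T)            # <S, <T, f_.>>
inner_S = hx * (g @ f)                      # v |-> <S, f^v>
order2 = hy * np.dot(inner_S, h)            # <T, <S, f^.>>
assert abs(order1 - order2) < 1e-9          # Fubini for distributions
```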
Schwartz kernel theorem
The tensor product defines a bilinear map
${\begin{alignedat}{4}\,&{\mathcal {D}}^{\prime }(U)\times {\mathcal {D}}^{\prime }(V)&&\to \,&&{\mathcal {D}}^{\prime }(U\times V)\\&~~~~~~~~(S,T)&&\mapsto \,&&S\otimes T\\\end{alignedat}}$
The span of the range of this map is a dense subspace of its codomain. Furthermore, $\operatorname {supp} (S\otimes T)=\operatorname {supp} (S)\times \operatorname {supp} (T).$[41] Moreover, $(S,T)\mapsto S\otimes T$ induces continuous bilinear maps:
${\begin{alignedat}{8}&{\mathcal {E}}^{\prime }(U)&&\times {\mathcal {E}}^{\prime }(V)&&\to {\mathcal {E}}^{\prime }(U\times V)\\&{\mathcal {S}}^{\prime }(\mathbb {R} ^{m})&&\times {\mathcal {S}}^{\prime }(\mathbb {R} ^{n})&&\to {\mathcal {S}}^{\prime }(\mathbb {R} ^{m+n})\\\end{alignedat}}$
where ${\mathcal {E}}'$ denotes the space of distributions with compact support and ${\mathcal {S}}$ is the Schwartz space of rapidly decreasing functions.[14]
Schwartz kernel theorem[40] — Each of the canonical maps below (defined in the natural way) is a TVS isomorphism:
${\begin{alignedat}{8}&{\mathcal {S}}^{\prime }(\mathbb {R} ^{m+n})~&&~\cong ~&&~{\mathcal {S}}^{\prime }(\mathbb {R} ^{m})\ &&{\widehat {\otimes }}\ {\mathcal {S}}^{\prime }(\mathbb {R} ^{n})~&&~\cong ~&&~L_{b}({\mathcal {S}}(\mathbb {R} ^{m});&&\;{\mathcal {S}}^{\prime }(\mathbb {R} ^{n}))\\&{\mathcal {E}}^{\prime }(U\times V)~&&~\cong ~&&~{\mathcal {E}}^{\prime }(U)\ &&{\widehat {\otimes }}\ {\mathcal {E}}^{\prime }(V)~&&~\cong ~&&~L_{b}(C^{\infty }(U);&&\;{\mathcal {E}}^{\prime }(V))\\&{\mathcal {D}}^{\prime }(U\times V)~&&~\cong ~&&~{\mathcal {D}}^{\prime }(U)\ &&{\widehat {\otimes }}\ {\mathcal {D}}^{\prime }(V)~&&~\cong ~&&~L_{b}({\mathcal {D}}(U);&&\;{\mathcal {D}}^{\prime }(V))\\\end{alignedat}}$
Here ${\widehat {\otimes }}$ represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product, since these spaces are nuclear) and $L_{b}(X;Y)$ has the topology of uniform convergence on bounded subsets.
This result does not hold for Hilbert spaces such as $L^{2}$ and its dual space.[42] Why does such a result hold for the space of distributions and test functions but not for other "nice" spaces like the Hilbert space $L^{2}$? This question led Alexander Grothendieck to discover nuclear spaces, nuclear maps, and the injective tensor product. He ultimately showed that it is precisely because ${\mathcal {D}}(U)$ is a nuclear space that the Schwartz kernel theorem holds. Like Hilbert spaces, nuclear spaces may be thought of as generalizations of finite-dimensional Euclidean space.
Using holomorphic functions as test functions
The success of the theory led to investigation of the idea of hyperfunction, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example Feynman integrals.
See also
• Colombeau algebra – commutative associative differential algebra of generalized functions into which smooth functions (but not arbitrary continuous ones) embed as a subalgebra and distributions embed as a subspace
• Current (mathematics) – Distributions on spaces of differential forms
• Distribution (number theory) – function on finite sets which is analogous to an integral
• Distribution on a linear algebraic group – Linear function satisfying a support condition
• Gelfand triple – Construction linking the study of "bound" and continuous eigenvalues in functional analysis
• Generalized function – Objects extending the notion of functions
• Homogeneous distribution
• Hyperfunction – Type of generalized function
• Laplacian of the indicator – Limit of sequence of smooth functions
• Limit of a distribution
• Linear form – Linear map from a vector space to its field of scalars
• Malgrange–Ehrenpreis theorem
• Pseudodifferential operator – Type of differential operator
• Riesz representation theorem – Theorem about the dual of a Hilbert space
• Vague topology
• Weak solution – Mathematical solution
Notes
1. The Schwartz space consists of smooth rapidly decreasing test functions, where "rapidly decreasing" means that the function decreases faster than any polynomial increases as points in its domain move away from the origin.
2. Except for the trivial (i.e. identically $0$) map, which of course is always analytic.
3. Note that $i$ being an integer implies $i\neq \infty .$ This is sometimes expressed as $0\leq i<k+1.$ Since $\infty +1=\infty ,$ the inequality "$0\leq i<k+1$" means: $0\leq i<\infty $ if $k=\infty ,$ while if $k\neq \infty $ then it means $0\leq i\leq k.$
4. The image of the compact set $K$ under a continuous $\mathbb {R} $-valued map (for example, $x\mapsto \left|\partial ^{p}f(x)\right|$ for $x\in U$) is itself a compact, and thus bounded, subset of $\mathbb {R} .$ If $K\neq \varnothing $ then this implies that each of the functions defined above is $\mathbb {R} $-valued (that is, none of the supremums above are ever equal to $\infty $).
5. If we take $\mathbb {K} $ to be the set of all compact subsets of U then we can use the universal property of direct limits to conclude that the inclusion $\operatorname {In} _{K}^{U}:C^{k}(K)\to C_{c}^{k}(U)$ is continuous, and even a topological embedding, for every compact subset $K\subseteq U.$ If, however, we take $\mathbb {K} $ to be the set of closures of some countable increasing sequence of relatively compact open subsets of U having all of the properties mentioned earlier in this article then we immediately deduce that $C_{c}^{k}(U)$ is a Hausdorff locally convex strict LF-space (and even a strict LB-space when $k\neq \infty $). All of these facts can also be proved directly without using direct systems (although with more work).
6. For any TVS X (metrizable or otherwise), the notion of completeness depends entirely on a certain so-called "canonical uniformity" that is defined using only the subtraction operation (see the article Complete topological vector space for more details). In this way, the notion of a complete TVS does not require the existence of any metric. However, if the TVS X is metrizable and if $d$ is any translation-invariant metric on X that defines its topology, then X is complete as a TVS (i.e. it is a complete uniform space under its canonical uniformity) if and only if $(X,d)$ is a complete metric space. So if a TVS X happens to have a topology that can be defined by such a metric d then d may be used to deduce the completeness of X but the existence of such a metric is not necessary for defining completeness and it is even possible to deduce that a metrizable TVS is complete without ever even considering a metric (e.g. since the Cartesian product of any collection of complete TVSs is again a complete TVS, we can immediately deduce that the TVS $\mathbb {R} ^{\mathbb {N} },$ which happens to be metrizable, is a complete TVS; note that there was no need to consider any metric on $\mathbb {R} ^{\mathbb {N} }$).
7. One reason for giving $C_{c}^{\infty }(U)$ the canonical LF topology is that it is with this topology that $C_{c}^{\infty }(U)$ and its continuous dual space both become nuclear spaces, which have many nice properties and which may be viewed as a generalization of finite-dimensional spaces (for comparison, normed spaces are another generalization of finite-dimensional spaces that have many "nice" properties). In more detail, there are two classes of topological vector spaces (TVSs) that are particularly similar to finite-dimensional Euclidean spaces: the Banach spaces (especially Hilbert spaces) and the nuclear Montel spaces. Montel spaces are a class of TVSs in which every closed and bounded subset is compact (this generalizes the Heine–Borel theorem), which is a property that no infinite-dimensional Banach space can have; that is, no infinite-dimensional TVS can be both a Banach space and a Montel space. Also, no infinite-dimensional TVS can be both a Banach space and a nuclear space. All finite dimensional Euclidean spaces are nuclear Montel Hilbert spaces but once one enters infinite-dimensional space then these two classes separate. Nuclear spaces in particular have many of the "nice" properties of finite-dimensional TVSs (e.g. the Schwartz kernel theorem) that infinite-dimensional Banach spaces lack (for more details, see the properties, sufficient conditions, and characterizations given in the article Nuclear space). It is in this sense that nuclear spaces are an "alternative generalization" of finite-dimensional spaces. Also, as a general rule, in practice most "naturally occurring" TVSs are usually either Banach spaces or nuclear spaces. Typically, most TVSs that are associated with smoothness (i.e.
infinite differentiability, such as $C_{c}^{\infty }(U)$ and $C^{\infty }(U)$) end up being nuclear TVSs while TVSs associated with finite continuous differentiability (such as $C^{k}(K)$ with K compact and $k\neq \infty $) often end up being non-nuclear spaces, such as Banach spaces.
8. Even though the topology of $C_{c}^{\infty }(U)$ is not metrizable, a linear functional on $C_{c}^{\infty }(U)$ is continuous if and only if it is sequentially continuous.
9. A null sequence is a sequence that converges to the origin.
10. A sequence $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }$ is said to be Mackey convergent to 0 in $X,$ if there exists a divergent sequence $r_{\bullet }=\left(r_{i}\right)_{i=1}^{\infty }\to \infty $ of positive real numbers such that $\left(r_{i}x_{i}\right)_{i=1}^{\infty }$ is a bounded set in $X.$
11. If ${\mathcal {P}}$ is also a directed set under the usual function comparison then we can take the finite collection to consist of a single element.
12. In functional analysis, the strong dual topology is often the "standard" or "default" topology placed on the continuous dual space $X',$ where if X is a normed space then this strong dual topology is the same as the usual norm-induced topology on $X'.$
13. Technically, the topology must be coarser than the strong dual topology and also simultaneously be finer than the weak* topology.
14. Recall that a linear map is bounded if and only if it maps null sequences to bounded sequences.
15. For more information on such class of functions, see the entry on locally integrable functions.
References
1. Trèves 2006, pp. 222–223.
2. See for example Grubb 2009, p. 14.
3. Trèves 2006, pp. 85–89.
4. Trèves 2006, pp. 142–149.
5. Trèves 2006, pp. 356–358.
6. Rudin 1991, pp. 149–181.
7. Trèves 2006, pp. 131–134.
8. Trèves 2006, pp. 245–247.
9. Trèves 2006, pp. 247–252.
10. Trèves 2006, pp. 126–134.
11. Trèves 2006, pp. 136–148.
12. Rudin 1991, pp. 149–155.
13. Narici & Beckenstein 2011, pp. 446–447.
14. Trèves 2006, p. 423.
15. See for example Schaefer & Wolff 1999, p. 173.
16. Gabriyelyan, Saak "Topological properties of Strict LF-spaces and strong duals of Montel Strict LF-spaces" (2017)
17. Gabriyelyan, S.S.; Kakol, J.; Leiderman, A. "The strong Pitkeev property for topological groups and topological vector spaces"
18. Trèves 2006, pp. 195–201.
19. Narici & Beckenstein 2011, p. 435.
20. Trèves 2006, pp. 526–534.
21. Trèves 2006, p. 357.
22. "Topological vector space". Encyclopedia of Mathematics. Retrieved September 6, 2020. It is a Montel space, hence paracompact, and so normal.
23. T. Shirai, Sur les Topologies des Espaces de L. Schwartz, Proc. Japan Acad. 35 (1959), 31-36.
24. According to Gel'fand & Shilov 1966–1968, v. 1, §1.2
25. Trèves 2006, pp. 351–359.
26. Narici & Beckenstein 2011, pp. 441–457.
27. Trèves 2006, pp. 150–160.
28. Trèves 2006, pp. 300–304.
29. Strichartz 1994, §2.3; Trèves 2006.
30. Trèves 2006, pp. 131–135.
31. Trèves 2006, pp. 240–245.
32. Trèves 2006, pp. 240–252.
33. Trèves 2006, pp. 262–264.
34. Trèves 2006, p. 218.
35. Trèves 2006, pp. 255–257.
36. Trèves 2006, pp. 258–264.
37. Rudin 1991, pp. 177–181.
38. Trèves 2006, pp. 92–94.
39. Trèves 2006, p. 160.
40. Trèves 2006, p. 531.
41. Trèves 2006, pp. 416–419.
42. Trèves 2006, pp. 509–510.
Bibliography
• Barros-Neto, José (1973). An Introduction to the Theory of Distributions. New York, NY: Dekker.
• Benedetto, J.J. (1997), Harmonic Analysis and Applications, CRC Press.
• Folland, G.B. (1989). Harmonic Analysis in Phase Space. Princeton, NJ: Princeton University Press.
• Friedlander, F.G.; Joshi, M.S. (1998). Introduction to the Theory of Distributions. Cambridge, UK: Cambridge University Press.
• Gårding, L. (1997), Some Points of Analysis and their History, American Mathematical Society.
• Gel'fand, I.M.; Shilov, G.E. (1966–1968), Generalized functions, vol. 1–5, Academic Press.
• Grubb, G. (2009), Distributions and Operators, Springer.
• Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 3-540-12104-8, MR 0717035.
• Horváth, John (1966). Topological Vector Spaces and Distributions. Addison-Wesley series in mathematics. Vol. 1. Reading, MA: Addison-Wesley Publishing Company. ISBN 978-0201029857.
• Kolmogorov, Andrey; Fomin, Sergei V. (1957). Elements of the Theory of Functions and Functional Analysis. Dover Books on Mathematics. New York: Dover Books. ISBN 978-1-61427-304-2. OCLC 912495626.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Petersen, Bent E. (1983). Introduction to the Fourier Transform and Pseudo-Differential Operators. Boston, MA: Pitman Publishing.
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Schwartz, Laurent (1954), "Sur l'impossibilité de la multiplication des distributions", C. R. Acad. Sci. Paris, 239: 847–848.
• Schwartz, Laurent (1951), Théorie des distributions, vol. 1–2, Hermann.
• Sobolev, S.L. (1936), "Méthode nouvelle à résoudre le problème de Cauchy pour les équations linéaires hyperboliques normales", Mat. Sbornik, 1: 39–72.
• Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 0-691-08078-X.
• Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 0-8493-8273-4.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
• Woodward, P.M. (1953). Probability and Information Theory with Applications to Radar. Oxford, UK: Pergamon Press.
Further reading
• M. J. Lighthill (1959). Introduction to Fourier Analysis and Generalised Functions. Cambridge University Press. ISBN 0-521-09128-4 (requires very little knowledge of analysis; defines distributions as limits of sequences of functions under integrals)
• V.S. Vladimirov (2002). Methods of the theory of generalized functions. Taylor & Francis. ISBN 0-415-27356-0
• Vladimirov, V.S. (2001) [1994], "Generalized function", Encyclopedia of Mathematics, EMS Press.
• Vladimirov, V.S. (2001) [1994], "Generalized functions, space of", Encyclopedia of Mathematics, EMS Press.
• Vladimirov, V.S. (2001) [1994], "Generalized function, derivative of a", Encyclopedia of Mathematics, EMS Press.
• Vladimirov, V.S. (2001) [1994], "Generalized functions, product of", Encyclopedia of Mathematics, EMS Press.
• Oberguggenberger, Michael (2001) [1994], "Generalized function algebras", Encyclopedia of Mathematics, EMS Press.
Spaghetti plot
A spaghetti plot (also known as a spaghetti chart, spaghetti diagram, or spaghetti model) is a method of viewing data to visualize possible flows through systems. Flows depicted in this manner appear like noodles, hence the coining of this term.[1] The method was first used to track routing through factories, since visualizing flow in this manner can reveal and reduce inefficiency within a system. For animal populations and for weather buoys drifting through the ocean, spaghetti plots are drawn to study distribution and migration patterns. Within meteorology, these diagrams can help determine confidence in a specific weather forecast, as well as the positions and intensities of high and low pressure systems; they are composed of deterministic forecasts from atmospheric models or their various ensemble members. Within medicine, they can illustrate the effects of drugs on patients during drug trials.
Applications
Biology
Spaghetti diagrams have been used to study why butterflies are found where they are, and to see how topographic features (such as mountain ranges) limit their migration and range.[2] Within mammal distributions across central North America, these plots have correlated their edges to regions which were glaciated within the previous ice age, as well as certain types of vegetation.[3]
Meteorology
Within meteorology, spaghetti diagrams are normally drawn from ensemble forecasts. A meteorological variable, e.g. pressure, temperature, or precipitation amount, is drawn on a chart for a number of slightly different model runs from an ensemble. The model can then be stepped forward in time and the results compared to gauge the amount of uncertainty in the forecast. If there is good agreement and the contours follow a recognizable pattern through the sequence, then the confidence in the forecast can be high. Conversely, if the pattern is chaotic, i.e., resembling a plate of spaghetti, then confidence will be low. Ensemble members will generally diverge over time, and spaghetti plots are a quick way to see when this happens.
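The growth of ensemble spread can be imitated with a toy chaotic system standing in for an atmospheric model (an invented illustration, not a real forecast system; NumPy is assumed):

```python
import numpy as np

# Toy illustration: ensemble members start from nearly identical initial
# states and are stepped forward with the chaotic logistic map.  Their
# spread -- what a spaghetti plot displays -- stays tiny at first and then
# grows rapidly until the members scatter over the whole attractor.
rng = np.random.default_rng(0)
members = 0.4 + 1e-8 * rng.standard_normal(20)    # 20 perturbed initial states
spread = []
for _ in range(60):
    spread.append(members.max() - members.min())
    members = 3.9 * members * (1.0 - members)     # step the "model" forward

# Close agreement early (high confidence), a plate of spaghetti later.
assert spread[0] < 1e-6 < spread[-1]
```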
Spaghetti plots can be a more favorable choice compared to the mean-spread ensemble in determining the intensity of a coming cyclone, anticyclone, or upper-level ridge or trough. Because ensemble forecasts naturally diverge as the days progress, the projected locations of meteorological features will spread further apart. A mean-spread diagram will take a mean of the calculated pressure from each spot on the map as calculated by each permutation in the ensemble, thus effectively smoothing out the projected low and making it appear broader in size but weaker in intensity than the ensemble's permutations had actually indicated. It can also depict two features instead of one if the ensemble clustering is around two different solutions.[4]
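The smoothing effect described above can be reproduced in a small invented example (all numbers are illustrative; NumPy is assumed): when nine ensemble members place a low at slightly different longitudes, the mean field is broader but weaker than any single member.

```python
import numpy as np

# Illustrative sketch (invented numbers): nine ensemble members place a
# 30 hPa low-pressure dip at slightly different longitudes; averaging them
# smears the low into a broader but weaker feature than any member predicts.
x = np.linspace(-10.0, 10.0, 2001)                 # longitude-like axis
centres = np.linspace(-2.0, 2.0, 9)                # member-by-member low centres
members = [1000.0 - 30.0 * np.exp(-(x - c)**2) for c in centres]
mean_field = np.mean(members, axis=0)

deepest_member = min(m.min() for m in members)     # intensity each member shows
mean_low = mean_field.min()                        # intensity of the mean field
assert mean_low > deepest_member                   # the mean low is weaker
```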
Various forecast models within tropical cyclone track forecasting can be plotted on a spaghetti diagram to show confidence in five-day track forecasts.[5] When track models diverge late in the forecast period, the plot takes on the shape of a squashed spider, and can be referred to as such in National Hurricane Center discussions.[6] Within the field of climatology and paleotempestology, spaghetti plots have been used to correlate ground temperature information derived from boreholes across central and eastern Canada.[7] As in other disciplines, spaghetti diagrams can be used to show the motion of objects, such as drifting weather buoys over time.[8]
Business
Spaghetti diagrams were first used to track routing through a factory.[9] Spaghetti plots are a simple tool to visualize movement and transportation.[10] Analyzing flows through systems can determine where time and energy are wasted and identify where streamlining would be beneficial.[1] This is true not only of physical travel through a physical place, but also of more abstract processes, such as the processing of a mortgage loan application.[11]
Medicine
Spaghetti plots can be used to track the results of drug trials amongst a number of patients on one individual graph to determine their benefit.[12] They have also been used to correlate progesterone levels to early pregnancy loss.[13] The half-life of drugs within people's blood plasma, as well as discriminating effects between different populations, can be diagnosed quickly via these diagrams.[14]
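As a sketch of the half-life case (simulated, noise-free data, not from any real trial; NumPy is assumed), a log-linear fit to a concentration-time curve recovers the elimination half-life; a spaghetti plot would overlay one such fitted curve per patient.

```python
import numpy as np

# Illustrative sketch with simulated data: under first-order elimination the
# plasma concentration is C(t) = C0 * 2**(-t / T_half), so a straight-line
# fit to log C(t) recovers the half-life T_half from sampled concentrations.
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])      # sampling times (hours)
conc = 12.0 * 0.5 ** (t / 3.0)                    # simulated: true half-life 3 h
slope, intercept = np.polyfit(t, np.log(conc), 1)
half_life = -np.log(2.0) / slope
assert abs(half_life - 3.0) < 1e-9                # fit recovers the half-life
```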
References
1. Theodore T. Allen (2010). Introduction to Engineering Statistics and Lean Sigma: Statistical Quality Control and Design of Experiments and Systems. Springer. p. 128. ISBN 978-1-84882-999-2.
2. James A. Scott (1992). The Butterflies of North America: A Natural History and Field Guide. Stanford University Press. p. 103. ISBN 978-0-8047-2013-7.
3. J. Knox Jones; Elmer C. Birney (1988). Handbook of mammals of the north-central states. University of Minnesota Press. pp. 52–55. ISBN 978-0-8166-1420-2.
4. Environmental Modeling Center (2003-08-21). "NCEP Medium-Range Ensemble Forecast (MREF) System Spaghetti Diagrams". National Oceanic and Atmospheric Administration. Retrieved 2011-02-17.
5. Ivor Van Heerden; Mike Bryan (2007). The storm: what went wrong and why during hurricane Katrina : the inside story from one Louisiana scientist. Penguin. ISBN 978-0-14-311213-6.
6. John L. Beven, III (2007-05-30). "Tropical Depression Two-E Discussion Number 3". National Hurricane Center. Retrieved 2011-02-17.
7. Louise Bodri; Vladimír Čermák (2007). Borehole climatology: a new method on how to reconstruct climate. Elsevier. p. 76. ISBN 978-0-08-045320-0.
8. S. A. Thorpe (2005). The turbulent ocean. Cambridge University Press. p. 341. ISBN 978-0-521-83543-5.
9. William A. Levinson (2007). Beyond the theory of constraints: how to eliminate variation and maximize capacity. Productivity Press. p. 97. ISBN 978-1-56327-370-4.
10. Lonnie Wilson (2009). How to Implement Lean Manufacturing. McGraw Hill Professional. p. 127. ISBN 978-0-07-162507-4.
11. Rangaraj (2009). Supply Chain Management For Competitive Advantage. Tata McGraw-Hill. p. 130. ISBN 978-0-07-022163-5.
12. Donald R. Hedeker; Robert D. Gibbons (2006). Longitudinal data analysis. John Wiley and Sons. pp. 52–54. ISBN 978-0-471-42027-9.
13. Hulin Wu; Jin-Ting Zhang (2006). Nonparametric regression methods for longitudinal data analysis. John Wiley and Sons. pp. 2–4. ISBN 978-0-471-48350-2.
14. Johan Gabrielsson; Daniel Weiner (2001). Pharmacokinetic/pharmacodynamic data analysis: concepts and applications, Volume 1. Taylor & Francis. pp. 263–264. ISBN 978-91-86274-92-4.
External links
• TIGGE Project at NCAR
|
Wikipedia
|
Spaltenstein variety
In algebraic geometry, a Spaltenstein variety is a variety given by the fixed point set of a nilpotent transformation on a flag variety. They were introduced by Nicolas Spaltenstein (1976, 1982). In the special case of full flag varieties the Spaltenstein varieties are Springer varieties.
References
• Spaltenstein, N. (1976), "The fixed point set of a unipotent transformation on the flag manifold", Indagationes Mathematicae, 38 (5): 452–456, MR 0485901
• Spaltenstein, Nicolas (1982), Classes unipotentes et sous-groupes de Borel, Lecture Notes in Mathematics, vol. 946, Berlin, New York: Springer-Verlag, ISBN 978-3-540-11585-4, MR 0672610
Span (category theory)
In category theory, a span, roof or correspondence is a generalization of the notion of relation between two objects of a category. When the category has all pullbacks (and satisfies a small number of other conditions), spans can be considered as morphisms in a category of fractions.
The notion of a span is due to Nobuo Yoneda (1954) and Jean Bénabou (1967).
Formal definition
A span is a diagram of type $\Lambda =(-1\leftarrow 0\rightarrow +1),$ i.e., a diagram of the form $Y\leftarrow X\rightarrow Z$.
That is, let Λ be the category (-1 ← 0 → +1). Then a span in a category C is a functor S : Λ → C. This means that a span consists of three objects X, Y and Z of C and morphisms f : X → Y and g : X → Z: it is two maps with common domain.
The colimit of a span is a pushout.
Examples
• If R is a relation between sets X and Y (i.e. a subset of X × Y), then X ← R → Y is a span, where the maps are the restrictions to R of the projection maps $X\times Y{\overset {\pi _{X}}{\to }}X$ and $X\times Y{\overset {\pi _{Y}}{\to }}Y$.
• Any object yields the trivial span A ← A → A, where the maps are the identity.
• More generally, let $\phi \colon A\to B$ be a morphism in some category. There is a trivial span A ← A → B, where the left map is the identity on A, and the right map is the given map φ.
• If M is a model category, with W the set of weak equivalences, then the spans of the form $X\leftarrow Y\rightarrow Z,$ where the left morphism is in W, can be considered a generalised morphism (i.e., where one "inverts the weak equivalences"). Note that this is not the usual point of view taken when dealing with model categories.
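The relation example above can be made concrete for finite sets. The following Python sketch is illustrative only (the `compose` helper and the sample relations are invented for this example): it models a span as an apex together with two leg functions, and composes two spans by forming the pullback of the shared foot, which recovers the usual composition of relations.

```python
# A span between finite sets: an apex (a list of elements) together with two
# leg functions, stored as dicts mapping apex elements into the two feet.

def compose(span1, span2):
    """Compose spans Y <-f- X -g-> Z and Z <-h- W -k-> V via pullback in Set."""
    apex1, f, g = span1   # g maps apex1 into the shared foot Z
    apex2, h, k = span2   # h maps apex2 into the shared foot Z
    # The pullback apex: pairs of apex elements that agree on the shared foot.
    apex = [(x, w) for x in apex1 for w in apex2 if g[x] == h[w]]
    left = {p: f[p[0]] for p in apex}    # composite left leg into Y
    right = {p: k[p[1]] for p in apex}   # composite right leg into V
    return apex, left, right

# A relation R <= X x Y is the span X <- R -> Y with the two projections.
R = [(1, 'a'), (2, 'a'), (2, 'b')]        # relation between X and Y
S = [('a', 'u'), ('b', 'u'), ('b', 'v')]  # relation between Y and Z
span_R = (R, {p: p[0] for p in R}, {p: p[1] for p in R})
span_S = (S, {p: p[0] for p in S}, {p: p[1] for p in S})

apex, left, right = compose(span_R, span_S)
# Reading off the legs gives the ordinary composite of the two relations.
composite = sorted({(left[p], right[p]) for p in apex})
print(composite)   # [(1, 'u'), (2, 'u'), (2, 'v')]
```

Note that the pullback apex may contain several witnesses for one composite pair (here, (2, 'u') arises via both 'a' and 'b'); collapsing them to a set is what turns the span back into a mere relation.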
Cospans
A cospan K in a category C is a functor K : Λop → C; equivalently, a contravariant functor from Λ to C. That is, a diagram of type $\Lambda ^{\text{op}}=(-1\rightarrow 0\leftarrow +1),$ i.e., a diagram of the form $Y\rightarrow X\leftarrow Z$.
Thus it consists of three objects X, Y and Z of C and morphisms f : Y → X and g : Z → X: it is two maps with common codomain.
The limit of a cospan is a pullback.
An example of a cospan is a cobordism W between two manifolds M and N, where the two maps are the inclusions into W. Note that while cobordisms are cospans, the category of cobordisms is not a "cospan category": it is not the category of all cospans in "the category of manifolds with inclusions on the boundary", but rather a subcategory thereof, as the requirement that M and N form a partition of the boundary of W is a global constraint.
The category nCob of finite-dimensional cobordisms is a dagger compact category. More generally, the category Span(C) of spans on any category C with finite limits is also dagger compact.
See also
• Binary relation
• Pullback (category theory)
• Pushout (category theory)
• Cobordism
References
• span at the nLab
• Yoneda, Nobuo, On the homology theory of modules. J. Fac. Sci. Univ. Tokyo Sect. I,7 (1954), 193–227.
• Bénabou, Jean, Introduction to Bicategories, Lecture Notes in Mathematics 47, Springer (1967), pp.1-77
Linear span
In mathematics, the linear span (also called the linear hull[1] or just span) of a set S of vectors (from a vector space), denoted span(S),[2] is defined as the set of all linear combinations of the vectors in S.[3] For example, two linearly independent vectors span a plane. The linear span can be characterized either as the intersection of all linear subspaces that contain S, or as the smallest subspace containing S. The linear span of a set of vectors is therefore a vector space itself. Spans can be generalized to matroids and modules.
To express that a vector space V is a linear span of a subset S, one commonly uses the following phrases—either: S spans V, S is a spanning set of V, V is spanned/generated by S, or S is a generator or generator set of V.
Definition
Given a vector space V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. W is referred to as the subspace spanned by S, or by the vectors in S. Conversely, S is called a spanning set of W, and we say that S spans W.
Alternatively, the span of S may be defined as the set of all finite linear combinations of elements (vectors) of S, which follows from the above definition.[4][5][6][7]
$\operatorname {span} (S)=\left\{{\left.\sum _{i=1}^{k}\lambda _{i}\mathbf {v} _{i}\;\right|\;k\in \mathbb {N} ,\mathbf {v} _{i}\in S,\lambda _{i}\in K}\right\}.$
In the case of infinite S, infinite linear combinations (i.e. where a combination may involve an infinite sum, assuming that such sums are defined somehow as in, say, a Banach space) are excluded by the definition; a generalization that allows these is not equivalent.
Examples
The real vector space $\mathbb {R} ^{3}$ has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis. If (−1, 0, 0) were replaced by (1, 0, 0), it would also form the canonical basis of $\mathbb {R} ^{3}$.
Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, 1⁄2, 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent.
The set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} is not a spanning set of $\mathbb {R} ^{3}$, since its span is the space of all vectors in $\mathbb {R} ^{3}$ whose last component is zero. That space is also spanned by the set {(1, 0, 0), (0, 1, 0)}, as (1, 1, 0) is a linear combination of (1, 0, 0) and (0, 1, 0). Thus, the spanned space is not $\mathbb {R} ^{3}.$ It can be identified with $\mathbb {R} ^{2}$ by removing the third components equal to zero.
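These claims can be checked numerically. The sketch below (a NumPy illustration; `in_span` is a helper defined here, not library API) encodes the vectors as matrix columns and tests both spanning and membership via matrix ranks: a vector lies in the span exactly when appending it does not increase the rank.

```python
import numpy as np

# The vectors (1,0,0), (0,1,0), (1,1,0) as the columns of a matrix.
S = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 0, 0]], dtype=float)

# A set spans R^3 exactly when the matrix of its vectors has rank 3.
print(np.linalg.matrix_rank(S))   # 2, so this set does not span R^3

def in_span(S, v):
    """v lies in the column span of S iff appending v leaves the rank unchanged."""
    return np.linalg.matrix_rank(np.column_stack([S, v])) == np.linalg.matrix_rank(S)

print(in_span(S, np.array([1.0, 1.0, 0.0])))   # True:  (1,1,0) = (1,0,0) + (0,1,0)
print(in_span(S, np.array([0.0, 0.0, 1.0])))   # False: third component is nonzero
```

The rank test is numerical (it relies on an SVD tolerance), so it is a practical check rather than an exact symbolic proof.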
The empty set is a spanning set of {(0, 0, 0)}, since the empty set is a subset of every subspace of $\mathbb {R} ^{3}$, and {(0, 0, 0)} is the intersection of all of these subspaces.
The set of monomials xn, where n is a non-negative integer, spans the space of polynomials.
Theorems
Equivalence of definitions
The set of all linear combinations of a subset S of V, a vector space over K, is the smallest linear subspace of V containing S.
Proof. We first prove that span S is a subspace of V. Since S is a subset of V, we only need to prove the existence of a zero vector 0 in span S, that span S is closed under addition, and that span S is closed under scalar multiplication. Letting $S=\{\mathbf {v} _{1},\mathbf {v} _{2},\ldots ,\mathbf {v} _{n}\}$, it is trivial that the zero vector of V exists in span S, since $\mathbf {0} =0\mathbf {v} _{1}+0\mathbf {v} _{2}+\cdots +0\mathbf {v} _{n}$. Adding together two linear combinations of S also produces a linear combination of S: $(\lambda _{1}\mathbf {v} _{1}+\cdots +\lambda _{n}\mathbf {v} _{n})+(\mu _{1}\mathbf {v} _{1}+\cdots +\mu _{n}\mathbf {v} _{n})=(\lambda _{1}+\mu _{1})\mathbf {v} _{1}+\cdots +(\lambda _{n}+\mu _{n})\mathbf {v} _{n}$, where all $\lambda _{i},\mu _{i}\in K$, and multiplying a linear combination of S by a scalar $c\in K$ will produce another linear combination of S: $c(\lambda _{1}\mathbf {v} _{1}+\cdots +\lambda _{n}\mathbf {v} _{n})=c\lambda _{1}\mathbf {v} _{1}+\cdots +c\lambda _{n}\mathbf {v} _{n}$. Thus span S is a subspace of V.
Suppose that W is a linear subspace of V containing S. Since W is closed under addition and scalar multiplication, every linear combination $\lambda _{1}\mathbf {v} _{1}+\cdots +\lambda _{n}\mathbf {v} _{n}$ of elements of S must be contained in W, so $\operatorname {span} S\subseteq W$. Thus, span S is contained in every subspace of V containing S, and the intersection of all such subspaces, which is the smallest such subspace, is equal to the set of all linear combinations of S.
Size of spanning set is at least size of linearly independent set
Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V.
Proof. Let $S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{m}\}$ be a spanning set and $W=\{\mathbf {w} _{1},\ldots ,\mathbf {w} _{n}\}$ be a linearly independent set of vectors from V. We want to show that $m\geq n$.
Since S spans V, then $S\cup \{\mathbf {w} _{1}\}$ must also span V, and $\mathbf {w} _{1}$ must be a linear combination of S. Thus $S\cup \{\mathbf {w} _{1}\}$ is linearly dependent, and we can remove one vector from S that is a linear combination of the other elements. This vector cannot be any of the wi, since W is linearly independent. The resulting set is $\{\mathbf {w} _{1},\mathbf {v} _{1},\ldots ,\mathbf {v} _{i-1},\mathbf {v} _{i+1},\ldots ,\mathbf {v} _{m}\}$, which is a spanning set of V. We repeat this step n times, where the resulting set after the pth step is the union of $\{\mathbf {w} _{1},\ldots ,\mathbf {w} _{p}\}$ and m - p vectors of S.
This process guarantees that, at each of the n steps, there is some vi left to remove from S for the newly adjoined wi, and thus there are at least as many vi's as there are wi's—i.e. $m\geq n$. To verify this, assume by way of contradiction that $m<n$. Then, at the mth step, we have the set $\{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}$ and we can adjoin another vector $\mathbf {w} _{m+1}$. But, since $\{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}$ is a spanning set of V, $\mathbf {w} _{m+1}$ is a linear combination of $\{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}$. This is a contradiction, since W is linearly independent.
Spanning set can be reduced to a basis
Let V be a finite-dimensional vector space. Any set of vectors that spans V can be reduced to a basis for V, by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that V has finite dimension. This also indicates that a basis is a minimal spanning set when V is finite-dimensional.
Generalizations
Generalizing the definition of the span of points in space, a subset X of the ground set of a matroid is called a spanning set if the rank of X equals the rank of the entire ground set.
The vector space definition can also be generalized to modules.[8][9] Given an R-module A and a collection of elements a1, ..., an of A, the submodule of A spanned by a1, ..., an is the sum of cyclic modules
$Ra_{1}+\cdots +Ra_{n}=\left\{\sum _{k=1}^{n}r_{k}a_{k}{\bigg |}r_{k}\in R\right\}$
consisting of all R-linear combinations of the elements ai. As with the case of vector spaces, the submodule of A spanned by any subset of A is the intersection of all submodules containing that subset.
Closed linear span (functional analysis)
In functional analysis, a closed linear span of a set of vectors is the minimal closed set which contains the linear span of that set.
Suppose that X is a normed vector space and let E be any non-empty subset of X. The closed linear span of E, denoted by ${\overline {\operatorname {Sp} }}(E)$ or ${\overline {\operatorname {Span} }}(E)$, is the intersection of all the closed linear subspaces of X which contain E.
One mathematical formulation of this is
${\overline {\operatorname {Sp} }}(E)=\{u\in X|\forall \varepsilon >0\,\exists x\in \operatorname {Sp} (E):\|x-u\|<\varepsilon \}.$
The closed linear span of the set of functions xn on the interval [0, 1], where n is a non-negative integer, depends on the norm used. If the L2 norm is used, then the closed linear span is the Hilbert space of square-integrable functions on the interval. But if the maximum norm is used, the closed linear span will be the space of continuous functions on the interval. In either case, the closed linear span contains functions that are not polynomials, and so are not in the linear span itself. However, the cardinality of the set of functions in the closed linear span is the cardinality of the continuum, which is the same cardinality as for the set of polynomials.
Notes
The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span.
Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, see Riesz's lemma).
A useful lemma
Let X be a normed space and let E be any non-empty subset of X. Then
1. ${\overline {\operatorname {Sp} }}(E)$ is a closed linear subspace of X which contains E,
2. ${\overline {\operatorname {Sp} }}(E)={\overline {\operatorname {Sp} (E)}}$, viz. ${\overline {\operatorname {Sp} }}(E)$ is the closure of $\operatorname {Sp} (E)$,
3. $E^{\perp }=(\operatorname {Sp} (E))^{\perp }=\left({\overline {\operatorname {Sp} (E)}}\right)^{\perp }.$
(So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.)
See also
• Affine hull
• Conical combination
• Convex hull
Citations
1. Encyclopedia of Mathematics (2020). Linear Hull.
2. Axler (2015) pp. 29-30, §§ 2.5, 2.8
3. Axler (2015) p. 29, § 2.7
4. Hefferon (2020) p. 100, ch. 2, Definition 2.13
5. Axler (2015) pp. 29-30, §§ 2.5, 2.8
6. Roman (2005) pp. 41-42
7. MathWorld (2021) Vector Space Span.
8. Roman (2005) p. 96, ch. 4
9. Lane & Birkhoff (1999) p. 193, ch. 6
Sources
Textbooks
• Axler, Sheldon Jay (2015). Linear Algebra Done Right (3rd ed.). Springer. ISBN 978-3-319-11079-0.
• Hefferon, Jim (2020). Linear Algebra (4th ed.). Orthogonal Publishing. ISBN 978-1-944325-11-4.
• Lane, Saunders Mac; Birkhoff, Garrett (1999) [1988]. Algebra (3rd ed.). AMS Chelsea Publishing. ISBN 978-0821816462.
• Roman, Steven (2005). Advanced Linear Algebra (2nd ed.). Springer. ISBN 0-387-24766-1.
• Rynne, Brian P.; Youngson, Martin A. (2008). Linear Functional Analysis. Springer. ISBN 978-1848000049.
• Lay, David C. (2021) Linear Algebra and Its Applications (6th Edition). Pearson.
Web
• Lankham, Isaiah; Nachtergaele, Bruno; Schilling, Anne (13 February 2010). "Linear Algebra - As an Introduction to Abstract Mathematics" (PDF). University of California, Davis. Retrieved 27 September 2011.
• Weisstein, Eric Wolfgang. "Vector Space Span". MathWorld. Retrieved 16 Feb 2021.
• "Linear hull". Encyclopedia of Mathematics. 5 April 2020. Retrieved 16 Feb 2021.
External links
• Linear Combinations and Span: Understanding linear combinations and spans of vectors, khanacademy.org.
• Sanderson, Grant (August 6, 2016). "Linear combinations, span, and basis vectors". Essence of Linear Algebra. Archived from the original on 2021-12-11 – via YouTube.
Spanier–Whitehead duality
In mathematics, Spanier–Whitehead duality is a duality theory in homotopy theory, based on a geometrical idea that a topological space X may be considered as dual to its complement in the n-sphere, where n is large enough. Its origins lie in Alexander duality theory, in homology theory, concerning complements in manifolds. The theory is also referred to as S-duality, but this can now cause possible confusion with the S-duality of string theory. It is named for Edwin Spanier and J. H. C. Whitehead, who developed it in papers from 1953 and 1955.
The basic point is that sphere complements determine the homology, but not the homotopy type, in general. What is determined, however, is the stable homotopy type, which was conceived as a first approximation to homotopy type. Thus Spanier–Whitehead duality fits into stable homotopy theory.
Statement
Let X be a compact neighborhood retract in $\mathbb {R} ^{n}$. Then $X^{+}$ and $\Sigma ^{-n}\Sigma '(\mathbb {R} ^{n}\setminus X)$ are dual objects in the category of pointed spectra with the smash product as a monoidal structure. Here $X^{+}$ is the union of $X$ and a point, $\Sigma $ and $\Sigma '$ are reduced and unreduced suspensions respectively.
Taking homology and cohomology with respect to an Eilenberg–MacLane spectrum recovers Alexander duality formally.
References
• Spanier, Edwin H.; Whitehead, J. H. C. (1953), "A first approximation to homotopy theory", Proceedings of the National Academy of Sciences of the United States of America, 39 (7): 655–660, Bibcode:1953PNAS...39..655S, doi:10.1073/pnas.39.7.655, MR 0056290, PMC 1063840, PMID 16589320
• Spanier, Edwin H.; Whitehead, J. H. C. (1955), "Duality in homotopy theory.", Mathematika, 2: 56–80, doi:10.1112/s002557930000070x, MR 0074823
• tom Dieck, Tammo (2008), Algebraic topology, European Mathematical Society Publishing House, ISBN 978-3-03719-048-7
Spark (mathematics)
In mathematics, more specifically in linear algebra, the spark of an $m\times n$ matrix $A$ is the smallest integer $k$ such that there exists a set of $k$ columns in $A$ which are linearly dependent. If all the columns are linearly independent, $\mathrm {spark} (A)$ is usually defined to be 1 more than the number of rows. The concept of matrix spark finds applications in error-correction codes, compressive sensing, and matroid theory, and provides a simple criterion for maximal sparsity of solutions to a system of linear equations.
The spark of a matrix is NP-hard to compute.
Definition
Formally, the spark of a matrix $A$ is defined as follows:
$\mathrm {spark} (A)=\min _{d\neq 0}\|d\|_{0}{\text{ s.t. }}Ad=0$
(Eq.1)
where $d$ is a nonzero vector and $\|d\|_{0}$ denotes its number of nonzero coefficients[1] ($\|d\|_{0}$ is also referred to as the size of the support of a vector). Equivalently, the spark of a matrix $A$ is the size of its smallest circuit $C$ (a subset of column indices such that $A_{C}x=0$ has a nonzero solution, but every proper subset of it does not[1]).
If all the columns are linearly independent, $\mathrm {spark} (A)$ is usually defined to be $m+1$ (if $A$ has m rows).[2][3]
By contrast, the rank of a matrix is the largest number $k$ such that some set of $k$ columns of $A$ is linearly independent.
Example
Consider the following matrix $A$.
$A={\begin{bmatrix}1&2&0&1\\1&2&0&2\\1&2&0&3\\1&0&-3&4\end{bmatrix}}$
The spark of this matrix equals 3 because:
• No set consisting of a single column of $A$ is linearly dependent, since no column is the zero vector.
• No set of 2 columns of $A$ is linearly dependent.
• But there is a set of 3 columns of $A$ that is linearly dependent. The first three columns are linearly dependent because ${\begin{pmatrix}1\\1\\1\\1\end{pmatrix}}-{\frac {1}{2}}{\begin{pmatrix}2\\2\\2\\0\end{pmatrix}}+{\frac {1}{3}}{\begin{pmatrix}0\\0\\0\\-3\end{pmatrix}}={\begin{pmatrix}0\\0\\0\\0\end{pmatrix}}$.
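This value can be confirmed by exhaustive search. The sketch below is illustrative only: it checks column subsets in order of increasing size, which is exponential in the number of columns (consistent with the spark being NP-hard to compute) and thus usable only for small matrices.

```python
import itertools
import numpy as np

def spark(A):
    """Spark of A by brute force: smallest k such that some k columns are
    linearly dependent, or m + 1 if all columns are independent."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            # k columns are linearly dependent iff their submatrix has rank < k.
            if np.linalg.matrix_rank(A[:, list(cols)]) < k:
                return k
    return m + 1

A = np.array([[1, 2,  0, 1],
              [1, 2,  0, 2],
              [1, 2,  0, 3],
              [1, 0, -3, 4]], dtype=float)
print(spark(A))            # 3: the first three columns are dependent
print(spark(np.eye(3)))    # 4: all columns of the identity are independent
```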
Properties
If $n\geq m$, the following simple properties hold for the spark of an $m\times n$ matrix $A$:
• $\mathrm {spark} (A)=m+1\Rightarrow \mathrm {rank} (A)=m$ (If the spark equals $m+1$, then the matrix has full rank.)
• $\mathrm {spark} (A)=1$ if and only if the matrix has a zero column.
• $\mathrm {spark} (A)\leq \mathrm {rank} (A)+1$.
Criterion for uniqueness of sparse solutions
The spark yields a simple criterion for uniqueness of sparse solutions of linear equation systems.[4] Given a linear equation system $A\mathbf {x} =\mathbf {b} $, if this system has a solution $\mathbf {x} $ that satisfies $\|\mathbf {x} \|_{0}<{\frac {\mathrm {spark} (A)}{2}}$, then this solution is the sparsest possible solution. Here $\|\mathbf {x} \|_{0}$ denotes the number of nonzero entries of the vector $\mathbf {x} $.
Lower bound in terms of dictionary coherence
If the columns of the matrix $A$ are normalized to unit norm, we can lower bound its spark in terms of its dictionary coherence:[5][2]
$\mathrm {spark} (A)\geq 1+{\frac {1}{\mu (A)}}$
Here, the dictionary coherence $\mu (A)$ is defined as the maximum correlation between any two columns:
$\mu (A)=\max _{m\neq n}|\langle a_{m},a_{n}\rangle |=\max _{m\neq n}{\frac {|a_{m}^{\intercal }a_{n}|}{||a_{m}||_{2}||a_{n}||_{2}}}$.
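The bound can be checked numerically. The following sketch (an illustration on a random matrix, not a tight example; `spark` and `coherence` are helpers defined here) computes the coherence of the unit-normalized columns and compares the resulting lower bound against a brute-force spark:

```python
import itertools
import numpy as np

def spark(A):
    """Brute-force spark: smallest number of linearly dependent columns."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, list(cols)]) < k:
                return k
    return m + 1

def coherence(A):
    """Maximum absolute inner product between distinct unit-normalized columns."""
    B = A / np.linalg.norm(A, axis=0)   # normalize each column to unit norm
    G = np.abs(B.T @ B)                 # Gram matrix of column correlations
    np.fill_diagonal(G, 0.0)            # ignore the trivial self-correlations
    return G.max()

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 8))
mu = coherence(A)
print(spark(A), 1 + 1 / mu)   # the spark always dominates the coherence bound
```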
Applications
The minimum distance of a linear code equals the spark of its parity-check matrix.
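For a binary code, linear dependence must be taken over GF(2) rather than over the reals. As an illustration (the `spark_gf2` helper is defined here for this example), the sketch below computes the spark of the parity-check matrix of the [7,4] Hamming code by searching for the smallest nonempty set of columns that XOR to zero, recovering the code's minimum distance of 3:

```python
import itertools
import numpy as np

def spark_gf2(H):
    """Spark of a binary matrix over GF(2): the size of the smallest
    nonempty set of columns whose XOR is the zero vector."""
    m, n = H.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            if not (H[:, list(cols)].sum(axis=1) % 2).any():
                return k
    return m + 1

# Parity-check matrix of the [7,4] Hamming code: the columns are the
# binary representations of the integers 1 through 7.
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in (2, 1, 0)])
print(spark_gf2(H))   # 3, which is the code's minimum distance
```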
The concept of the spark is also of use in the theory of compressive sensing, where requirements on the spark of the measurement matrix are used to ensure stability and consistency of various estimation techniques.[6] It is also known in matroid theory as the girth of the vector matroid associated with the columns of the matrix. The spark of a matrix is NP-hard to compute.[1]
References
1. Tillmann, Andreas M.; Pfetsch, Marc E. (November 8, 2013). "The Computational Complexity of the Restricted Isometry Property, the Nullspace Property, and Related Concepts in Compressed Sensing". IEEE Transactions on Information Theory. 60 (2): 1248–1259. arXiv:1205.2081. doi:10.1109/TIT.2013.2290112. S2CID 2788088.
2. Higham, Nicholas J.; Dennis, Mark R.; Glendinning, Paul; Martin, Paul A.; Santosa, Fadil; Tanner, Jared (2015-09-15). The Princeton Companion to Applied Mathematics. Princeton University Press. ISBN 978-1-4008-7447-7.
3. Manchanda, Pammy; Lozi, René; Siddiqi, Abul Hasan (2017-10-18). Industrial Mathematics and Complex Systems: Emerging Mathematical Models, Methods and Algorithms. Springer. ISBN 978-981-10-3758-0.
4. Elad, Michael (2010). Sparse and Redundant Representations From Theory to Applications in Signal and Image Processing. pp. 24.
5. Elad, Michael (2010). Sparse and Redundant Representations From Theory to Applications in Signal and Image Processing. pp. 26.
6. Donoho, David L.; Elad, Michael (March 4, 2003), "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization", Proc. Natl. Acad. Sci., 100 (5): 2197–2202, Bibcode:2003PNAS..100.2197D, doi:10.1073/pnas.0437847100, PMC 153464, PMID 16576749
Sparkline
A sparkline is a very small line chart, typically drawn without axes or coordinates. It presents the general shape of a variation (typically over time) in some measurement, such as temperature or stock market price, in a simple and highly condensed way. Whereas a typical chart is designed to show as much data as possible, and is set off from the flow of text, sparklines are intended to be succinct, memorable, and located where they are discussed. Sparklines are small enough to be embedded in text, or several sparklines may be grouped together as elements of a small multiple.
Example sparklines in small multiple
Index      Day Value   Change
Dow Jones  10765.45    −32.82 (−0.30%)
S&P 500    1256.92     −8.10 (−0.64%)
Sparklines showing the movement of the Dow Jones Industrial Average and S&P 500 during February 7, 2006
History
In 1762 Laurence Sterne used typographical devices in the sixth volume of The Life and Opinions of Tristram Shandy, Gentleman to illustrate the progress of his narrative: "These were the four lines I moved through my first, second, third, and fourth volumes,–".[1]
The 1888 monograph describing the 1883 eruption of Krakatoa shows barometric signatures of the event obtained at various stations around the world in the same fashion, but in separate plates (VII & VIII), not within the text.[2]
In 1983, Edward Tufte formally documented a graphical style, then called "intense continuous time-series", encouraging extreme compaction of visual information.[4] In early 1998, interface designer Peter Zelchenko introduced a feature called "inline charts", designed for the PC trading platform Medved QuoteTracker. This is believed to be the earliest known implementation of sparklines.[5] In 2006, the term sparkline itself was introduced by Edward Tufte for "small, high resolution graphics embedded in a context of words, numbers, images".[6][7] Tufte described sparklines as "data-intense, design-simple, word-sized graphics".[8]
On May 7, 2008, Microsoft employees filed a patent application for the implementation of sparklines in Microsoft Excel 2010. The application was published on November 12, 2009,[9] prompting Tufte[10] to express concern at the broad claims and lack of novelty of the patent.[11] On January 23, 2009, MultiRacio Ltd. published an OpenOffice.org extension, "EuroOffice Sparkline", to insert sparklines in OpenOffice.org Calc.[12] On March 3, 2022, LibreOffice developer Tomaž Vajngerl announced a new implementation of sparklines for LibreOffice Calc, including support for importing sparklines from the OOXML Workbook format; this landed in the 7.4 release.[13]
Usage
Sparklines are frequently used in line with text. For example:
The Dow Jones Industrial Average for February 7, 2006.
The sparkline should be about the same height as the text around it. Tufte offers some useful design principles for the sizing of sparklines to maximize their readability.[7]
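A crude text-only sparkline can be produced with Unicode block characters, which keeps the graphic at roughly the height of the surrounding text. The sketch below is purely illustrative (the `sparkline` helper and the sample closing prices are invented for this example):

```python
# Eight block characters of increasing height serve as the "pixels"
# of an inline, word-sized chart.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Map a sequence of numbers onto the eight bar heights."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1   # avoid division by zero on flat data
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

closes = [98, 97, 101, 104, 102, 105, 110, 108]   # sample closing prices
print("XYZ " + sparkline(closes))                 # XYZ ▁▁▃▄▃▅█▆
```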
See also
• Kagi chart
References
1. Laurence Sterne, Life and Opinions of Tristram Shandy, Ann Ward (vol. 1–2), Dodsley (vol. 3–4), Becket & DeHondt (vol. 5–9), 1759-1767
2. Symons, G. J., Judd, J. W., Strachey, S. R., Wharton, W. J. L., Evans, F. J., Russell, F. A. R., ... & Whipple, G. M. (1888). The eruption of Krakatoa: And subsequent phenomena. Trübner & Company. Plate VII
3. Zelchenko, Peter; Medved, Michael. "Medved QuoteTracker screenshot". Wayback Machine. Internet Archive. Archived from the original on 13 October 1999. Retrieved 1 December 2015.
4. Tufte, Edward (1983). The Visual Display of Quantitative Information. Quoted in "ET Work on Sparklines". Retrieved from http://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=000AIr.
5. "WaybackMachine snapshot from October 13, 1999, see "Screen Shots"". Archived from the original on 1999-11-27.
6. Bissantz & Company GmbH. "Sparklines: Another masterpiece of Edward Tufte". Archived from the original on 2007-03-11.
7. Edward Tufte (November 2013). "Sparkline theory and practice". Edward Tufte forum.
8. Edward Tufte (2006). Beautiful Evidence. Graphics Press. ISBN 0-9613921-7-7.
9. "Sparklines in the grid". 2009-11-12. Retrieved 2009-11-19.
10. "Sparklines in Excel". 2009-07-17. Retrieved 2009-11-20.
11. "Microsoft makes patent claim for Sparklines". 2009-11-19. Retrieved 2009-11-19.
12. "EuroOffice Sparkline | OpenOffice.org repository for Extensions". Archived from the original on 2009-01-26. Retrieved 2018-07-06.
13. Vajngerl, Tomaž (2022-03-08). "Sparklines in Calc". Retrieved 2022-03-09.
Further reading
• "History of Sparklines", essay by Edward Tufte, edwardtufte.com
• "Sparkline in Google Sheets", blog.tryamigo.com
• "Micro Visualisations", thesis by Jonas Parnow, microvis.info
• "Everything you ever wanted to know about Sparklines in Google Sheets", benlcollins.com
|
Wikipedia
|
SAMV (algorithm)
SAMV (iterative sparse asymptotic minimum variance[1][2]) is a parameter-free superresolution algorithm for the linear inverse problem in spectral estimation, direction-of-arrival (DOA) estimation and tomographic reconstruction with applications in signal processing, medical imaging and remote sensing. The name was coined in 2013[1] to emphasize its basis on the asymptotically minimum variance (AMV) criterion. It is a powerful tool for the recovery of both the amplitude and frequency characteristics of multiple highly correlated sources in challenging environments (e.g., limited number of snapshots and low signal-to-noise ratio). Applications include synthetic-aperture radar,[2][3] computed tomography scan, and magnetic resonance imaging (MRI).
Definition
The formulation of the SAMV algorithm is given as an inverse problem in the context of DOA estimation. Suppose an $M$-element uniform linear array (ULA) receives $K$ narrowband signals emitted from sources located at $\mathbf {\theta } =\{\theta _{1},\ldots ,\theta _{K}\}$, respectively. The sensors in the ULA accumulate $N$ snapshots over a specific time. The $M\times 1$ dimensional snapshot vectors are
$\mathbf {y} (n)=\mathbf {A} \mathbf {x} (n)+\mathbf {e} (n),n=1,\ldots ,N$
where $\mathbf {A} =[\mathbf {a} (\theta _{1}),\ldots ,\mathbf {a} (\theta _{K})]$ is the steering matrix, ${\bf {x}}(n)=[{\bf {x}}_{1}(n),\ldots ,{\bf {x}}_{K}(n)]^{T}$ contains the source waveforms, and ${\bf {e}}(n)$ is the noise term. Assume that $\mathbf {E} \left({\bf {e}}(n){\bf {e}}^{H}({\bar {n}})\right)=\sigma {\bf {I}}_{M}\delta _{n,{\bar {n}}}$, where $\delta _{n,{\bar {n}}}$ is the Kronecker delta, equal to 1 if $n={\bar {n}}$ and 0 otherwise. Also assume that ${\bf {e}}(n)$ and ${\bf {x}}(n)$ are independent, and that $\mathbf {E} \left({\bf {x}}(n){\bf {x}}^{H}({\bar {n}})\right)={\bf {P}}\delta _{n,{\bar {n}}}$, where ${\bf {P}}=\operatorname {Diag} ({p_{1},\ldots ,p_{K}})$. Let ${\bf {p}}$ be a vector containing the unknown signal powers and noise variance, ${\bf {p}}=[p_{1},\ldots ,p_{K},\sigma ]^{T}$.
The covariance matrix of ${\bf {y}}(n)$ that contains all information about ${\boldsymbol {\bf {p}}}$ is
${\bf {R}}={\bf {A}}{\bf {P}}{\bf {A}}^{H}+\sigma {\bf {I}}.$
This covariance matrix can be traditionally estimated by the sample covariance matrix ${\bf {R}}_{N}={\bf {Y}}{\bf {Y}}^{H}/N$ where ${\bf {Y}}=[{\bf {y}}(1),\ldots ,{\bf {y}}(N)]$. After applying the vectorization operator to the matrix ${\bf {R}}$, the obtained vector ${\bf {r}}({\boldsymbol {\bf {p}}})=\operatorname {vec} ({\bf {R}})$ is linearly related to the unknown parameter ${\boldsymbol {\bf {p}}}$ as
${\bf {r}}({\boldsymbol {\bf {p}}})=\operatorname {vec} ({\bf {R}})={\bf {S}}{\boldsymbol {\bf {p}}}$,
where ${\bf {S}}=[{\bf {S}}_{1},{\bar {\bf {a}}}_{K+1}]$, ${\bf {S}}_{1}=[{\bar {\bf {a}}}_{1},\ldots ,{\bar {\bf {a}}}_{K}]$, ${\bar {\bf {a}}}_{k}={\bf {a}}_{k}^{*}\otimes {\bf {a}}_{k}$, $k=1,\ldots ,K$, and let ${\bar {\bf {a}}}_{K+1}=\operatorname {vec} ({\bf {I}})$ where $\otimes $ is the Kronecker product.
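The linear relation ${\bf {r}}({\boldsymbol {p}})={\bf {S}}{\boldsymbol {p}}$ can be checked numerically. The following Python/NumPy sketch (with hypothetical array sizes, source angles and powers chosen only for illustration) builds the steering matrix of a half-wavelength ULA and verifies that the vectorized covariance equals ${\bf {S}}{\boldsymbol {p}}$:

```python
import numpy as np

# Hypothetical sizes and parameters: M sensors, K sources on a
# half-wavelength-spaced ULA. The angles and powers are assumed values.
M, K = 8, 3
thetas = np.deg2rad([-20.0, 5.0, 40.0])     # source DOAs
powers = np.array([1.0, 2.0, 0.5])          # signal powers p_1..p_K
sigma = 0.1                                 # noise variance

# Steering vectors a(theta_k) stacked into the M x K steering matrix A.
m = np.arange(M)[:, None]
A = np.exp(1j * np.pi * m * np.sin(thetas))

# Covariance R = A P A^H + sigma I.
P = np.diag(powers)
R = A @ P @ A.conj().T + sigma * np.eye(M)

# S = [a_1* (x) a_1, ..., a_K* (x) a_K, vec(I)], with (x) the Kronecker product.
S = np.column_stack([np.kron(A[:, k].conj(), A[:, k]) for k in range(K)]
                    + [np.eye(M).reshape(-1)])
p = np.concatenate([powers, [sigma]])
r = R.reshape(-1, order="F")                # column-major vec(R)
```

Here `np.kron(a.conj(), a)` is exactly $\operatorname{vec}(\mathbf{a}\mathbf{a}^{H})$ in column-major order, which is why the identity holds term by term.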
SAMV algorithm
To estimate the parameter ${\boldsymbol {\bf {p}}}$ from the statistic ${\bf {r}}_{N}=\operatorname {vec} ({\bf {R}}_{N})$, a series of iterative SAMV approaches was developed based on the asymptotically minimum variance criterion. As shown in,[1] the covariance matrix $\operatorname {Cov} _{\boldsymbol {p}}^{\operatorname {Alg} }$ of an arbitrary consistent estimator of ${\boldsymbol {p}}$ based on the second-order statistic ${\bf {r}}_{N}$ is bounded below by the real symmetric positive definite matrix
$\operatorname {Cov} _{\boldsymbol {p}}^{\operatorname {Alg} }\geq [{\bf {S}}_{d}^{H}{\bf {C}}_{r}^{-1}{\bf {S}}_{d}]^{-1},$
where ${\bf {S}}_{d}={\rm {d}}{\bf {r}}({\boldsymbol {p}})/{\rm {d}}{\boldsymbol {p}}$. In addition, this lower bound is attained by the covariance matrix of the asymptotic distribution of ${\hat {\bf {p}}}$ obtained by minimizing,
${\hat {\boldsymbol {p}}}=\arg \min _{\boldsymbol {p}}f({\boldsymbol {p}}),$
where $f({\boldsymbol {p}})=[{\bf {r}}_{N}-{\bf {r}}({\boldsymbol {p}})]^{H}{\bf {C}}_{r}^{-1}[{\bf {r}}_{N}-{\bf {r}}({\boldsymbol {p}})].$
Therefore, the estimate of ${\boldsymbol {\bf {p}}}$ can be obtained iteratively.
The $\{{\hat {p}}_{k}\}_{k=1}^{K}$ and ${\hat {\sigma }}$ that minimize $f({\boldsymbol {p}})$ can be computed as follows. Assuming ${\hat {p}}_{k}^{(i)}$ and ${\hat {\sigma }}^{(i)}$ have been approximated to a certain degree in the $i$th iteration, they can be refined at the $(i+1)$th iteration by
${\hat {p}}_{k}^{(i+1)}={\frac {{\bf {a}}_{k}^{H}{\bf {R}}^{-1{(i)}}{\bf {R}}_{N}{\bf {R}}^{-1{(i)}}{\bf {a}}_{k}}{({\bf {a}}_{k}^{H}{\bf {R}}^{-1{(i)}}{\bf {a}}_{k})^{2}}}+{\hat {p}}_{k}^{(i)}-{\frac {1}{{\bf {a}}_{k}^{H}{\bf {R}}^{-1{(i)}}{\bf {a}}_{k}}},\quad k=1,\ldots ,K$
${\hat {\sigma }}^{(i+1)}=\left(\operatorname {Tr} ({\bf {R}}^{-2^{(i)}}{\bf {R}}_{N})+{\hat {\sigma }}^{(i)}\operatorname {Tr} ({\bf {R}}^{-2^{(i)}})-\operatorname {Tr} ({\bf {R}}^{-1^{(i)}})\right)/{\operatorname {Tr} {({\bf {R}}^{-2^{(i)}})}},$
where the estimate of ${\bf {R}}$ at the $i$th iteration is given by ${\bf {R}}^{(i)}={\bf {A}}{\bf {P}}^{(i)}{\bf {A}}^{H}+{\hat {\sigma }}^{(i)}{\bf {I}}$ with ${\bf {P}}^{(i)}=\operatorname {Diag} ({\hat {p}}_{1}^{(i)},\ldots ,{\hat {p}}_{K}^{(i)})$.
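The two update formulas above can be sketched in Python/NumPy. This is a minimal illustration, not the reference implementation: it assumes the sample covariance ${\bf {R}}_{N}$ and a dictionary of steering vectors ${\bf {A}}$ are given, and the iteration count, initialization, and non-negativity clamp are arbitrary choices:

```python
import numpy as np

def samv(RN, A, n_iter=50, sigma0=1.0):
    """Sketch of the SAMV power/noise updates (assumed setup: RN is the
    M x M sample covariance, A the M x K steering dictionary)."""
    M, K = A.shape
    p = np.ones(K)              # initial power estimates (arbitrary)
    sigma = sigma0              # initial noise variance (arbitrary)
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T + sigma * np.eye(M)   # A diag(p) A^H + sigma I
        Ri = np.linalg.inv(R)
        RiRNRi = Ri @ RN @ Ri
        for k in range(K):
            a = A[:, k]
            d = np.real(a.conj() @ Ri @ a)             # a^H R^-1 a
            num = np.real(a.conj() @ RiRNRi @ a)       # a^H R^-1 R_N R^-1 a
            p[k] = max(num / d**2 + p[k] - 1.0 / d, 0.0)
        Ri2 = Ri @ Ri
        sigma = np.real(np.trace(Ri2 @ RN) + sigma * np.trace(Ri2)
                        - np.trace(Ri)) / np.real(np.trace(Ri2))
        sigma = max(sigma, 1e-12)                      # keep R invertible
    return p, sigma
```

With a noiseless covariance built from a single on-grid source, the power estimate peaks at the true direction and the updates reach the fixed point described by the equations above.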
Beyond scanning grid accuracy
The resolution of most compressed sensing based source localization techniques is limited by the fineness of the direction grid that covers the location parameter space.[4] In the sparse signal recovery model, the sparsity of the true signal $\mathbf {x} (n)$ depends on the distance between adjacent elements in the overcomplete dictionary ${\bf {A}}$; therefore, the difficulty of choosing the optimal overcomplete dictionary arises. The computational complexity is directly proportional to the fineness of the direction grid, so a highly dense grid is not computationally practical. To overcome this resolution limitation imposed by the grid, the grid-free SAMV-SML (iterative Sparse Asymptotic Minimum Variance - Stochastic Maximum Likelihood) approach was proposed,[1] which refines the location estimates ${\boldsymbol {\bf {\theta }}}=(\theta _{1},\ldots ,\theta _{K})^{T}$ by iteratively minimizing a stochastic maximum likelihood cost function with respect to a single scalar parameter $\theta _{k}$.
Application to range-Doppler imaging
A typical application of the SAMV algorithm is the single-input single-output (SISO) radar/sonar range-Doppler imaging problem. This imaging problem is a single-snapshot application, so only algorithms compatible with single-snapshot estimation are included, i.e., the matched filter (MF, similar to the periodogram or backprojection, often implemented efficiently as a fast Fourier transform (FFT)), IAA,[5] and a variant of the SAMV algorithm (SAMV-0). The simulation conditions are identical to those in:[5] a $30$-element polyphase pulse compression P3 code is employed as the transmitted pulse, and a total of nine moving targets are simulated. Of the nine moving targets, three have $5$ dB power and the remaining six have $25$ dB power. The received signals are assumed to be contaminated with uniform white Gaussian noise of $0$ dB power.
The matched filter detection result suffers from severe smearing and leakage effects in both the Doppler and range domains, making it impossible to distinguish the $5$ dB targets. By contrast, the IAA algorithm offers enhanced imaging results with observable target range estimates and Doppler frequencies. The SAMV-0 approach provides a highly sparse result and eliminates the smearing effects completely, but it misses the weak $5$ dB targets.
Open source implementation
An open-source MATLAB implementation of the SAMV algorithm is available online.
See also
• Array processing
• Matched filter
• Periodogram
• Filtered backprojection (Radon transform)
• MUltiple SIgnal Classification (MUSIC), a popular parametric superresolution method
• Pulse-Doppler radar
• Super-resolution imaging
• Compressed sensing
• Inverse problem
• Tomographic reconstruction
References
1. Abeida, Habti; Zhang, Qilin; Li, Jian; Merabtine, Nadjim (2013). "Iterative Sparse Asymptotic Minimum Variance Based Approaches for Array Processing" (PDF). IEEE Transactions on Signal Processing. 61 (4): 933–944. arXiv:1802.03070. Bibcode:2013ITSP...61..933A. doi:10.1109/tsp.2012.2231676. ISSN 1053-587X. S2CID 16276001.
2. Glentis, George-Othon; Zhao, Kexin; Jakobsson, Andreas; Abeida, Habti; Li, Jian (2014). "SAR imaging via efficient implementations of sparse ML approaches" (PDF). Signal Processing. 95: 15–26. doi:10.1016/j.sigpro.2013.08.003.
3. Yang, Xuemin; Li, Guangjun; Zheng, Zhi (2015-02-03). "DOA Estimation of Noncircular Signal Based on Sparse Representation". Wireless Personal Communications. 82 (4): 2363–2375. doi:10.1007/s11277-015-2352-z. S2CID 33008200.
4. Malioutov, D.; Cetin, M.; Willsky, A.S. (2005). "A sparse signal reconstruction perspective for source localization with sensor arrays". IEEE Transactions on Signal Processing. 53 (8): 3010–3022. Bibcode:2005ITSP...53.3010M. doi:10.1109/tsp.2005.850882. hdl:1721.1/87445. S2CID 6876056.
5. Yardibi, Tarik; Li, Jian; Stoica, Petre; Xue, Ming; Baggeroer, Arthur B. (2010). "Source Localization and Sensing: A Nonparametric Iterative Adaptive Approach Based on Weighted Least Squares". IEEE Transactions on Aerospace and Electronic Systems. 46 (1): 425–443. Bibcode:2010ITAES..46..425Y. doi:10.1109/taes.2010.5417172. hdl:1721.1/59588. S2CID 18834345.
Sparse Fourier transform
The sparse Fourier transform (SFT) is a kind of discrete Fourier transform (DFT) for handling big data signals. Specifically, it is used in GPS synchronization, spectrum sensing and analog-to-digital converters.[1]
The fast Fourier transform (FFT) plays an indispensable role in many scientific domains, especially in signal processing. It was named one of the top 10 algorithms of the twentieth century.[2] However, with the advent of the big data era, the FFT still needs to be improved in order to save computing power. Recently, the sparse Fourier transform (SFT) has gained a considerable amount of attention, because it performs well in analyzing long data sequences with few signal components.
Definition
Consider a sequence xn of complex numbers. By Fourier series, xn can be written as
$x_{n}=(F^{*}X)_{n}=\sum _{k=0}^{N-1}X_{k}e^{j{\frac {2\pi }{N}}kn}.$
Similarly, Xk can be represented as
$X_{k}={\frac {1}{N}}(Fx)_{k}={\frac {1}{N}}\sum _{n=0}^{N-1}x_{n}e^{-j{\frac {2\pi }{N}}kn}.$
Hence, from the equations above, the mapping is $F:\mathbb {C} ^{N}\to \mathbb {C} ^{N}$.
Single frequency recovery
Assume only a single frequency exists in the sequence. In order to recover this frequency from the sequence, it is reasonable to utilize the relationship between adjacent points of the sequence.
Phase encoding
The phase k can be obtained by dividing the adjacent points of the sequence. In other words,
${\frac {x_{n+1}}{x_{n}}}=e^{j{\frac {2\pi }{N}}k}=\cos \left({\frac {2\pi k}{N}}\right)+j\sin \left({\frac {2\pi k}{N}}\right).$
Notice that $x\in \mathbb {C} ^{N}$.
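The ratio-of-adjacent-samples recovery above can be sketched with the standard library; the sequence below is an assumed noiseless single tone, chosen only for illustration:

```python
import cmath

# Hypothetical single-tone sequence x_n = exp(j*2*pi*k*n/N); recover k
# from the phase of the ratio of two adjacent samples.
N, k_true = 256, 37
x = [cmath.exp(2j * cmath.pi * k_true * n / N) for n in range(N)]

ratio = x[1] / x[0]                                  # equals exp(j*2*pi*k/N)
k_est = round(cmath.phase(ratio) * N / (2 * cmath.pi)) % N
```

In the presence of noise one would average the ratio over many adjacent pairs, but the principle is the same.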
An aliasing-based search
Seeking the frequency k can be done with the Chinese remainder theorem (CRT).[3]
Take $k=104{,}134$ as an example. Given three pairwise coprime integers 100, 101, and 103, the residues of k can be described as
$k=104{,}134\equiv \left\{{\begin{array}{rl}34&{\bmod {1}}00,\\3&{\bmod {1}}01,\\1&{\bmod {1}}03.\end{array}}\right.$
By CRT, we have
$k=104{,}134{\bmod {(}}100\cdot 101\cdot 103)=104{,}134{\bmod {1}}{,}040{,}300$
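The CRT reconstruction can be sketched in Python; the `crt` helper below is illustrative, not taken from the cited reference:

```python
# Recover k from its residues modulo pairwise-coprime moduli using the
# classical Chinese remainder theorem construction.
def crt(residues, moduli):
    """Reconstruct k mod prod(moduli) from k mod m_i (m_i pairwise coprime)."""
    M = 1
    for m in moduli:
        M *= m
    k = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        k += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the modular inverse
    return k % M

# Residues of 104,134 from the example above.
k = crt([34, 3, 1], [100, 101, 103])
```

Since $100\cdot 101\cdot 103=1{,}040{,}300>104{,}134$, the reconstruction is unique.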
Randomly binning frequencies
Now consider the case of multiple frequencies instead of a single frequency. Adjacent frequencies can be separated using the scaling property (with parameter c) and the modulation property (with parameter b). By randomly choosing the parameters c and b, the distribution of all frequencies becomes almost uniform, so the single-frequency recovery methods can be used to find the main components.
$x_{n}'=X_{k}e^{j{\frac {2\pi }{N}}(c\cdot k+b)},$
where c is scaling property and b is modulation property.
By randomly choosing c and b, the whole spectrum looks like a uniform distribution. Passing the signal through filter banks then separates all frequencies; common choices of filters include Gaussians,[4] indicator functions,[5][6] spike trains,[7][8][9][10] and Dolph-Chebyshev filters.[11] Each bank then contains only a single frequency.
The prototypical SFT
Generally, all SFT algorithms follow three stages:[1]
Identifying frequencies
By randomly binning frequencies, all components can be separated. Passing them through filter banks then leaves a single frequency in each band, which can be recovered with the single-frequency methods described above.
Estimating coefficients
After identifying the frequencies, we have many frequency components, and the Fourier transform can be used to estimate their coefficients.
$X_{k'}={\frac {1}{L}}\sum _{\ell =1}^{L}x_{\ell }'e^{-j{\frac {2\pi }{N}}k'\ell }$
Repeating
Finally, by repeating these two stages, the most important components can be extracted from the original signal:
$x_{n}-\sum _{k'=1}^{k}X_{k'}e^{j{\frac {2\pi }{N}}k'n}$
Sparse Fourier transform in the discrete setting
In 2012, Hassanieh, Indyk, Katabi, and Price[11] proposed an algorithm that takes $O(k\log n\log(n/k))$ samples and runs in $O(k\log n\log(n/k))$ time.
Sparse Fourier transform in the high dimensional setting
In 2014, Indyk and Kapralov[12] proposed an algorithm that takes $2^{O(d\log d)}k\log n$ samples and runs in nearly linear time in $n$. In 2016, Kapralov[13] proposed an algorithm that uses sublinear samples $2^{O(d^{2})}k\log n\log \log n$ and sublinear decoding time $k\log ^{O(d)}n$. In 2019, Nakos, Song, and Wang[14] introduced a new algorithm which uses nearly optimal samples $O(k\log n\log k)$ and requires nearly linear decoding time. A dimension-incremental algorithm was proposed by Potts and Volkmer[15] based on sampling along rank-1 lattices.
Sparse Fourier transform in the continuous setting
There are several works about generalizing the discrete setting into the continuous setting.[16][17]
Implementations
There are several implementations from MIT, MSU, ETH and the University of Technology Chemnitz (TUC), all freely available online.
• MSU implementations
• ETH implementations
• MIT implementations
• GitHub
• TUC implementations
Further reading
Hassanieh, Haitham (2018). The Sparse Fourier Transform: Theory and Practice. Association for Computing Machinery and Morgan & Claypool. ISBN 978-1-94748-707-9.
Price, Eric (2013). Sparse Recovery and Fourier Sampling. MIT.
References
1. Gilbert, Anna C.; Indyk, Piotr; Iwen, Mark; Schmidt, Ludwig (2014). "Recent Developments in the Sparse Fourier Transform: A compressed Fourier transform for big data" (PDF). IEEE Signal Processing Magazine. 31 (5): 91–100. Bibcode:2014ISPM...31...91G. doi:10.1109/MSP.2014.2329131. hdl:1721.1/113828. S2CID 14585685.
2. Cipra, Barry A. (2000). "The best of the 20th century: Editors name top 10 algorithms". SIAM news 33.4. {{cite journal}}: Cite journal requires |journal= (help)
3. Iwen, M. A. (2010-01-05). "Combinatorial Sublinear-Time Fourier Algorithms". Foundations of Computational Mathematics. 10 (3): 303–338. doi:10.1007/s10208-009-9057-1. S2CID 1631513.
4. Haitham Hassanieh; Piotr Indyk; Dina Katabi; Eric Price (2012). Simple and Practical Algorithm for Sparse Fourier Transform. pp. 1183–1194. doi:10.1137/1.9781611973099.93. hdl:1721.1/73474. ISBN 978-1-61197-210-8.
5. A. C. Gilbert (2002). "Near-optimal sparse fourier representations via sampling". Proceedings of the thiry-fourth annual ACM symposium on Theory of computing. S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss. pp. 152–161. doi:10.1145/509907.509933. ISBN 1581134959. S2CID 14320243.
6. A. C. Gilbert; S. Muthukrishnan; M. Strauss (21 September 2005). "Improved time bounds for near-optimal sparse Fourier representations". In Papadakis, Manos; Laine, Andrew F; Unser, Michael A (eds.). Wavelets XI. Proceedings of SPIE. Vol. 5914. pp. 59141A. Bibcode:2005SPIE.5914..398G. doi:10.1117/12.615931. S2CID 12622592.
7. Ghazi, Badih; Hassanieh, Haitham; Indyk, Piotr; Katabi, Dina; Price, Eric; Lixin Shi (2013). "Sample-optimal average-case sparse Fourier Transform in two dimensions". 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton). pp. 1258–1265. arXiv:1303.1209. doi:10.1109/Allerton.2013.6736670. ISBN 978-1-4799-3410-2. S2CID 6151728.
8. Iwen, M. A. (2010-01-05). "Combinatorial Sublinear-Time Fourier Algorithms". Foundations of Computational Mathematics. 10 (3): 303–338. doi:10.1007/s10208-009-9057-1. S2CID 1631513.
9. Mark A.Iwen (2013-01-01). "Improved approximation guarantees for sublinear-time Fourier algorithms". Applied and Computational Harmonic Analysis. 34 (1): 57–82. arXiv:1010.0014. doi:10.1016/j.acha.2012.03.007. ISSN 1063-5203. S2CID 16808450.
10. Pawar, Sameer; Ramchandran, Kannan (2013). "Computing a k-sparse n-length Discrete Fourier Transform using at most 4k samples and O(k log k) complexity". 2013 IEEE International Symposium on Information Theory. pp. 464–468. doi:10.1109/ISIT.2013.6620269. ISBN 978-1-4799-0446-4. S2CID 601496.
11. Hassanieh, Haitham; Indyk, Piotr; Katabi, Dina; Price, Eric (2012). "Nearly optimal sparse fourier transform". Proceedings of the forty-fourth annual ACM symposium on Theory of computing. STOC'12. ACM. pp. 563–578. arXiv:1201.2501. doi:10.1145/2213977.2214029. ISBN 9781450312455. S2CID 3760962.
12. Indyk, Piotr; Kapralov, Michael (2014). "Sample-optimal Fourier sampling in any constant dimension". Annual Symposium on Foundations of Computer Science. FOCS'14: 514–523. arXiv:1403.5804.
13. Kapralov, Michael (2016). "Sparse fourier transform in any constant dimension with nearly-optimal sample complexity in sublinear time". Proceedings of the forty-eighth annual ACM symposium on Theory of Computing. STOC'16. pp. 264–277. arXiv:1604.00845. doi:10.1145/2897518.2897650. ISBN 9781450341325. S2CID 11847086.
14. Nakos, Vasileios; Song, Zhao; Wang, Zhengyu (2019). "(Nearly) Sample-Optimal Sparse Fourier Transform in Any Dimension; RIPless and Filterless". Annual Symposium on Foundations of Computer Science. FOCS'19. arXiv:1909.11123.
15. Potts, Daniel; Volkmer, Toni (2016). "Sparse high-dimensional FFT based on rank-1 lattice sampling". Applied and Computational Harmonic Analysis. 41 (3): 713–748. doi:10.1016/j.acha.2015.05.002.
16. Price, Eric; Song, Zhao (2015). "A Robust Sparse Fourier Transform in the Continuous Setting". Annual Symposium on Foundations of Computer Science. FOCS'15: 583–600. arXiv:1609.00896.
17. Chen, Xue; Kane, Daniel M.; Price, Eric; Song, Zhao (2016). "Fourier-Sparse Interpolation without a Frequency Gap". Annual Symposium on Foundations of Computer Science. FOCS'16: 741–750. arXiv:1609.01361.
Sparse matrix
In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero.[1] There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse, but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense.[1] The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is sometimes referred to as the sparsity of the matrix.
Example of sparse matrix
$\left({\begin{smallmatrix}11&22&0&0&0&0&0\\0&33&44&0&0&0&0\\0&0&55&66&77&0&0\\0&0&0&0&0&88&0\\0&0&0&0&0&0&99\\\end{smallmatrix}}\right)$
The above sparse matrix contains only 9 non-zero elements, with 26 zero elements. Its sparsity is 74%, and its density is 26%.
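The figures above can be reproduced with a short Python sketch:

```python
# Count the non-zero elements of the example matrix and compute its sparsity.
matrix = [
    [11, 22, 0, 0, 0, 0, 0],
    [0, 33, 44, 0, 0, 0, 0],
    [0, 0, 55, 66, 77, 0, 0],
    [0, 0, 0, 0, 0, 88, 0],
    [0, 0, 0, 0, 0, 0, 99],
]
total = sum(len(row) for row in matrix)               # 35 elements in all
nonzero = sum(1 for row in matrix for v in row if v)  # 9 non-zero elements
sparsity = (total - nonzero) / total                  # 26/35, about 74%
density = nonzero / total                             # 9/35, about 26%
```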
Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory and numerical analysis, which typically have a low density of significant data or connections. Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations.
When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices,[2] as they are common in the machine learning field.[3] Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices as processing and memory are wasted on the zeros. Sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.
Storing a sparse matrix
A matrix is typically stored as a two-dimensional array. Each entry in the array represents an element ai,j of the matrix and is accessed by the two indices i and j. Conventionally, i is the row index, numbered from top to bottom, and j is the column index, numbered from left to right. For an m × n matrix, the amount of memory required to store the matrix in this format is proportional to m × n (disregarding the fact that the dimensions of the matrix also need to be stored).
In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously.
Formats can be divided into two groups:
• Those that support efficient modification, such as DOK (Dictionary of keys), LIL (List of lists), or COO (Coordinate list). These are typically used to construct the matrices.
• Those that support efficient access and matrix operations, such as CSR (Compressed Sparse Row) or CSC (Compressed Sparse Column).
Dictionary of keys (DOK)
DOK consists of a dictionary that maps (row, column)-pairs to the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.[4]
List of lists (LIL)
LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format good for incremental matrix construction.[5]
Coordinate list (COO)
COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. This is another format that is good for incremental matrix construction.[6]
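As a rough illustration of the construction formats (not tied to any particular library), a DOK matrix can be kept as a plain dictionary and converted to a row-major-sorted COO list of triples:

```python
# DOK: map (row, column) pairs to values; missing keys are implicitly zero.
dok = {}
dok[(0, 0)] = 5
dok[(1, 1)] = 8
dok[(2, 2)] = 3
dok[(3, 1)] = 6

# COO: the same matrix as (row, column, value) triples, sorted first by
# row index and then by column index.
coo = sorted((r, c, v) for (r, c), v in dok.items())
```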
Compressed sparse row (CSR, CRS or Yale format)
The compressed sparse row (CSR) or compressed row storage (CRS) or Yale format represents a matrix M by three (one-dimensional) arrays, that respectively contain nonzero values, the extents of rows, and column indices. It is similar to COO, but compresses the row indices, hence the name. This format allows fast row access and matrix-vector multiplications (Mx). The CSR format has been in use since at least the mid-1960s, with the first complete description appearing in 1967.[7]
The CSR format stores a sparse m × n matrix M in row form using three (one-dimensional) arrays (V, COL_INDEX, ROW_INDEX). Let NNZ denote the number of nonzero entries in M. (Note that zero-based indices shall be used here.)
• The arrays V and COL_INDEX are of length NNZ, and contain the non-zero values and the column indices of those values respectively
• COL_INDEX contains the column in which the corresponding entry V is located.
• The array ROW_INDEX is of length m + 1 and encodes the index in V and COL_INDEX where the given row starts. This is equivalent to ROW_INDEX[j] encoding the total number of nonzeros above row j. The last element is NNZ, i.e., the fictitious index in V immediately after the last valid index NNZ − 1.[8]
For example, the matrix
${\begin{pmatrix}5&0&0&0\\0&8&0&0\\0&0&3&0\\0&6&0&0\\\end{pmatrix}}$
is a 4 × 4 matrix with 4 nonzero elements, hence
V = [ 5 8 3 6 ]
COL_INDEX = [ 0 1 2 1 ]
ROW_INDEX = [ 0 1 2 3 4 ][9]
assuming a zero-indexed language.
To extract a row, we first define:
row_start = ROW_INDEX[row]
row_end = ROW_INDEX[row + 1]
Then we take slices from V and COL_INDEX starting at row_start and ending at row_end.
To extract the row 1 (the second row) of this matrix we set row_start=1 and row_end=2. Then we make the slices V[1:2] = [8] and COL_INDEX[1:2] = [1]. We now know that in row 1 we have one element at column 1 with value 8.
In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only when NNZ < (m (n − 1) − 1) / 2.
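The row-extraction recipe above can be sketched in Python for the 4 × 4 example (the `csr_row` helper is an illustrative name, not a library routine):

```python
# CSR arrays for the 4 x 4 example matrix.
V = [5, 8, 3, 6]
COL_INDEX = [0, 1, 2, 1]
ROW_INDEX = [0, 1, 2, 3, 4]

def csr_row(row):
    """Return the (values, column indices) of one row of the CSR matrix."""
    row_start = ROW_INDEX[row]
    row_end = ROW_INDEX[row + 1]
    return V[row_start:row_end], COL_INDEX[row_start:row_end]
```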
Another example, the matrix
${\begin{pmatrix}10&20&0&0&0&0\\0&30&0&40&0&0\\0&0&50&60&70&0\\0&0&0&0&0&80\\\end{pmatrix}}$
is a 4 × 6 matrix (24 entries) with 8 nonzero elements, so
V = [ 10 20 30 40 50 60 70 80 ]
COL_INDEX = [ 0 1 1 3 2 3 4 5 ]
ROW_INDEX = [ 0 2 4 7 8 ]
The whole is stored as 21 entries: 8 in V, 8 in COL_INDEX, and 5 in ROW_INDEX.
• ROW_INDEX splits the array V into rows: (10, 20) (30, 40) (50, 60, 70) (80), indicating the index of V (and COL_INDEX) where each row starts and ends;
• COL_INDEX aligns values in columns: (10, 20, ...) (0, 30, 0, 40, ...)(0, 0, 50, 60, 70, 0) (0, 0, 0, 0, 0, 80).
Note that in this format, the first value of ROW_INDEX is always zero and the last is always NNZ, so they are in some sense redundant (although in programming languages where the array length needs to be explicitly stored, NNZ would not be redundant). Nonetheless, this does avoid the need to handle an exceptional case when computing the length of each row, as it guarantees the formula ROW_INDEX[i + 1] − ROW_INDEX[i] works for any row i. Moreover, the memory cost of this redundant storage is likely insignificant for a sufficiently large matrix.
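A minimal dense-to-CSR converter (an illustrative sketch, not a library routine) reproduces the three arrays of the 4 × 6 example:

```python
def dense_to_csr(matrix):
    """Convert a dense row-major matrix into (V, COL_INDEX, ROW_INDEX)."""
    V, COL_INDEX, ROW_INDEX = [], [], [0]
    for row in matrix:
        for j, value in enumerate(row):
            if value != 0:
                V.append(value)
                COL_INDEX.append(j)
        ROW_INDEX.append(len(V))   # each row ends where V currently ends
    return V, COL_INDEX, ROW_INDEX

M = [
    [10, 20, 0, 0, 0, 0],
    [0, 30, 0, 40, 0, 0],
    [0, 0, 50, 60, 70, 0],
    [0, 0, 0, 0, 0, 80],
]
V, COL_INDEX, ROW_INDEX = dense_to_csr(M)
```

Note that `ROW_INDEX` starts at 0 and ends at NNZ, as the text describes, so `ROW_INDEX[i + 1] - ROW_INDEX[i]` is the length of row i without any special case.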
The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format combines ROW_INDEX and COL_INDEX into a single array and handles the diagonal of the matrix separately.[10]
For logical adjacency matrices, the data array can be omitted, as the existence of an entry in the row array is sufficient to model a binary adjacency relation.
It is likely known as the Yale format because it was proposed in the 1977 Yale Sparse Matrix Package report from the Department of Computer Science at Yale University.[11]
Compressed sparse column (CSC or CCS)
CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. For example, CSC is (val, row_ind, col_ptr), where val is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind is the row indices corresponding to the values; and, col_ptr is the list of val indexes where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, column slicing, and matrix-vector products. See scipy.sparse.csc_matrix. This is the traditional format for specifying a sparse matrix in MATLAB (via the sparse function).
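For illustration, a minimal dense-to-CSC converter (a sketch, not the MATLAB or SciPy implementation) applied to the 4 × 4 matrix from the CSR section yields the column-major arrays described above:

```python
def dense_to_csc(matrix):
    """Convert a dense matrix into (val, row_ind, col_ptr), scanning columns."""
    rows, cols = len(matrix), len(matrix[0])
    val, row_ind, col_ptr = [], [], [0]
    for j in range(cols):              # column by column, top to bottom
        for i in range(rows):
            if matrix[i][j] != 0:
                val.append(matrix[i][j])
                row_ind.append(i)
        col_ptr.append(len(val))       # each column ends where val ends
    return val, row_ind, col_ptr

M = [
    [5, 0, 0, 0],
    [0, 8, 0, 0],
    [0, 0, 3, 0],
    [0, 6, 0, 0],
]
val, row_ind, col_ptr = dense_to_csc(M)
```

The repeated final entry of `col_ptr` shows how an empty column is encoded: its start and end indices coincide.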
Special structure
Banded
An important special type of sparse matrix is the band matrix, defined as follows. The lower bandwidth of a matrix A is the smallest number p such that the entry ai,j vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest number p such that ai,j = 0 whenever i < j − p (Golub & Van Loan 1996, §1.2.1). For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1. As another example, the following sparse matrix has lower and upper bandwidth both equal to 3. Notice that zeros are represented with dots for clarity.
${\begin{bmatrix}X&X&X&\cdot &\cdot &\cdot &\cdot &\\X&X&\cdot &X&X&\cdot &\cdot &\\X&\cdot &X&\cdot &X&\cdot &\cdot &\\\cdot &X&\cdot &X&\cdot &X&\cdot &\\\cdot &X&X&\cdot &X&X&X&\\\cdot &\cdot &\cdot &X&X&X&\cdot &\\\cdot &\cdot &\cdot &\cdot &X&\cdot &X&\\\end{bmatrix}}$
Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices.
By rearranging the rows and columns of a matrix A it may be possible to obtain a matrix A′ with a lower bandwidth. A number of algorithms are designed for bandwidth minimization.
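The bandwidth definitions above can be computed directly; the following Python sketch is illustrative:

```python
def bandwidths(a):
    """Return (lower, upper) bandwidth: the smallest p such that a[i][j] == 0
    whenever i > j + p (lower) or i < j - p (upper)."""
    lower = upper = 0
    for i, row in enumerate(a):
        for j, v in enumerate(row):
            if v != 0:
                lower = max(lower, i - j)
                upper = max(upper, j - i)
    return lower, upper

# A tridiagonal matrix has lower and upper bandwidth 1.
tridiagonal = [
    [1, 2, 0, 0],
    [3, 4, 5, 0],
    [0, 6, 7, 8],
    [0, 0, 9, 1],
]
```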
Diagonal
A very efficient structure for an extreme case of band matrices, the diagonal matrix, is to store just the entries in the main diagonal as a one-dimensional array, so a diagonal n × n matrix requires only n entries.
Symmetric
A symmetric sparse matrix arises as the adjacency matrix of an undirected graph; it can be stored efficiently as an adjacency list.
Block diagonal
A block-diagonal matrix consists of sub-matrices along its diagonal blocks. A block-diagonal matrix A has the form
$\mathbf {A} ={\begin{bmatrix}\mathbf {A} _{1}&0&\cdots &0\\0&\mathbf {A} _{2}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &\mathbf {A} _{n}\end{bmatrix}},$
where Ak is a square matrix for all k = 1, ..., n.
Reducing fill-in
The fill-in of a matrix are those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.
There are other methods than the Cholesky decomposition in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can be different for different methods. And symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst case fill-in.
Solving sparse matrix equations
Both iterative and direct methods exist for sparse matrix solving.
Iterative methods, such as the conjugate gradient method and GMRES, utilize fast computations of matrix-vector products $Ax_{i}$, where the matrix $A$ is sparse. The use of preconditioners can significantly accelerate convergence of such iterative methods.
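As a rough sketch of why sparsity helps iterative methods, the following Python/NumPy example implements a plain (unpreconditioned) conjugate gradient solver whose only access to the matrix is a matrix-vector product over stored (row, column, value) triples; the names and the COO-style storage are illustrative choices:

```python
import numpy as np

def sparse_matvec(n, entries, x):
    """Matrix-vector product using only the stored (i, j, value) triples."""
    y = np.zeros(n)
    for i, j, v in entries:
        y[i] += v * x[j]
    return y

def conjugate_gradient(n, entries, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite sparse A."""
    x = np.zeros(n)
    r = b - sparse_matvec(n, entries, x)   # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = sparse_matvec(n, entries, p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Each iteration costs O(NNZ) rather than O(n²), which is the point of exploiting sparsity in such solvers.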
Software
Many software libraries support sparse matrices, and provide solvers for sparse matrix equations. The following are open-source:
• SuiteSparse, a suite of sparse matrix algorithms, geared toward the direct solution of sparse linear systems.
• PETSc, a large C library, containing many different matrix solvers for a variety of matrix storage formats.
• Trilinos, a large C++ library, with sub-libraries dedicated to the storage of dense and sparse matrices and solution of corresponding linear systems.
• Eigen3 is a C++ library that contains several sparse matrix solvers. However, none of them are parallelized.
• MUMPS (MUltifrontal Massively Parallel sparse direct Solver), written in Fortran90, is a frontal solver.
• deal.II, a finite element library that also has a sub-library for sparse linear systems and their solution.
• DUNE, another finite element library that also has a sub-library for sparse linear systems and their solution.
• PaStix.
• SuperLU.
• Armadillo provides a user-friendly C++ wrapper for BLAS and LAPACK.
• SciPy provides support for several sparse matrix formats, linear algebra, and solvers.
• spam (SPArse Matrix), an R and Python package for sparse matrices.
• Wolfram Language, tools for handling sparse arrays.
• ALGLIB, a C++ and C# library with sparse linear algebra support.
• ARPACK, a Fortran 77 library for sparse matrix diagonalization and manipulation, using the Arnoldi algorithm.
• SPARSE, an older reference NIST package for (real or complex) sparse matrix diagonalization.
• SLEPc, a library for the solution of large-scale linear systems and sparse matrices.
• Sympiler, a domain-specific code generator and library for solving linear systems and quadratic programming problems.
• scikit-learn, a Python library for machine learning that provides support for sparse matrices and solvers.
• sprs, which implements sparse matrix data structures and linear algebra algorithms in pure Rust.
• Basic Matrix Library (bml), which supports several sparse matrix formats and linear algebra algorithms, with bindings for C, C++, and Fortran.
• SPARSKIT, a basic tool-kit for sparse matrix computations from the University of Minnesota.
History
The term sparse matrix was possibly coined by Harry Markowitz, who initiated some pioneering work but then left the field.[12]
See also
• Matrix representation
• Pareto principle
• Ragged matrix
• Single-entry matrix
• Skyline matrix
• Sparse graph code
• Sparse file
• Harwell-Boeing file format
• Matrix Market exchange formats
Notes
1. Yan, Di; Wu, Tao; Liu, Ying; Gao, Yang (2017). "An efficient sparse-dense matrix multiplication on a multicore system". 2017 IEEE 17th International Conference on Communication Technology (ICCT). IEEE. pp. 1880–1883. doi:10.1109/icct.2017.8359956. ISBN 978-1-5090-3944-9. The computation kernel of DNN is large sparse-dense matrix multiplication. In the field of numerical analysis, a sparse matrix is a matrix populated primarily with zeros as elements of the table. By contrast, if the number of non-zero elements in a matrix is relatively large, then it is commonly considered a dense matrix. The fraction of zero elements (non-zero elements) in a matrix is called the sparsity (density). Operations using standard dense-matrix structures and algorithms are relatively slow and consume large amounts of memory when applied to large sparse matrices.
2. "Cerebras Systems Unveils the Industry's First Trillion Transistor Chip". www.businesswire.com. 2019-08-19. Retrieved 2019-12-02. The WSE contains 400,000 AI-optimized compute cores. Called SLAC™ for Sparse Linear Algebra Cores, the compute cores are flexible, programmable, and optimized for the sparse linear algebra that underpins all neural network computation
3. "Argonne National Laboratory Deploys Cerebras CS-1, the World's Fastest Artificial Intelligence Computer | Argonne National Laboratory". www.anl.gov (Press release). Retrieved 2019-12-02. The WSE is the largest chip ever made at 46,225 square millimeters in area, it is 56.7 times larger than the largest graphics processing unit. It contains 78 times more AI optimized compute cores, 3,000 times more high speed, on-chip memory, 10,000 times more memory bandwidth, and 33,000 times more communication bandwidth.
4. See scipy.sparse.dok_matrix
5. See scipy.sparse.lil_matrix
6. See scipy.sparse.coo_matrix
7. Buluç, Aydın; Fineman, Jeremy T.; Frigo, Matteo; Gilbert, John R.; Leiserson, Charles E. (2009). Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks (PDF). ACM Symp. on Parallelism in Algorithms and Architectures. CiteSeerX 10.1.1.211.5256.
8. Saad, Yousef (2003). Iterative methods for sparse linear systems. SIAM.
10. Bank, Randolph E.; Douglas, Craig C. (1993), "Sparse Matrix Multiplication Package (SMMP)" (PDF), Advances in Computational Mathematics, 1: 127–137, doi:10.1007/BF02070824, S2CID 6412241
11. Eisenstat, S. C.; Gursky, M. C.; Schultz, M. H.; Sherman, A. H. (April 1977). "Yale Sparse Matrix Package" (PDF). Archived (PDF) from the original on April 6, 2019. Retrieved 6 April 2019.
12. Oral history interview with Harry M. Markowitz, pp. 9, 10.
References
• Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins. ISBN 978-0-8018-5414-9.
• Stoer, Josef; Bulirsch, Roland (2002). Introduction to Numerical Analysis (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-95452-3.
• Tewarson, Reginald P. (May 1973). Sparse Matrices (Part of the Mathematics in Science & Engineering series). Academic Press Inc. (This book, by a professor at the State University of New York at Stony Brook, was the first book exclusively dedicated to sparse matrices. Graduate courses using it as a textbook were offered at that university in the early 1980s.)
• Bank, Randolph E.; Douglas, Craig C. "Sparse Matrix Multiplication Package" (PDF).
• Pissanetzky, Sergio (1984). Sparse Matrix Technology. Academic Press. ISBN 9780125575805.
• Snay, Richard A. (1976). "Reducing the profile of sparse symmetric matrices". Bulletin Géodésique. 50 (4): 341–352. Bibcode:1976BGeod..50..341S. doi:10.1007/BF02521587. hdl:2027/uc1.31210024848523. S2CID 123079384. Also NOAA Technical Memorandum NOS NGS-4, National Geodetic Survey, Rockville, MD.[1]
• Scott, Jennifer; Tůma, Miroslav (2023). Algorithms for Sparse Linear Systems. Birkhäuser. doi:10.1007/978-3-031-25820-6 (open access book).
Further reading
• Gibbs, Norman E.; Poole, William G.; Stockmeyer, Paul K. (1976). "A comparison of several bandwidth and profile reduction algorithms". ACM Transactions on Mathematical Software. 2 (4): 322–330. doi:10.1145/355705.355707. S2CID 14494429.
• Gilbert, John R.; Moler, Cleve; Schreiber, Robert (1992). "Sparse matrices in MATLAB: Design and Implementation". SIAM Journal on Matrix Analysis and Applications. 13 (1): 333–356. CiteSeerX 10.1.1.470.1054. doi:10.1137/0613024.
• Sparse Matrix Algorithms Research at Texas A&M University.
• SuiteSparse Matrix Collection
• SMALL project, an EU-funded project on sparse models, algorithms and dictionary learning for large-scale data.
Antimagic square
An antimagic square of order n is an arrangement of the numbers 1 to n² in a square, such that the sums of the n rows, the n columns and the two diagonals form a sequence of 2n + 2 consecutive integers. The smallest antimagic squares have order 4.[1] Antimagic squares contrast with magic squares, where each row, column, and diagonal sum must have the same value.[2]
Examples
Order 4 antimagic squares
             ↙ 34
 2 15  5 13  → 35
16  3  7 12  → 38
 9  8 14  1  → 32
 6  4 11 10  → 31
 ↓  ↓  ↓  ↓   ↘
33 30 37 36   29

             ↙ 32
 1 13  3 12  → 29
15  9  4 10  → 38
 7  2 16  8  → 33
14  6 11  5  → 36
 ↓  ↓  ↓  ↓   ↘
37 30 34 35   31
In both of these antimagic squares of order 4, the rows, columns and diagonals sum to ten different numbers in the range 29–38.[2]
Order 5 antimagic squares
5 8 20 9 22
19 23 13 10 2
21 6 3 15 25
11 18 7 24 1
12 14 17 4 16
21 18 6 17 4
7 3 13 16 24
5 20 23 11 1
15 8 19 2 25
14 12 9 22 10
In the antimagic square of order 5 on the left, the rows, columns and diagonals sum up to numbers between 60 and 71.[2] In the antimagic square on the right, the rows, columns and diagonals add up to numbers in the range 59–70.[1]
Open problems
The following questions about antimagic squares have not been solved.
• How many antimagic squares of a given order exist?
• Do antimagic squares exist for all orders greater than 3?
• Is there a simple proof that no antimagic square of order 3 exists?
Generalizations
A sparse antimagic square (SAM) is a square matrix of size n by n of nonnegative integers whose nonzero entries are the consecutive integers $1,\ldots ,m$ for some $m\leq n^{2}$, and whose row-sums and column-sums constitute a set of consecutive integers.[3] If the diagonals are included in the set of consecutive integers, the array is known as a sparse totally anti-magic square (STAM). Note that a STAM is not necessarily a SAM, and vice versa.
A filling of the n × n square with the numbers 1 to n², such that the rows, columns, and diagonals all sum to different values, has been called a heterosquare.[4] (Thus, they are the relaxation in which no particular values are required for the row, column, and diagonal sums.) There are no heterosquares of order 2, but heterosquares exist for any order n ≥ 3: if n is odd, filling the square in a spiral pattern will produce a heterosquare,[4] and if n is even, a heterosquare results from writing the numbers 1 to n² in order, then exchanging 1 and 2. It is suspected that there are exactly 3120 essentially different heterosquares of order 3.[5]
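The even-order construction just described can be checked directly (a short illustrative script, not from the cited sources):

```python
def heterosquare_even(n):
    """For even n >= 4: write 1..n*n in order, then swap 1 and 2."""
    assert n % 2 == 0 and n >= 4
    g = [[r * n + c + 1 for c in range(n)] for r in range(n)]
    g[0][0], g[0][1] = g[0][1], g[0][0]   # exchange 1 and 2
    return g

def all_sums(g):
    n = len(g)
    sums = [sum(row) for row in g]                                   # rows
    sums += [sum(g[r][c] for r in range(n)) for c in range(n)]       # columns
    sums.append(sum(g[i][i] for i in range(n)))                      # main diagonal
    sums.append(sum(g[i][n - 1 - i] for i in range(n)))              # antidiagonal
    return sums

s = all_sums(heterosquare_even(4))
print(len(s) == len(set(s)))  # True: all 2n + 2 = 10 sums are distinct
```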
See also
• Magic square
• J. A. Lindon
References
1. Weisstein, Eric W. "Antimagic Square". mathworld.wolfram.com. Retrieved 2016-12-03.
2. "Anti-magic Squares". www.magic-squares.net. Retrieved 2016-12-03.
3. Gray, I. D.; MacDougall, J.A. (2006). "Sparse anti-magic squares and vertex-magic labelings of bipartite graphs". Discrete Mathematics. 306 (22): 2878–2892. doi:10.1016/j.disc.2006.04.032.
4. Weisstein, Eric W. "Heterosquare". MathWorld.
5. Peter Bartsch's Heterosquares at magic-squares.net
Magic polygons
Types
• Magic circle
• Magic hexagon
• Magic hexagram
• Magic square
• Magic star
• Magic triangle
Related shapes
• Alphamagic square
• Antimagic square
• Geomagic square
• Heterosquare
• Pandiagonal magic square
• Most-perfect magic square
Higher dimensional shapes
• Magic cube
• classes
• Magic hypercube
• Magic hyperbeam
Classification
• Associative magic square
• Pandiagonal magic square
• Multimagic square
Related concepts
• Latin square
• Word square
• Number Scrabble
• Eight queens puzzle
• Magic constant
• Magic graph
• Magic series
External links
• Weisstein, Eric W. "Antimagic Square". MathWorld.
Dense graph
In mathematics, a dense graph is a graph in which the number of edges is close to the maximal number of edges (where every pair of vertices is connected by one edge). The opposite, a graph with only a few edges, is a sparse graph. The distinction between dense and sparse graphs is not sharply defined, and the appropriate notion of density often depends on the context of the problem.
The graph density of simple graphs is defined to be the ratio of the number of edges |E| to the maximum possible number of edges.
For undirected simple graphs, the graph density is:
$D={\frac {|E|}{\binom {|V|}{2}}}={\frac {2|E|}{|V|(|V|-1)}}$
For directed simple graphs, the maximum possible number of edges is twice that of undirected graphs (as each edge has two possible directions), so the density is:
$D={\frac {|E|}{2{\binom {|V|}{2}}}}={\frac {|E|}{|V|(|V|-1)}}$
where E is the number of edges and V is the number of vertices in the graph. The maximum number of edges for an undirected graph is ${\binom {|V|}{2}}={\frac {|V|(|V|-1)}{2}}$, so the maximal density is 1 (for complete graphs) and the minimal density is 0 (Coleman & Moré 1983).
For families of graphs of increasing size, one often calls them sparse if $D\rightarrow 0$ as $|V|\rightarrow \infty $. Sometimes, in computer science, a more restrictive definition of sparse is used like $|E|=O(|V|\log |V|)$ or even $|E|=O(|V|)$.
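These definitions translate directly into a small helper (illustrative names):

```python
def graph_density(num_vertices, num_edges, directed=False):
    """|E| divided by the maximum possible number of edges of a simple graph."""
    max_edges = num_vertices * (num_vertices - 1)
    if not directed:
        max_edges //= 2
    return num_edges / max_edges

print(graph_density(5, 10))                 # 1.0  (the complete graph K5)
print(graph_density(5, 4))                  # 0.4  (a tree on 5 vertices)
print(graph_density(5, 10, directed=True))  # 0.5
```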
Upper density
Upper density is an extension of the concept of graph density defined above from finite graphs to infinite graphs. Intuitively, an infinite graph has arbitrarily large finite subgraphs with any density less than its upper density, and does not have arbitrarily large finite subgraphs with density greater than its upper density. Formally, the upper density of a graph G is the infimum of the values α such that the finite subgraphs of G with density α have a bounded number of vertices. It can be shown using the Erdős–Stone theorem that the upper density can only be 1 or one of the superparticular ratios 0, 1/2, 2/3, 3/4, 4/5, …, n/(n + 1), … (see, e.g., Diestel, edition 5, p. 189).
Sparse and tight graphs
Lee & Streinu (2008) and Streinu & Theran (2009) define a graph as being (k, l)-sparse if every nonempty subgraph with n vertices has at most kn − l edges, and (k, l)-tight if it is (k, l)-sparse and has exactly kn − l edges. Thus trees are exactly the (1,1)-tight graphs, forests are exactly the (1,1)-sparse graphs, and graphs with arboricity k are exactly the (k,k)-sparse graphs. Pseudoforests are exactly the (1,0)-sparse graphs, and the Laman graphs arising in rigidity theory are exactly the (2,3)-tight graphs.
Other graph families not characterized by their sparsity can also be described in this way. For instance the facts that any planar graph with n vertices has at most 3n – 6 edges (except for graphs with fewer than 3 vertices), and that any subgraph of a planar graph is planar, together imply that the planar graphs are (3,6)-sparse. However, not every (3,6)-sparse graph is planar. Similarly, outerplanar graphs are (2,3)-sparse and planar bipartite graphs are (2,4)-sparse.
Streinu and Theran show that testing (k,l)-sparsity may be performed in polynomial time when k and l are integers and 0 ≤ l < 2k.
For a graph family, the existence of k and l such that the graphs in the family are all (k,l)-sparse is equivalent to the graphs in the family having bounded degeneracy or having bounded arboricity. More precisely, it follows from a result of Nash-Williams (1964) that the graphs of arboricity at most a are exactly the (a,a)-sparse graphs. Similarly, the graphs of degeneracy at most d are exactly the ((d + 1)/2, 1)-sparse graphs.
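For tiny graphs, (k, l)-sparsity can be checked straight from the definition by enumerating all vertex subsets (an illustrative brute-force sketch, exponential in the number of vertices; the pebble-game algorithms cited above do this in polynomial time):

```python
from itertools import combinations

def is_kl_sparse(num_vertices, edges, k, l):
    """Every nonempty vertex subset S may induce at most k*|S| - l edges."""
    for size in range(1, num_vertices + 1):
        for S in combinations(range(num_vertices), size):
            s = set(S)
            m = sum(1 for u, v in edges if u in s and v in s)
            if m > k * size - l:
                return False
    return True

def is_kl_tight(num_vertices, edges, k, l):
    return is_kl_sparse(num_vertices, edges, k, l) and len(edges) == k * num_vertices - l

path = [(0, 1), (1, 2), (2, 3)]        # a tree: (1,1)-tight
triangle = [(0, 1), (1, 2), (0, 2)]    # contains a cycle: not (1,1)-sparse
print(is_kl_tight(4, path, 1, 1), is_kl_sparse(3, triangle, 1, 1))  # True False
```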
Sparse and dense classes of graphs
Nešetřil & Ossona de Mendez (2010) considered that the sparsity/density dichotomy makes it necessary to consider infinite graph classes instead of single graph instances. They defined somewhere dense graph classes as those classes of graphs for which there exists a threshold t such that every complete graph appears as a t-subdivision in a subgraph of a graph in the class. To the contrary, if such a threshold does not exist, the class is nowhere dense. Properties of the nowhere dense vs somewhere dense dichotomy are discussed in Nešetřil & Ossona de Mendez (2012).
The classes of graphs with bounded degeneracy and of nowhere dense graphs are both included in the biclique-free graphs, graph families that exclude some complete bipartite graph as a subgraph (Telle & Villanger 2012).
See also
• Bounded expansion
• Dense subgraph
References
• Paul E. Black, Sparse graph, from Dictionary of Algorithms and Data Structures, Paul E. Black (ed.), NIST. Retrieved on 29 September 2005.
• Coleman, Thomas F.; Moré, Jorge J. (1983), "Estimation of sparse Jacobian matrices and graph coloring Problems", SIAM Journal on Numerical Analysis, 20 (1): 187–209, doi:10.1137/0720013.
• Diestel, Reinhard (2005), Graph Theory, Graduate Texts in Mathematics, Springer-Verlag, ISBN 3-540-26183-4, OCLC 181535575.
• Lee, Audrey; Streinu, Ileana (2008), "Pebble game algorithms and sparse graphs", Discrete Mathematics, 308 (8): 1425–1437, arXiv:math/0702129, doi:10.1016/j.disc.2007.07.104, MR 2392060.
• Nash-Williams, C. St. J. A. (1964), "Decomposition of finite graphs into forests", Journal of the London Mathematical Society, 39 (1): 12, doi:10.1112/jlms/s1-39.1.12, MR 0161333
• Preiss, Bruno R. (1998), Data Structures and Algorithms with Object-Oriented Design Patterns in C++, John Wiley & Sons, ISBN 0-471-24134-2.
• Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2010), "From Sparse Graphs to Nowhere Dense Structures: Decompositions, Independence, Dualities and Limits", European Congress of Mathematics, European Mathematical Society, pp. 135–165
• Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2012), Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics, vol. 28, Heidelberg: Springer, doi:10.1007/978-3-642-27875-4, ISBN 978-3-642-27874-7, MR 2920058.
• Streinu, I.; Theran, L. (2009), "Sparse hypergraphs and pebble game algorithms", European Journal of Combinatorics, 30 (8): 1944–1964, arXiv:math/0703921, doi:10.1016/j.ejc.2008.12.018.
• Telle, Jan Arne; Villanger, Yngve (2012), "FPT algorithms for domination in biclique-free graphs", in Epstein, Leah; Ferragina, Paolo (eds.), Algorithms – ESA 2012: 20th Annual European Symposium, Ljubljana, Slovenia, September 10–12, 2012, Proceedings, Lecture Notes in Computer Science, vol. 7501, Springer, pp. 802–812, doi:10.1007/978-3-642-33090-2_69.
Sparse grid
Sparse grids are numerical techniques to represent, integrate or interpolate high dimensional functions. They were originally developed by the Russian mathematician Sergey A. Smolyak, a student of Lazar Lyusternik, and are based on a sparse tensor product construction. Computer algorithms for efficient implementations of such grids were later developed by Michael Griebel and Christoph Zenger.
Curse of dimensionality
The standard way of representing multidimensional functions is with tensor or full grids. The number of basis functions or nodes (grid points) that have to be stored and processed depends exponentially on the number of dimensions.
The curse of dimensionality is expressed in the order of the integration error that is made by a quadrature of level $l$, with $N_{l}$ points. The function has regularity $r$, i.e. is $r$ times differentiable. The number of dimensions is $d$.
$|E_{l}|=O(N_{l}^{-{\frac {r}{d}}})$
Smolyak's quadrature rule
Smolyak found a computationally more efficient method of integrating multidimensional functions based on a univariate quadrature rule $Q^{(1)}$. The $d$-dimensional Smolyak integral $Q^{(d)}$ of a function $f$ can be written as a recursion formula with the tensor product.
$Q_{l}^{(d)}f=\left(\sum _{i=1}^{l}\left(Q_{i}^{(1)}-Q_{i-1}^{(1)}\right)\otimes Q_{l-i+1}^{(d-1)}\right)f$
The index to $Q$ is the level of the discretization. If a one-dimensional integration on level $i$ is computed by the evaluation of $O(2^{i})$ points, the error estimate for a function of regularity $r$ will be $|E_{l}|=O\left(N_{l}^{-r}\left(\log N_{l}\right)^{(d-1)(r+1)}\right).$
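The recursion above can be sketched for d = 2 using a univariate composite trapezoidal rule and the convention Q_0 = 0 (the function names and the choice of quadrature rule are illustrative, not Smolyak's original construction):

```python
def trapezoid(g, level):
    """Univariate composite trapezoidal rule on [0, 1] with 2**level + 1 points."""
    if level == 0:
        return 0.0                      # convention Q_0 = 0 for the differences
    n = 2 ** level
    h = 1.0 / n
    s = 0.5 * (g(0.0) + g(1.0)) + sum(g(i * h) for i in range(1, n))
    return h * s

def smolyak_2d(f, l):
    """Q_l^(2) f = sum_{i=1}^{l} (Q_i - Q_{i-1}) (x) Q_{l-i+1}^{(1)}, applied to f."""
    total = 0.0
    for i in range(1, l + 1):
        # Integrate out y at level l - i + 1, leaving a function of x ...
        g = lambda x, lev=l - i + 1: trapezoid(lambda y: f(x, y), lev)
        # ... then apply the univariate difference Q_i - Q_{i-1} in x.
        total += trapezoid(g, i) - trapezoid(g, i - 1)
    return total

# f(x, y) = x * y is integrated exactly (the trapezoidal rule is exact for linear factors).
print(smolyak_2d(lambda x, y: x * y, 3))  # 0.25
```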
Further reading
• Brumm, J.; Scheidegger, S. (2017). "Using Adaptive Sparse Grids to Solve High-Dimensional Dynamic Models" (PDF). Econometrica. 85 (5): 1575–1612. doi:10.3982/ECTA12216.
• Garcke, Jochen (2012). "Sparse Grids in a Nutshell" (PDF). In Garcke, Jochen; Griebel, Michael (eds.). Sparse Grids and Applications. Springer. pp. 57–80. ISBN 978-3-642-31702-6.
• Zenger, Christoph (1991). "Sparse Grids" (PDF). In Hackbusch, Wolfgang (ed.). Parallel Algorithms for Partial Differential Equations. Vieweg. pp. 241–251. ISBN 3-528-07631-3.
External links
• A memory efficient data structure for regular sparse grids
• Finite difference scheme on sparse grids
• Visualization on sparse grids
• Datamining on sparse grids, J.Garcke, M.Griebel (pdf)
Sparse identification of non-linear dynamics
Sparse identification of nonlinear dynamics (SINDy) is a data-driven algorithm for obtaining dynamical systems from data.[1] Given a series of snapshots of a dynamical system and its corresponding time derivatives, SINDy performs a sparsity-promoting regression (such as LASSO) on a library of nonlinear candidate functions of the snapshots against the derivatives to find the governing equations. This procedure relies on the assumption that most physical systems only have a few dominant terms which dictate the dynamics, given an appropriately selected coordinate system and quality training data.[2] It has been applied to identify the dynamics of fluids, based on proper orthogonal decomposition, as well as other complex dynamical systems, such as biological networks.[3]
Mathematical Overview
First, consider a dynamical system of the form
${\dot {\textbf {x}}}={\frac {d}{dt}}{\textbf {x}}(t)={\textbf {f}}({\textbf {x}}(t)),$
where ${\textbf {x}}(t)\in \mathbb {R} ^{n}$ is a state vector (snapshot) of the system at time $t$ and the function ${\textbf {f}}({\textbf {x}}(t))$ defines the equations of motion and constraints of the system. The time derivative may be either prescribed or numerically approximated from the snapshots.
With ${\textbf {x}}$ and ${\dot {\textbf {x}}}$ sampled at $m$ equidistant points in time ($t_{1},t_{2},\cdots ,t_{m}$), these can be arranged into matrices of the form
${\textbf {X}}={\begin{bmatrix}{\textbf {x}}^{T}(t_{1})\\{\textbf {x}}^{T}(t_{2})\\\vdots \\{\textbf {x}}^{T}(t_{m})\end{bmatrix}}={\begin{bmatrix}x_{1}(t_{1})&x_{2}(t_{1})&\cdots &x_{n}(t_{1})\\x_{1}(t_{2})&x_{2}(t_{2})&\cdots &x_{n}(t_{2})\\\vdots &\vdots &\ddots &\vdots \\x_{1}(t_{m})&x_{2}(t_{m})&\cdots &x_{n}(t_{m})\end{bmatrix}},$
and similarly for ${\dot {\textbf {X}}}$.
Next, a library ${\bf {{\Theta }({\textbf {X}})}}$ of nonlinear candidate functions of the columns of ${\textbf {X}}$ is constructed, which may be constant, polynomial, or more exotic functions (like trigonometric and rational terms, and so on):
${\Theta }({\textbf {X}})={\begin{bmatrix}1&{\textbf {X}}&{\textbf {X}}^{2}&{\textbf {X}}^{3}&\cdots &\sin({\textbf {X}})&\cos({\textbf {X}})&\cdots \end{bmatrix}}$
The number of possible model structures from this library is combinatorially high. ${\textbf {f}}({\textbf {x}}(t))$ is then approximated by the product of ${\Theta }({\textbf {X}})$ and a matrix of coefficients ${\Xi }=\left[{\xi }_{1}\;{\xi }_{2}\;\cdots \;{\xi }_{n}\right]$ whose columns determine the active terms in ${\textbf {f}}({\textbf {x}}(t))$:
${\dot {\textbf {X}}}={\Theta }({\textbf {X}})\,{\Xi }$
Because only a few terms are expected to be active at each point in time, an assumption is made that ${\textbf {f}}({\textbf {x}}(t))$ admits a sparse representation in ${\Theta }({\textbf {X}})$. Finding a sparse ${\Xi }$ that optimally embeds ${\dot {\textbf {X}}}$ then becomes an optimization problem. In other words, a parsimonious model is obtained by performing a least squares regression on the system above with sparsity-promoting ($L_{1}$) regularization:
${\xi }_{k}={\underset {{\xi }'_{k}}{\arg \min }}\left\|{\dot {\textbf {X}}}_{k}-{\Theta }({\textbf {X}}){\xi }'_{k}\right\|_{2}+\lambda \left\|{\xi }'_{k}\right\|_{1},$
where $\lambda $ is a regularization parameter. Finally, the sparse set of ${\bf {{\xi }_{k}}}$ can be used to reconstruct the dynamical system:
${\dot {x}}_{k}={\Theta }({\textbf {x}}){\xi }_{k}$
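The procedure can be sketched on a toy one-dimensional system dx/dt = −2x, using sequentially thresholded least squares as the sparsity-promoting regression (the data, candidate library, and threshold here are illustrative):

```python
import numpy as np

# Snapshots of dx/dt = -2x, with the derivative prescribed analytically.
t = np.linspace(0.0, 2.0, 201)
x = 3.0 * np.exp(-2.0 * t)
x_dot = -2.0 * x

# Library Theta(X) of candidate functions: [1, x, x^2, x^3].
Theta = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])

# Sequentially thresholded least squares: fit, drop small coefficients, refit.
xi = np.linalg.lstsq(Theta, x_dot, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1          # coefficients below this threshold are dropped
    xi[small] = 0.0
    active = ~small
    if active.any():
        xi[active] = np.linalg.lstsq(Theta[:, active], x_dot, rcond=None)[0]

print(np.round(xi, 6))  # only the x term survives, with coefficient -2
```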
References
1. Brunton, Steven L.; Kutz, J. Nathan (2022). Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. doi:10.1017/9781009089517. ISBN 9781009089517. Retrieved 2022-10-25.
2. Brunton, Steven L.; Proctor, Joshua L.; Kutz, J. Nathan (2016-04-12). "Discovering governing equations from data by sparse identification of nonlinear dynamical systems". Proceedings of the National Academy of Sciences. 113 (15): 3932–3937. arXiv:1509.03580. Bibcode:2016PNAS..113.3932B. doi:10.1073/pnas.1517384113. ISSN 0027-8424. PMC 4839439. PMID 27035946.
3. Mangan, Niall M.; Brunton, Steven L.; Proctor, Joshua L.; Kutz, J. Nathan (2016-05-26). "Inferring biological networks by sparse identification of nonlinear dynamics". arXiv:1605.08368 [math.DS].
Sparse matrix–vector multiplication
Sparse matrix–vector multiplication (SpMV) of the form y = Ax is a widely used computational kernel existing in many scientific applications. The input matrix A is sparse. The input vector x and the output vector y are dense. In the case of a repeated y = Ax operation involving the same input matrix A but possibly changing numerical values of its elements, A can be preprocessed to reduce both the parallel and sequential run time of the SpMV kernel.[1]
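Assuming the matrix is stored in compressed sparse row (CSR) form, the kernel can be sketched as follows (illustrative and unoptimized; tuned implementations block, vectorize, and parallelize this loop):

```python
def csr_matvec(data, indices, indptr, x):
    """y = A x for A in compressed sparse row (CSR) form."""
    y = []
    for i in range(len(indptr) - 1):
        s = 0.0
        # The nonzeros of row i sit in data[indptr[i]:indptr[i+1]],
        # with their column indices in the parallel indices array.
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * x[indices[k]]
        y.append(s)
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]] in CSR form:
data, indices, indptr = [1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3]
print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```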
See also
• Matrix–vector multiplication
• General-purpose computing on graphics processing units#Kernels
References
1. "Hypergraph Partitioning Based Models and Methods for Exploiting Cache Locality in Sparse Matrix-Vector Multiplication". Retrieved 13 April 2014.
Sparse polynomial
In mathematics, a sparse polynomial (also lacunary polynomial[1] or fewnomial)[2] is a polynomial that has far fewer terms than its degree and number of variables would suggest. For example, $x^{10}+3x^{3}-1$ is a sparse polynomial: it is a trinomial of degree 10.
The motivation for studying sparse polynomials is to concentrate on the structure of a polynomial's monomials instead of its degree, as one can see, for instance, by comparing the Bernstein–Kushnirenko theorem with Bézout's theorem. Research on sparse polynomials has also included work on algorithms whose running time grows as a function of the number of terms rather than of the degree,[3] for problems including polynomial multiplication,[4][5] division,[6] root-finding algorithms,[7] and polynomial greatest common divisors.[8] Sparse polynomials have also been used in pure mathematics, especially in the study of Galois groups, because it has been easier to determine the Galois groups of certain families of sparse polynomials than it is for other polynomials.[9]
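The term-count-driven flavor of such algorithms can be sketched with a dictionary representation that stores only the nonzero terms (illustrative; the cited algorithms use considerably more refined techniques):

```python
def sparse_mul(p, q):
    """Multiply polynomials stored as {exponent: coefficient} dictionaries.

    The cost grows with the number of terms (|p| * |q| coefficient products),
    not with the degrees of p and q.
    """
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

# (x + 1) * (x^2 - x + 1) = x^3 + 1: small sparse inputs, a binomial output.
print(sparse_mul({1: 1, 0: 1}, {2: 1, 1: -1, 0: 1}))  # {3: 1, 0: 1}
```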
The algebraic varieties determined by sparse polynomials have a simple structure, which is also reflected in the structure of the solutions of certain related differential equations.[2] Additionally, a sparse positivstellensatz exists for univariate sparse polynomials. It states that the non-negativity of a polynomial can be certified by sos polynomials whose degree only depends on the number of monomials of the polynomial.[10]
Sparse polynomials often arise in sums or differences of powers. For instance, the sum of two cubes factors as $(a+b)(a^{2}-ab+b^{2})=a^{3}+b^{3}$; here $a^{3}+b^{3}$ is a sparse polynomial.
References
1. Rédei, L. (1973), Lacunary polynomials over finite fields, translated by Földes, I., Elsevier, MR 0352060
2. Khovanskiĭ, A. G. (1991), Fewnomials, Translations of Mathematical Monographs, vol. 88, translated by Zdravkovska, Smilka, Providence, Rhode Island: American Mathematical Society, doi:10.1090/mmono/088, ISBN 0-8218-4547-0, MR 1108621
3. Roche, Daniel S. (2018), "What can (and can't) we do with sparse polynomials?", in Kauers, Manuel; Ovchinnikov, Alexey; Schost, Éric (eds.), Proceedings of the 2018 ACM on International Symposium on Symbolic and Algebraic Computation, ISSAC 2018, New York, NY, USA, July 16-19, 2018, Association for Computing Machinery, pp. 25–30, arXiv:1807.08289, doi:10.1145/3208976.3209027, S2CID 49868973
4. Nakos, Vasileios (2020), "Nearly optimal sparse polynomial multiplication", IEEE Transactions on Information Theory, 66 (11): 7231–7236, arXiv:1901.09355, doi:10.1109/TIT.2020.2989385, MR 4173637, S2CID 59316578
5. Giorgi, Pascal; Grenet, Bruno; Perret du Cray, Armelle (2020), "Essentially optimal sparse polynomial multiplication", Proceedings of the 45th International Symposium on Symbolic and Algebraic Computation (ISSAC '20)., Association for Computing Machinery, pp. 202–209, arXiv:2001.11959, doi:10.1145/3373207.3404026, S2CID 211003922
6. Giorgi, Pascal; Grenet, Bruno; Perret du Cray, Armelle (2021), "On Exact Division and Divisibility Testing for Sparse Polynomials", Proceedings of the 2021 on International Symposium on Symbolic and Algebraic Computation (ISSAC '21)., Association for Computing Machinery, pp. 163–170, arXiv:2102.04826, doi:10.1145/3452143.3465539, S2CID 231855563
7. Pan, Victor Y. (2020), "Acceleration of subdivision root-finding for sparse polynomials", Computer algebra in scientific computing, Lecture Notes in Computer Science, vol. 12291, Cham: Springer, pp. 461–477, doi:10.1007/978-3-030-60026-6_27, MR 4184190, S2CID 224820309
8. Zippel, Richard (1979), "Probabilistic algorithms for sparse polynomials", Symbolic and algebraic computation (EUROSAM '79, Internat. Sympos., Marseille, 1979), Lecture Notes in Computer Science, vol. 72, Berlin, New York: Springer, pp. 216–226, MR 0575692
9. Cohen, S. D.; Movahhedi, A.; Salinier, A. (1999), "Galois groups of trinomials", Journal of Algebra, 222 (2): 561–573, doi:10.1006/jabr.1999.8033, MR 1734229
10. Averkov, Gennadiy; Scheiderer, Claus (2023-03-07). "Convex hulls of monomial curves, and a sparse positivstellensatz". arXiv:2303.03826 [math.OC].
See also
• Askold Khovanskii, one of the main contributors to the theory of fewnomials.
|
Wikipedia
|
Sparse language
In computational complexity theory, a sparse language is a formal language (a set of strings) such that the complexity function, counting the number of strings of length n in the language, is bounded by a polynomial function of n. They are used primarily in the study of the relationship of the complexity class NP with other classes. The complexity class of all sparse languages is called SPARSE.
Sparse languages are called sparse because there are a total of $2^{n}$ strings of length n, and if a language only contains polynomially many of these, then the proportion of strings of length n that it contains rapidly goes to zero as n grows. All unary languages are sparse. An example of a nontrivial sparse language is the set of binary strings containing exactly k 1 bits for some fixed k; for each n, there are only ${\binom {n}{k}}$ strings in the language, which is bounded by $n^{k}$.
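The census of this example language can be checked directly; the sketch below (using Python's standard `math.comb`) confirms that the count of length-n strings is polynomially bounded:

```python
from math import comb

k = 3  # fixed number of 1 bits
for n in range(k, 20):
    count = comb(n, k)       # strings of length n in the language
    assert count <= n ** k   # polynomial bound: the language is sparse
print(comb(10, 3), 10 ** 3)  # 120 strings of length 10, against the bound 1000
```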
Relationships to other complexity classes
SPARSE contains TALLY, the class of unary languages, since these have at most one string of any one length. Although not all languages in P/poly are sparse, there is a polynomial-time Turing reduction from any language in P/poly to a sparse language.[1] Fortune showed in 1979 that if any sparse language is co-NP-complete, then P = NP;[2] Mahaney used this to show in 1982 that if any sparse language is NP-complete, then P = NP (this is Mahaney's theorem).[3] A simpler proof of this based on left-sets was given by Ogihara and Watanabe in 1991.[4] Mahaney's argument does not actually require the sparse language to be in NP (because the existence of an NP-hard sparse set implies the existence of an NP-complete sparse set), so there is a sparse NP-hard set if and only if P = NP.[5] Further, E ≠ NE if and only if there exist sparse languages in NP that are not in P.[6] There is a Turing reduction (as opposed to the Karp reduction from Mahaney's theorem) from an NP-complete language to a sparse language if and only if ${\textbf {NP}}\subseteq {\textbf {P}}/{\text{poly}}$. In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Ogihara, showed that if there exists a sparse P-complete problem, then L = P.[7]
References
1. Jin-Yi Cai. Lecture 11: P=poly, Sparse Sets, and Mahaney's Theorem. CS 810: Introduction to Complexity Theory. The University of Wisconsin–Madison. September 18, 2003 (PDF)
2. S. Fortune. A note on sparse complete sets. SIAM Journal on Computing, volume 8, issue 3, pp.431–433. 1979.
3. S. R. Mahaney. Sparse complete sets for NP: Solution of a conjecture by Berman and Hartmanis. Journal of Computer and System Sciences 25:130–143. 1982.
4. M. Ogiwara and O. Watanabe. On polynomial time bounded truth-table reducibility of NP sets to sparse sets. SIAM Journal on Computing volume 20, pp.471–483. 1991.
5. Balcázar, José Luis; Díaz, Josep; Gabarró, Joaquim (1990). Structural Complexity II. Springer. pp. 130–131. ISBN 3-540-52079-1.
6. Juris Hartmanis, Neil Immerman, Vivian Sewelson. Sparse Sets in NP-P: EXPTIME versus NEXPTIME. Information and Control, volume 65, issue 2/3, pp.158–181. 1985. At ACM Digital Library
7. Jin-Yi Cai and D. Sivakumar. Sparse hard sets for P: resolution of a conjecture of Hartmanis. Journal of Computer and System Sciences, volume 58, issue 2, pp.280–296. 1999. ISSN 0022-0000. At Citeseer
External links
• Lance Fortnow. Favorite Theorems: Small Sets. April 18, 2006.
• William Gasarch. Sparse Sets (Tribute to Mahaney). June 29, 2007.
• Complexity Zoo: SPARSE
Sparsely totient number
In mathematics, a sparsely totient number is a certain kind of natural number. A natural number, n, is sparsely totient if for all m > n,
$\varphi (m)>\varphi (n)$
where $\varphi $ is Euler's totient function. The first few sparsely totient numbers are:
2, 6, 12, 18, 30, 42, 60, 66, 90, 120, 126, 150, 210, 240, 270, 330, 420, 462, 510, 630, 660, 690, 840, 870, 1050, 1260, 1320, 1470, 1680, 1890, 2310, 2730, 2940, 3150, 3570, 3990, 4620, 4830, 5460, 5610, 5670, 6090, 6930, 7140, 7350, 8190, 9240, 9660, 9870, ... (sequence A036913 in the OEIS).
The concept was introduced by David Masser and Peter Man-Kit Shiu in 1986. As they showed, every primorial is sparsely totient.
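The definition can be tested by brute force. The sketch below (illustrative only; the function names are not standard) sieves totient values and uses the standard bound $\varphi (m)\geq {\sqrt {m/2}}$, which guarantees that only finitely many $m>n$ need to be checked:

```python
def totient_sieve(limit):
    """phi[n] = Euler's totient of n, for 0 <= n <= limit."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:  # p is prime (untouched so far)
            for m in range(p, limit + 1, p):
                phi[m] -= phi[m] // p
    return phi

def sparsely_totient_up_to(n_max):
    # Since phi(m) >= sqrt(m/2) for all m >= 1, every m > 2*phi(n)^2
    # automatically has phi(m) > phi(n), so sieving up to 2*n_max^2 suffices.
    limit = 2 * n_max * n_max
    phi = totient_sieve(limit)
    suffix_min = [0] * (limit + 2)       # suffix_min[i] = min(phi[i:])
    suffix_min[limit + 1] = limit + 1    # sentinel larger than any phi value
    for i in range(limit, 0, -1):
        suffix_min[i] = min(phi[i], suffix_min[i + 1])
    return [n for n in range(2, n_max + 1) if suffix_min[n + 1] > phi[n]]

print(sparsely_totient_up_to(130))
# [2, 6, 12, 18, 30, 42, 60, 66, 90, 120, 126]
```

The output agrees with the opening terms of OEIS A036913 quoted above.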
Properties
• If P(n) is the largest prime factor of n, then $\liminf P(n)/\log n=1$.
• $P(n)\ll \log ^{\delta }n$ holds for an exponent $\delta =37/20$.
• It is conjectured that $\limsup P(n)/\log n=2$.
References
• Baker, Roger C.; Harman, Glyn (1996). "Sparsely totient numbers". Ann. Fac. Sci. Toulouse, VI. Sér., Math. 5 (2): 183–190. doi:10.5802/afst.826. ISSN 0240-2963. Zbl 0871.11060.
• Masser, D.W.; Shiu, P. (1986). "On sparsely totient numbers". Pac. J. Math. 121 (2): 407–426. doi:10.2140/pjm.1986.121.407. ISSN 0030-8730. MR 0819198. S2CID 55350630. Zbl 0538.10006.
Cut (graph theory)
In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets.[1] Any cut determines a cut-set, the set of edges that have one endpoint in each subset of the partition. These edges are said to cross the cut. In a connected graph, each cut-set determines a unique cut, and in some cases cuts are identified with their cut-sets rather than with their vertex partitions.
In a flow network, an s–t cut is a cut that requires the source and the sink to be in different subsets, and its cut-set only consists of edges going from the source's side to the sink's side. The capacity of an s–t cut is defined as the sum of the capacity of each edge in the cut-set.
Definition
A cut C = (S,T) is a partition of the vertex set V of a graph G = (V,E) into two subsets S and T. The cut-set of a cut C = (S,T) is the set {(u,v) ∈ E | u ∈ S, v ∈ T} of edges that have one endpoint in S and the other endpoint in T. If s and t are specified vertices of the graph G, then an s–t cut is a cut in which s belongs to the set S and t belongs to the set T.
In an unweighted undirected graph, the size or weight of a cut is the number of edges crossing the cut. In a weighted graph, the value or weight is defined by the sum of the weights of the edges crossing the cut.
A bond is a cut-set that does not have any other cut-set as a proper subset.
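The two weight notions above can be sketched in a few lines of Python (an illustrative sketch; edges are assumed to be given as weighted tuples):

```python
def cut_weight(edges, S):
    """Weight of the cut (S, V \\ S) in an undirected weighted graph.
    edges: iterable of (u, v, w) triples."""
    return sum(w for u, v, w in edges if (u in S) != (v in S))

def st_cut_capacity(arcs, S):
    """Capacity of an s-t cut: only arcs leaving the source side S count.
    arcs: iterable of directed (u, v, capacity) triples, with s in S, t not in S."""
    return sum(c for u, v, c in arcs if u in S and v not in S)

# a 4-cycle a-b-c-d with unit weights; S = {a, b} cuts the edges (b,c) and (d,a)
edges = [("a", "b", 1), ("b", "c", 1), ("c", "d", 1), ("d", "a", 1)]
print(cut_weight(edges, {"a", "b"}))  # 2
```

Note that `st_cut_capacity` ignores arcs entering S, matching the definition of the capacity of an s–t cut.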
Minimum cut
Main article: Minimum cut
A cut is minimum if the size or weight of the cut is not larger than the size of any other cut. The illustration on the right shows a minimum cut: the size of this cut is 2, and there is no cut of size 1 because the graph is bridgeless.
The max-flow min-cut theorem proves that the maximum network flow and the sum of the cut-edge weights of any minimum cut that separates the source and the sink are equal. There are polynomial-time methods to solve the min-cut problem, notably the Edmonds–Karp algorithm.[2]
Maximum cut
Main article: Maximum cut
A cut is maximum if the size of the cut is not smaller than the size of any other cut. The illustration on the right shows a maximum cut: the size of the cut is equal to 5, and there is no cut of size 6, or |E| (the number of edges), because the graph is not bipartite (there is an odd cycle).
In general, finding a maximum cut is computationally hard.[3] The max-cut problem is one of Karp's 21 NP-complete problems.[4] The max-cut problem is also APX-hard, meaning that there is no polynomial-time approximation scheme for it unless P = NP.[5] However, it can be approximated to within a constant approximation ratio using semidefinite programming.[6]
Note that min-cut and max-cut are not dual problems in the linear programming sense, even though one gets from one problem to the other by changing min to max in the objective function. The max-flow problem is the dual of the min-cut problem.[7]
Sparsest cut
The sparsest cut problem is to bipartition the vertices so as to minimize the ratio of the number of edges across the cut divided by the number of vertices in the smaller half of the partition. This objective function favors solutions that are both sparse (few edges crossing the cut) and balanced (close to a bisection). The problem is known to be NP-hard, and the best known approximation algorithm is an $O({\sqrt {\log n}})$ approximation due to Arora, Rao & Vazirani (2009).[8]
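For intuition, the objective can be evaluated exhaustively on tiny graphs; the sketch below (exponential time, illustration only) minimizes the ratio over all bipartitions:

```python
from itertools import combinations

def sparsest_cut(vertices, edges):
    """Exhaustively minimize |cut-set(S)| / min(|S|, |V \\ S|).
    Exponential time -- a sketch for tiny graphs only."""
    V = list(vertices)
    best = (float("inf"), None)
    for r in range(1, len(V) // 2 + 1):
        for S in combinations(V, r):
            side = set(S)
            crossing = sum(1 for u, v in edges if (u in side) != (v in side))
            ratio = crossing / min(len(side), len(V) - len(side))
            if ratio < best[0]:
                best = (ratio, side)
    return best

# two triangles joined by a single bridge: the bridge is the sparsest cut
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
ratio, side = sparsest_cut(range(6), edges)
print(ratio, sorted(side))  # ratio 1/3, achieved by the bisection {0, 1, 2}
```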
Cut space
The family of all cut sets of an undirected graph is known as the cut space of the graph. It forms a vector space over the two-element finite field of arithmetic modulo two, with the symmetric difference of two cut sets as the vector addition operation, and is the orthogonal complement of the cycle space.[9][10] If the edges of the graph are given positive weights, the minimum weight basis of the cut space can be described by a tree on the same vertex set as the graph, called the Gomory–Hu tree.[11] Each edge of this tree is associated with a bond in the original graph, and the minimum cut between two nodes s and t is the minimum weight bond among the ones associated with the path from s to t in the tree.
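The vector-space structure rests on the identity that the cut-set of a symmetric difference of vertex sides is the symmetric difference of their cut-sets; a small Python sketch verifies this on $K_{4}$:

```python
def cut_set(edges, S):
    """Edges crossing the vertex partition (S, V \\ S), as a set of edges."""
    return {frozenset(e) for e in edges if (e[0] in S) != (e[1] in S)}

# the complete graph K4 on vertices 0..3
edges = [(u, v) for u in range(4) for v in range(u + 1, 4)]
S, T = {0, 1}, {1, 2}
# vector addition in the cut space is symmetric difference of edge sets,
# and it corresponds to the symmetric difference of the vertex sides
assert cut_set(edges, S) ^ cut_set(edges, T) == cut_set(edges, S ^ T)
print("closure under symmetric difference verified")
```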
See also
• Connectivity (graph theory)
• Graph cuts in computer vision
• Split (graph theory)
• Vertex separator
• Bridge (graph theory)
• Cutwidth
References
1. "NetworkX 2.6.2 documentation". networkx.algorithms.cuts.cut_size. Archived from the original on 2021-11-18. Retrieved 2021-12-10. A cut is a partition of the nodes of a graph into two sets. The cut size is the sum of the weights of the edges "between" the two sets of nodes.
2. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, p. 563,655,1043, ISBN 0-262-03293-7.
3. Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, A2.2: ND16, p. 210, ISBN 0-7167-1045-5.
4. Karp, R. M. (1972), "Reducibility among combinatorial problems", in Miller, R. E.; Thacher, J. W. (eds.), Complexity of Computer Computation, New York: Plenum Press, pp. 85–103.
5. Khot, S.; Kindler, G.; Mossel, E.; O’Donnell, R. (2004), "Optimal inapproximability results for MAX-CUT and other two-variable CSPs?" (PDF), Proceedings of the 45th IEEE Symposium on Foundations of Computer Science, pp. 146–154, archived (PDF) from the original on 2019-07-15, retrieved 2019-08-29.
6. Goemans, M. X.; Williamson, D. P. (1995), "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming", Journal of the ACM, 42 (6): 1115–1145, doi:10.1145/227683.227684.
7. Vazirani, Vijay V. (2004), Approximation Algorithms, Springer, pp. 97–98, ISBN 3-540-65367-8.
8. Arora, Sanjeev; Rao, Satish; Vazirani, Umesh (2009), "Expander flows, geometric embeddings and graph partitioning", J. ACM, ACM, 56 (2): 1–37, doi:10.1145/1502793.1502794.
9. Gross, Jonathan L.; Yellen, Jay (2005), "4.6 Graphs and Vector Spaces", Graph Theory and Its Applications (2nd ed.), CRC Press, pp. 197–207, ISBN 9781584885054.
10. Diestel, Reinhard (2012), "1.9 Some linear algebra", Graph Theory, Graduate Texts in Mathematics, vol. 173, Springer, pp. 23–28.
11. Korte, B. H.; Vygen, Jens (2008), "8.6 Gomory–Hu Trees", Combinatorial Optimization: Theory and Algorithms, Algorithms and Combinatorics, vol. 21, Springer, pp. 180–186, ISBN 978-3-540-71844-4.
Sparsity matroid
A sparsity matroid is a mathematical structure that captures how densely a multigraph is populated with edges. Here, sparsity is a measure of the density of a graph that bounds the number of edges in any subgraph. The property of having a particular matroid as its density measure is invariant under graph isomorphism, and so it is a graph invariant.
The graphs concerned generalise simple directed graphs by allowing multiple same-oriented edges between pairs of vertices. Matroids are a general mathematical abstraction describing independence structures, such as the linear independence of points in geometric space or the forests of a graph; when applied to characterising sparsity, matroids describe certain sets of sparse graphs. These matroids are connected to the structural rigidity of graphs and to their ability to be decomposed into edge-disjoint spanning trees via the Tutte and Nash-Williams theorem. There is a family of efficient algorithms, known as pebble games, for determining whether a multigraph meets a given sparsity condition.
Definitions
$(k,l)$-sparse multigraph.[1] A multigraph $G=(V,E)$ is $(k,l)$-sparse, where $k$ and $l$ are non-negative integers, if for every subgraph $G'=(V',E')$ of $G$, we have $|E'|\leq k|V'|-l$.
$(k,l)$-tight multigraph. [1] A multigraph $G=(V,E)$ is $(k,l)$-tight if it is $(k,l)$-sparse and $|E|=k|V|-l$.
$[a,b]$-sparse and tight multigraph. [2] A multigraph $G=(V,E\cup F)$ is $[a,b]$-sparse if there exists a subset $F'\subset F$ such that the subgraph $G'=(V,E\cup F')$ is $(a,a)$-sparse and the subgraph $G''=(V,F\setminus F')$ is $(b,b)$-sparse. The multigraph $G$ is $[a,b]$-tight if, additionally, $|E\cup F|=(a+b)|V|-(a+b)$.
$(k,l)$-sparsity matroid. [1] The $(k,l)$-sparsity matroid is a matroid whose ground set is the edge set of the complete multigraph on $n$ vertices, with loop multiplicity $k-l$ and edge multiplicity $2k-l$, and whose independent sets are $(k,l)$-sparse multigraphs on $n$ vertices. The bases of the matroid are the $(k,l)$-tight multigraphs and the circuits are the $(k,l)$-sparse multigraphs $G=(V,E)$ that satisfy $|E|=k|V|-l+1$.
The first examples of sparsity matroids can be found in the work of Lorea.[3] Not all pairs $(k,l)$ induce a matroid.
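The counting condition in the definition can be checked by brute force on small graphs. The following Python sketch (illustrative, exponential in the number of vertices) applies the bound only to vertex subsets spanning at least one edge, the usual convention when $l>0$:

```python
from itertools import combinations

def is_kl_sparse(n, edges, k, l):
    """Brute-force (k, l)-sparsity check for a multigraph on vertices 0..n-1.
    edges: list of (u, v) pairs; repeated pairs encode multi-edges.
    The bound is applied only to subsets spanning an edge. Tiny graphs only."""
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            inside = set(S)
            m = sum(1 for u, v in edges if u in inside and v in inside)
            if m > 0 and m > k * size - l:
                return False
    return True

def is_kl_tight(n, edges, k, l):
    return is_kl_sparse(n, edges, k, l) and len(edges) == k * n - l

# the triangle K3 is (2,3)-tight; K4 violates (2,3)-sparsity (6 > 2*4 - 3)
K4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(is_kl_tight(3, [(0, 1), (1, 2), (0, 2)], 2, 3), is_kl_sparse(4, K4, 2, 3))
```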
Pairs (k,l) that form a matroid
The following result provides sufficient restrictions on $(k,l)$ for the existence of a matroid.
Theorem. [1] The $(k,l)$-sparse multigraphs on $n$ vertices are the independent sets of a matroid if
• $n\geq 1$ and $l\in [0,k]$;
• $n\geq 2$ and $l\in (k,{\frac {3}{2}}k)$; or
• $n=2$ or $n\geq {\frac {l}{2k-l}}$ and $l\in [{\frac {3}{2}}k,2k)$.
Some consequences of this theorem are that $(2,3)$-sparse multigraphs form a matroid while $(3,6)$-sparse multigraphs do not. Hence, the bases, i.e., $(2,3)$-tight multigraphs, must all have the same number of edges and can be constructed using techniques discussed below. On the other hand, without this matroidal structure, maximally $(3,6)$-sparse multigraphs will have different numbers of edges, and it is interesting to identify the one with the maximum number of edges.
Connections to rigidity and decomposition
Structural rigidity is about determining if almost all, i.e. generic, embeddings of a (simple or multi) graph in some $d$-dimensional metric space are rigid. More precisely, this theory gives combinatorial characterizations of such graphs. In Euclidean space, Maxwell showed that independence in a sparsity matroid is necessary for a graph to be generically rigid in any dimension.
Maxwell Direction. [4] If a graph $G$ is generically minimally rigid in $d$-dimensions, then it is independent in the $\left(d,{d+1 \choose 2}\right)$-sparsity matroid.
The converse of this theorem was proved in $2$-dimensions, yielding a complete combinatorial characterization of generically rigid graphs in $\mathbb {R} ^{2}$. However, the converse is not true for $d\geq 3$, see combinatorial characterizations of generically rigid graphs.
Other sparsity matroids have been used to give combinatorial characterizations of generically rigid multigraphs for various types of frameworks, see rigidity for other types of frameworks. The following table summarizes these results by stating the type of generic rigid framework in a given dimension and the equivalent sparsity condition. Let $kG$ be the multigraph obtained by duplicating the edges of a multigraph $G$ $k$ times.
Generic minimally rigid framework Sparsity condition
Bar-joint framework in $\mathbb {R} ^{2}$ $(2,3)$-tight[5][6]
Bar-joint framework in $\mathbb {R} ^{2}$ under general polyhedral norms $(2,2)$-tight[7]
Body-bar framework in $\mathbb {R} ^{d}$ $\left({d+1 \choose 2},{d+1 \choose 2}\right)$-tight[8]
$(d-2)$-plate-bar framework in $\mathbb {R} ^{d}$ $\left({d+1 \choose 2}-1,{d+1 \choose 2}\right)$-tight[9][10]
Body-hinge framework in $\mathbb {R} ^{d}$ $\left({d+1 \choose 2}-1\right)G$ is $\left({d+1 \choose 2},{d+1 \choose 2}\right)$-tight[9][10][11]
Body-cad framework in $\mathbb {R} ^{2}$ primitive cad graph is $\left[1,2\right]$-tight[12]
Body-cad framework with no coincident points in $\mathbb {R} ^{3}$ primitive cad graph is $\left[3,3\right]$-tight[12]
The Tutte and Nash-Williams theorem shows that $(k,k)$-tight graphs are equivalent to graphs that can be decomposed into $k$ edge-disjoint spanning trees, called $k$-arborescences. A $(k,a)$-arborescence is a multigraph $G$ such that adding $a$ edges to $G$ yields a $k$-arborescence. For $l\in [k,2k)$, a $(k,l)$-sparse multigraph is a $(k,l-k)$-arborescence;[13] this was first shown for $(2,3)$ sparse graphs.[14][15] Additionally, many of the rigidity and sparsity results above can be written in terms of edge-disjoint spanning trees.
Constructing sparse multigraphs
This section gives methods to construct various sparse multigraphs using operations defined in constructing generically rigid graphs. Since these operations are defined for a given dimension, let a $(k,0)$-extension be a $k$-dimensional $0$-extension, i.e., a $0$-extension where the new vertex is connected to $k$ distinct vertices. Likewise, a $(k,1)$-extension is a $k$-dimensional $1$-extension.
General (k,l)-sparse multigraphs
The first construction is for $\left(k,k\right)$-tight graphs. A generalized $(k,1)$-extension is a triple $(k,1,j)$, where $j$ edges are removed, for $1\leq j\leq k$, and the new vertex $u$ is connected to the vertices of these $j$ edges and to $k-j$ distinct vertices. The usual $(k,1)$-extension is a $(k,1,1)$-extension.
Theorem. [16] A multigraph is $(k,k)$-tight if and only if it can be constructed from a single vertex via a sequence of $(k,0)$- and $(k,1,j)$-extensions.
This theorem was then extended to general $(k,l)$-tight graphs. Consider another generalization of a $1$-extension denoted by $(k,1,i,j)$, for $0\leq j\leq i\leq k$, where $j$ edges are removed, the new vertex $u$ is connected to the vertices of these $j$ edges, $i-j$ loops are added to $u$, and $u$ is connected to $k-i$ other distinct vertices. Also, let $P_{l}$ denote a multigraph with a single node and $l$ loops.
Theorem. [17] A multigraph $G$ is $(k,l)$-tight for
• $l\in [1,k]$ if and only if $G$ can be constructed from $P_{k-l}$ via a sequence of $(k,1,i,j)$-extensions, such that $0\leq j\leq i\leq k-1$ and $i-j\leq k-l$;
• $l=0$ if and only if $G$ can be constructed from $P_{k-l}$ via a sequence of $(k,1,i,j)$-extensions, such that $0\leq j\leq i\leq k$ and $i-j\leq k-l$.
Neither of these constructions is sufficient when the graph is simple.[18] The next results are for $(k,l)$-sparse hypergraphs. A hypergraph is $s$-uniform if each of its edges contains exactly $s$ vertices. First, conditions are established for the existence of $(k,l)$-tight hypergraphs.
Theorem. [19] There exists an $n_{0}$ such that for all $n\geq n_{0}$, there exist $s$-uniform hypergraphs on $n$ vertices that are $(k,l)$-tight.
The next result extends the Tutte and Nash-Williams theorem to hypergraphs.
Theorem. [19] If $G$ is a $(k,l)$-tight hypergraph, for $l\geq k$, then $G$ is a $(k,l-k)$-arborescence, where the $l-k$ added edges contain at least two vertices.
A map-hypergraph is a hypergraph that admits an orientation such that each vertex has an out-degree of $1$. A $k$-map-hypergraph is a map-hypergraph that can be decomposed into $k$ edge-disjoint map-hypergraphs.
Theorem. [19] If $G$ is a $(k,l)$-tight hypergraph, for $k\geq l$, then $G$ is the union of an $l$-arborescence and a $(k-l)$-map-hypergraph.
(2,3)-sparse graphs
The first result shows that $(2,3)$-tight graphs, i.e., generically minimally rigid graphs in $\mathbb {R} ^{2}$ have Henneberg-constructions.
Theorem. [6][20] A graph $G$ is $(2,3)$-tight if and only if it can be constructed from the complete graph $K_{2}$ via a sequence of $0$- and $1$-extensions.
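This theorem yields a simple generator: grow from $K_{2}$ by random $0$- and $1$-extensions, each of which preserves the count $|E|=2|V|-3$. The sketch below is illustrative (the function name is not from the literature) and checks only the edge count; full tightness follows from the theorem when the moves are valid:

```python
import random

def henneberg(n, seed=0):
    """Grow a graph from K2 by random Henneberg moves; returns (vertices, edges).
    A 0-extension joins a new vertex to 2 existing vertices; a 1-extension
    removes an edge (u, w) and joins a new vertex to u, w and one other vertex."""
    rng = random.Random(seed)
    edges = {frozenset((0, 1))}
    for v in range(2, n):
        if v > 2 and rng.random() < 0.5:   # 1-extension (needs >= 3 old vertices)
            u, w = rng.choice([tuple(e) for e in edges])
            edges.remove(frozenset((u, w)))
            z = rng.choice([x for x in range(v) if x not in (u, w)])
            edges |= {frozenset((v, u)), frozenset((v, w)), frozenset((v, z))}
        else:                               # 0-extension
            a, b = rng.sample(range(v), 2)
            edges |= {frozenset((v, a)), frozenset((v, b))}
    return list(range(n)), edges

V, E = henneberg(8)
assert len(E) == 2 * len(V) - 3  # the (2,3)-tight edge count is preserved
```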
The next result shows how to construct $(2,3)$-circuits. In this setting, a $2$-sum combines two graphs by identifying two $K_{2}$ subgraphs in each and then removing the combined edge from the resulting graph.
Theorem. [21] A graph $G$ is a $(2,3)$-circuit if and only if it can be constructed from disjoint copies of the complete graph $K_{4}$ via a sequence of $1$-extensions within connected components and $2$-sums of connected components.
The method for constructing $3$-connected $(2,3)$-circuits is even simpler.
Theorem. [21] A graph $G$ is a $3$-connected $(2,3)$-circuit if and only if it can be constructed from the complete graph $K_{4}$ via a sequence of $1$-extensions.
These circuits also have the following combinatorial property.
Theorem. [21] If a graph $G$ is a $(2,3)$-circuit that is not the complete graph $K_{4}$, then $G$ has at least $3$ vertices of degree $3$ such that performing a $1$-reduction on any one of these vertices yields another $(2,3)$-circuit.
(2,2)-sparse graphs
The next results show how to construct $(2,2)$-circuits using two different construction methods. For the first method, the base graphs and the three join operations are shown in Figure 2. A $1$-join identifies a $K_{2}$ subgraph in $G_{1}$ with an edge of a $K_{4}$ subgraph in $G_{2}$, and removes the other two vertices of the $K_{4}$. A $2$-join identifies an edge of a $K_{4}$ subgraph in $G_{1}$ with an edge of a $K_{4}$ subgraph in $G_{2}$, and removes the other vertices of both $K_{4}$ subgraphs. A $3$-join takes a degree-$3$ vertex $v_{1}$ in $G_{1}$ and a degree-$3$ vertex $v_{2}$ in $G_{2}$ and removes them, then adds $3$ edges between $G_{1}$ and $G_{2}$ following a bijection between the neighbors of $v_{1}$ and $v_{2}$.
The second method uses $2$-dimensional vertex-splitting, defined in constructing generically rigid graphs, and a vertex-to-$K_{4}$ operation, which replaces a vertex $v$ of a graph with a $K_{4}$ graph and connects each neighbor of $v$ to some vertex of the $K_{4}$.
Theorem. A graph $G$ is a $(2,2)$-circuit if and only if
• $G$ can be constructed from disjoint copies of the base graphs via a sequence of $1$-extensions within connected components and $1$-, $2$-, and $3$- sums of connected components;[18]
• $G$ can be constructed from $K_{4}$ via a sequence of $0$- and $1$-extensions, vertex-to-$K_{4}$, and $2$-dimensional vertex-splitting operations[22]
(2,1)-sparse graphs
The following result gives a construction method for $(2,1)$-tight graphs and extends the Tutte and Nash-Williams theorem to these graphs. For the construction, the base graphs are $K_{5}$ with an edge removed or the $2$-sum of two $K_{4}$ graphs (the shared edge is not removed), see the middle graph in Figure 2. Also, an edge-joining operation adds a single edge between two graphs.
Theorem. [22] A graph $G$ is $(2,1)$-tight if and only if
• $G$ can be obtained from $K_{5}$ with an edge removed or the $2$-sum of two $K_{4}$ graphs via a sequence of $0$- and $1$-extensions, vertex-to-$K_{4}$, $2$-dimensional vertex-splitting, and edge-joining operations;
• $G$ is the edge-disjoint union of a spanning tree and a spanning graph in which every connected component contains exactly one cycle.
Pebble games
There is a family of efficient network-flow-based algorithms for identifying $(k,l)$-sparse graphs, where $l\in [0,2k)$.[1] The first of these algorithms was for $(2,3)$-sparse graphs. These algorithms are explained on the Pebble game page.
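The core of the $(k,l)$-pebble game of Lee and Streinu can be sketched compactly. The sketch below is a simplified illustration for simple edges with $0\leq l<2k$ (function names are illustrative): each vertex starts with $k$ pebbles, and an edge is accepted only if $l+1$ pebbles can be gathered on its endpoints, moving pebbles by reversing directed paths:

```python
def find_pebble(start, avoid, out, peb):
    """DFS from `start` (never through `avoid`) for a vertex holding a free
    pebble; on success, move one pebble to `start`, reversing the path edges."""
    parent = {start: None, avoid: None}   # treat `avoid` as already visited
    stack = [start]
    while stack:
        x = stack.pop()
        for y in list(out[x]):
            if y in parent:
                continue
            parent[y] = x
            if peb[y] > 0:                # free pebble found: pull it back
                peb[y] -= 1
                cur = y
                while parent[cur] is not None:
                    p = parent[cur]
                    out[p].remove(cur)    # reverse the tree path y -> start
                    out[cur].append(p)
                    cur = p
                peb[start] += 1
                return True
            stack.append(y)
    return False

def try_add_edge(u, v, out, peb, l):
    """Accept {u, v} iff it is independent in the (k, l)-sparsity matroid."""
    while peb[u] + peb[v] < l + 1:
        if not (find_pebble(u, v, out, peb) or find_pebble(v, u, out, peb)):
            return False                  # not enough pebbles: edge rejected
    if peb[u] > 0:                        # accept: orient away from a pebble
        peb[u] -= 1
        out[u].append(v)
    else:
        peb[v] -= 1
        out[v].append(u)
    return True

# (2,3)-pebble game on K4: exactly 2*4 - 3 = 5 of its 6 edges are accepted
k, l, n = 2, 3, 4
out = {v: [] for v in range(n)}
peb = {v: k for v in range(n)}
K4 = [(u, v) for u in range(n) for v in range(u + 1, n)]
accepted = sum(try_add_edge(u, v, out, peb, l) for u, v in K4)
print(accepted)  # 5
```

The accepted edges form a basis of the $(2,3)$-sparsity matroid restricted to $K_{4}$; the rejected sixth edge is dependent.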
References
1. Lee, Audrey; Streinu, Ileana (2008-04-28). "Pebble game algorithms and sparse graphs". Discrete Mathematics. 308 (8): 1425–1437. doi:10.1016/j.disc.2007.07.104. ISSN 0012-365X. S2CID 2826.
2. Haller, Kirk; Lee-St.John, Audrey; Sitharam, Meera; Streinu, Ileana; White, Neil (2012-10-01). "Body-and-cad geometric constraint systems". Computational Geometry. 45 (8): 385–405. doi:10.1016/j.comgeo.2010.06.003. ISSN 0925-7721.
3. Lorea, M. (1979-01-01). "On matroidal families". Discrete Mathematics. 28 (1): 103–106. doi:10.1016/0012-365X(79)90190-0. ISSN 0012-365X.
4. F.R.S, J. Clerk Maxwell (1864-04-01). "XLV. On reciprocal figures and diagrams of forces". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 27 (182): 250–261. doi:10.1080/14786446408643663. ISSN 1941-5982.
5. Pollaczek‐Geiringer, H. (1927). "Über die Gliederung ebener Fachwerke". Zeitschrift für Angewandte Mathematik und Mechanik. 7 (1): 58–72. Bibcode:1927ZaMM....7...58P. doi:10.1002/zamm.19270070107. ISSN 1521-4001.
6. Laman, G. (1970-10-01). "On graphs and rigidity of plane skeletal structures". Journal of Engineering Mathematics. 4 (4): 331–340. Bibcode:1970JEnMa...4..331L. doi:10.1007/BF01534980. ISSN 1573-2703. S2CID 122631794.
7. Kitson, Derek (2015-09-01). "Finite and Infinitesimal Rigidity with Polyhedral Norms". Discrete & Computational Geometry. 54 (2): 390–411. doi:10.1007/s00454-015-9706-x. ISSN 1432-0444. S2CID 15520982.
|
Wikipedia
|
Spatial Mathematics: Theory and Practice through Mapping
Spatial Mathematics: Theory and Practice through Mapping is a book on the mathematics that underlies geographic information systems and spatial analysis. It was written by Sandra Arlinghaus and Joseph Kerski, and published in 2013 by CRC Press.
Topics
The book has 10 chapters, divided into two sections on geodesy and on techniques for visualization of spatial data; each chapter has separate sections on theory and practice.[1] For practical aspects of geographic information systems it uses ArcGIS as its example system.[2]
In the first part of the book, chapters 1 and 2 cover the geoid, the geographic coordinate system of latitudes and longitudes, and the measurement of distance and location. Chapter 3 concerns data structures for geographic information systems, data formatting based on raster graphics and vector graphics, methods for buffer analysis,[3] and its use in turning point and line data into area data. Later in the book, but fitting thematically into this part,[1][4] chapter 9 covers map projections.[3]
Moving from geodesy to visualization,[1] chapters 4 and 5 concern the use of color and scale on maps. Chapter 6 concerns the types of data to be visualized, and the types of visualizations that can be made for them. Chapter 7 concerns spatial hierarchies and central place theory, while chapter 8 covers the analysis of spatial distributions in terms of their covariance. Finally, chapter 10 covers network and non-Euclidean data.[1][3]
Additional material on the theoretical concepts behind the topics of the book is provided on a web site, accessed through QR codes included in the book.[1]
Audience and reception
Reviewer reactions to the book were mixed. Several reviewers noted that, for a book with "mathematics" in its title, the book was surprisingly non-mathematical, with both Azadeh Mousavi and Paul Harris calling the title "misleading".[1][4] Harris complains that "the maths is treated quite lightly and superficially".[4] Alfred Stein notes the almost total absence of mathematical equations,[2] and Daniel Griffith similarly notes the lack of proof of its mathematical claims.[5]
Mousavi also writes that, although the book covers a broad selection of topics, it "suffers from lack of necessary depth" and that it is confusingly structured.[1] Sang-Il Lee points to a lack of depth as the book's principal weakness.[3] Stein notes that its reliance on a specific version of ArcGIS makes it difficult to reproduce its examples, especially for international users with different versions or for users of versions updated after its publication.[2] Another weakness highlighted by Griffith is "its limited connection to the existing literature, with its citations far too often being only those works by its authors".[5] Harris sees a missed opportunity in the omission of spatial statistics, movement data, and spatio-temporal data, the design of spatial data structures, and advanced techniques for visualizing geospatial data.[4]
Nevertheless, Mousavi recommends this book as an "introductory text on spatial information science" aimed at practitioners, and commends its use of QR codes and word clouds.[1] Stein praises the book's attempt to bridge mathematics and geography, and its potential use as a first step towards that bridge for practitioners.[2] Harris suggests it "in an introductory and applied context", and in combination with a more conventional textbook on geographic information systems. Lee argues that the overview of fundamental concepts and cross-disciplinary connections forged by the book make it "worth reading by anyone interested in the geospatial sciences".[3] And Griffith concludes that the book is successful in motivating its readers to "explore formal mathematical subject matter that interfaces with geography".[5]
References
1. Mousavi, Azadeh (December 2014), "Review of Spatial Mathematics", Journal of Spatial Information Science, 2014 (9): 125–127, doi:10.5311/josis.2014.9.210
2. Stein, Alfred (December 2014), "Review of Spatial Mathematics", International Journal of Applied Earth Observation and Geoinformation, 33: 342, Bibcode:2014IJAEO..33..342S, doi:10.1016/j.jag.2014.06.014
3. Lee, Sang-Il (September 2014), "Review of Spatial Mathematics", Geographical Analysis, 46 (4): 456–458, doi:10.1111/gean.12066
4. Harris, Paul (July 2016), "Review of Spatial Mathematics", Environment and Planning B: Planning and Design, 43 (5): 963–964, doi:10.1177/0265813515621423, S2CID 63886416
5. Griffith, Daniel A. (April 2014), "Review of Spatial Mathematics", The AAG Review of Books, 2 (2): 65–67, doi:10.1080/2325548x.2014.901863
Spatial bifurcation
Spatial bifurcation is a form of bifurcation theory. Classical bifurcation analysis applies to systems of ordinary differential equations, which are independent of the spatial variables. However, most realistic systems are spatially dependent. To understand spatially varying systems (partial differential equations), some researchers treat a spatial variable as if it were time and use the AUTO package[1] to obtain bifurcation results.[2][3]
Weakly nonlinear analysis does not provide substantial insight into the nonlinear problem of pattern selection. To understand the pattern selection mechanism, the method of spatial dynamics is used,[4] which has proved to be an effective method for exploring the multiplicity of steady-state solutions.[3][5]
See also
• Spatial ecology
• Spatial pattern
References
1. "AUTO". indy.cs.concordia.ca.
2. Wang, R.H., Liu, Q.X., Sun, G.Q., Jin, Z., and Van de Koppel, J. (2010). "Nonlinear dynamic and pattern bifurcations in a model for spatial patterns in young mussel beds". Journal of the Royal Society Interface. 6 (37): 705–18. doi:10.1098/rsif.2008.0439. PMC 2839941. PMID 18986965.{{cite journal}}: CS1 maint: multiple names: authors list (link)
3. A Yochelis; et al. (2008). "The formation of labyrinths, spots and stripe patterns in a biochemical approach to cardiovascular calcification". New J. Phys. 10 055002 (5): 055002. arXiv:0712.3780. Bibcode:2008NJPh...10e5002Y. doi:10.1088/1367-2630/10/5/055002. S2CID 122889339.
4. Champneys A R (1998). "Homoclinic orbits in reversible systems and their applications in mechanics, fluids and optics". Physica D. 112 (1–2): 158–86. Bibcode:1998PhyD..112..158C. CiteSeerX 10.1.1.30.3556. doi:10.1016/S0167-2789(97)00209-1.
5. Edgar Knobloch (2008). "Spatially localized structures in dissipative systems: open problems". Nonlinearity. 21 (4): T45–60. doi:10.1088/0951-7715/21/4/T02. S2CID 43827510.
Spatial descriptive statistics
Spatial descriptive statistics is the intersection of spatial statistics and descriptive statistics; these methods are used for a variety of purposes in geography, particularly in quantitative data analyses involving Geographic Information Systems (GIS).
Types of spatial data
The simplest forms of spatial data are gridded data, in which a scalar quantity is measured for each point in a regular grid of points, and point sets, in which a set of coordinates (e.g. of points in the plane) is observed. An example of gridded data would be a satellite image of forest density that has been digitized on a grid. An example of a point set would be the latitude/longitude coordinates of all elm trees in a particular plot of land. More complicated forms of data include marked point sets and spatial time series.
Measures of spatial central tendency
The coordinate-wise mean of a point set is the centroid, which solves the same variational problem in the plane (or higher-dimensional Euclidean space) that the familiar average solves on the real line — that is, the centroid has the smallest possible average squared distance to all points in the set.
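This variational characterization can be checked numerically. The following sketch (illustrative, not from the article; the point set and function names are hypothetical) computes the centroid as a coordinate-wise mean and verifies that perturbing it increases the average squared distance:

```python
import numpy as np

# Hypothetical point set: the corners of a 2x2 square
pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
centroid = pts.mean(axis=0)  # coordinate-wise mean -> array([1., 1.])

def mean_sq_dist(p):
    """Average squared Euclidean distance from p to every point of the set."""
    return ((pts - np.asarray(p)) ** 2).sum(axis=1).mean()

# The centroid minimizes the average squared distance: any perturbation increases it.
assert mean_sq_dist(centroid) < mean_sq_dist(centroid + [0.1, 0.0])
```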
Measures of spatial dispersion
Dispersion captures the degree to which points in a point set are separated from each other. For most applications, spatial dispersion should be quantified in a way that is invariant to rotations and reflections. Several simple measures of spatial dispersion for a point set can be defined using the covariance matrix of the coordinates of the points. The trace, the determinant, and the largest eigenvalue of the covariance matrix can be used as measures of spatial dispersion.
A measure of spatial dispersion that is not based on the covariance matrix is the average distance between nearest neighbors.[1]
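The covariance-based measures and the nearest-neighbour average can be sketched in a few lines of NumPy (an illustrative sketch; the function name is hypothetical, and no edge or sampling corrections are applied):

```python
import numpy as np

def dispersion_measures(points):
    """Covariance-based dispersion measures plus the mean nearest-neighbour distance."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                      # covariance matrix of the coordinates
    trace = np.trace(cov)
    det = np.linalg.det(cov)
    largest_eig = np.linalg.eigvalsh(cov).max()
    # pairwise distances; ignore the zero self-distances on the diagonal
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    mean_nn = d.min(axis=1).mean()           # average distance to the nearest neighbour
    return trace, det, largest_eig, mean_nn
```

All four quantities are invariant under rotations and reflections of the point set, as desired.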
Measures of spatial homogeneity
A homogeneous set of points in the plane is a set that is distributed such that approximately the same number of points occurs in any circular region of a given area. A set of points that lacks homogeneity may be spatially clustered at a certain spatial scale. A simple probability model for spatially homogeneous points is the Poisson process in the plane with constant intensity function.
Ripley's K and L functions
Ripley's K and L functions introduced by Brian D. Ripley[2] are closely related descriptive statistics for detecting deviations from spatial homogeneity. The K function (technically its sample-based estimate) is defined as
${\widehat {K}}(t)=\lambda ^{-1}\sum _{i\neq j}{\frac {I(d_{ij}<t)}{n}},$
where dij is the Euclidean distance between the ith and jth points in a data set of n points, t is the search radius, λ is the average density of points (generally estimated as n/A, where A is the area of the region containing all points) and I is the indicator function (1 if its operand is true, 0 otherwise).[3] In 2 dimensions, if the points are approximately homogeneous, ${\widehat {K}}(t)$ should be approximately equal to πt2.
For data analysis, the variance stabilized Ripley K function called the L function is generally used. The sample version of the L function is defined as
${\widehat {L}}(t)=\left({\frac {{\widehat {K}}(t)}{\pi }}\right)^{1/2}.$
For approximately homogeneous data, the L function has expected value t and its variance is approximately constant in t. A common plot is a graph of $t-{\widehat {L}}(t)$ against t, which will approximately follow the horizontal zero-axis with constant dispersion if the data follow a homogeneous Poisson process.
Using Ripley's K function you can determine whether points have a random, dispersed or clustered distribution pattern at a certain scale.[4]
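The sample estimates $\widehat{K}$ and $\widehat{L}$ defined above can be sketched directly from their formulas. This is an illustrative, naive implementation (function names are hypothetical): practical estimators additionally apply edge corrections near the boundary of the study region, which are omitted here.

```python
import numpy as np

def ripley_k(points, t, area):
    """Naive estimate: K(t) = (1/(lambda*n)) * #{ordered pairs i != j with d_ij < t}."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    lam = n / area                           # estimated intensity lambda = n / A
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    count = (d < t).sum() - n                # discard the n zero self-distances
    return count / (lam * n)

def ripley_l(points, t, area):
    """Variance-stabilized L function; approximately t for homogeneous data."""
    return np.sqrt(ripley_k(points, t, area) / np.pi)
```

For points drawn from a homogeneous Poisson process, `ripley_k(points, t, area)` should be close to πt²; values well above that indicate clustering at scale t, values well below indicate dispersion.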
See also
• Geostatistics
• Variogram
• Correlogram
• Kriging
• Cuzick–Edwards test for clustering of sub-populations within clustered populations
• Spatial autocorrelation
References
1. Clark, Philip; Evans, Francis (1954). "Distance to nearest neighbor as a measure of spatial relationships in populations". Ecology. 35 (4): 445–453. doi:10.2307/1931034. JSTOR 1931034.
2. Ripley, B.D. (1976). "The second-order analysis of stationary point processes". Journal of Applied Probability. 13 (2): 255–266. doi:10.2307/3212829. JSTOR 3212829.
3. Dixon, Philip M. (2002). "Ripley's K function" (PDF). In El-Shaarawi, Abdel H.; Piegorsch, Walter W. (eds.). Encyclopedia of Environmetrics. John Wiley & Sons. pp. 1796–1803. ISBN 978-0-471-89997-6. Retrieved April 25, 2014.
4. Wilschut, L.I.; Laudisoit, A.; Hughes, N.K.; Addink, E.A.; de Jong, S.M.; Heesterbeek, J.A.P.; Reijniers, J.; Eagle, S.; Dubyanskiy, V.M.; Begon, M. (2015). "Spatial distribution patterns of plague hosts: point pattern analysis of the burrows of great gerbils in Kazakhstan". Journal of Biogeography. 42 (7): 1281–1292. doi:10.1111/jbi.12534. PMC 4737218. PMID 26877580.
Spatial network
A spatial network (sometimes also geometric graph) is a graph in which the vertices or edges are spatial elements associated with geometric objects, i.e., the nodes are located in a space equipped with a certain metric.[1][2] The simplest mathematical realizations of a spatial network are a lattice and a random geometric graph, in which nodes are distributed uniformly at random over a two-dimensional plane and a pair of nodes is connected if their Euclidean distance is smaller than a given neighborhood radius. Transportation and mobility networks, the Internet, mobile phone networks, power grids, social and contact networks and biological neural networks are all examples where the underlying space is relevant and where the graph's topology alone does not contain all the information. Characterizing and understanding the structure, resilience and the evolution of spatial networks is crucial for many different fields ranging from urbanism to epidemiology.
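The random geometric graph mentioned above can be built in a few lines. The following is an illustrative sketch (function name and parameters are hypothetical): nodes are placed uniformly at random in the unit square, and an edge joins every pair closer than the neighborhood radius.

```python
import numpy as np

def random_geometric_graph(n, radius, seed=0):
    """n nodes uniform in the unit square; undirected edge whenever distance < radius."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))                 # node coordinates in [0, 1)^2
    d = np.sqrt(((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1))
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if d[i, j] < radius]
    return pos, edges
```

Varying `radius` sweeps the graph from empty (radius 0) to complete (radius above the square's diameter √2), passing through a percolation transition in between.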
Examples
An urban spatial network can be constructed by abstracting intersections as nodes and streets as links, which is referred to as a transportation network.
One might think of the 'space map' as being the negative image of the standard map, with the open space cut out of the background buildings or walls.[3]
Characterizing spatial networks
The following aspects are some of the characteristics to examine a spatial network:[1]
• Planar networks
In many applications, such as railways, roads, and other transportation networks, the network is assumed to be planar. Planar networks form an important class of spatial networks, but not all spatial networks are planar. Airline passenger networks are a non-planar example: many large airports around the world are connected through direct flights.
• The way it is embedded in space
There are examples of networks that seem not to be "directly" embedded in space. Social networks, for instance, connect individuals through friendship relations. But in this case, space intervenes through the fact that the connection probability between two individuals usually decreases with the distance between them.
• Voronoi tessellation
A spatial network can be represented by a Voronoi diagram, which is a way of dividing space into a number of regions. The dual graph for a Voronoi diagram corresponds to the Delaunay triangulation for the same set of points. Voronoi tessellations are interesting for spatial networks in the sense that they provide a natural representation model to which one can compare a real world network.
• Mixing space and topology
Examining the topology of the nodes and edges itself is another way to characterize networks. The degree distribution of the nodes is often considered; regarding the structure of the edges, it is useful to find the minimum spanning tree or its generalizations, the Steiner tree and the relative neighborhood graph.
Probability and spatial networks
In the real world, many aspects of networks are not deterministic; randomness plays an important role. For example, new links representing friendships in social networks are in a certain manner random. It is therefore natural to model spatial networks using stochastic processes. In many cases the spatial Poisson process is used to approximate data sets of processes on spatial networks. Other stochastic aspects of interest are:
• The Poisson line process
• Stochastic geometry: the Erdős–Rényi graph
• Percolation theory
Approach from the theory of space syntax
Another definition of spatial network derives from the theory of space syntax. It can be notoriously difficult to decide what a spatial element should be in complex spaces involving large open areas or many interconnected paths. The originators of space syntax, Bill Hillier and Julienne Hanson use axial lines and convex spaces as the spatial elements. Loosely, an axial line is the 'longest line of sight and access' through open space, and a convex space the 'maximal convex polygon' that can be drawn in open space. Each of these elements is defined by the geometry of the local boundary in different regions of the space map. Decomposition of a space map into a complete set of intersecting axial lines or overlapping convex spaces produces the axial map or overlapping convex map respectively. Algorithmic definitions of these maps exist, and this allows the mapping from an arbitrary shaped space map to a network amenable to graph mathematics to be carried out in a relatively well defined manner. Axial maps are used to analyse urban networks, where the system generally comprises linear segments, whereas convex maps are more often used to analyse building plans where space patterns are often more convexly articulated, however both convex and axial maps may be used in either situation.
Currently, there is a move within the space syntax community to integrate better with geographic information systems (GIS), and much of the software they produce interlinks with commercially available GIS systems.
History
While networks and graphs had long been the subject of many studies in mathematics, physics, mathematical sociology, and computer science, spatial networks were also studied intensively during the 1970s in quantitative geography. Objects of study in geography include locations, activities, and flows of individuals, but also networks evolving in time and space.[4] Most of the important problems, such as the location of nodes of a network, the evolution of transportation networks, and their interaction with population and activity density, were addressed in these earlier studies. On the other hand, many important points remain unclear, partly because at that time datasets of large networks and large computing capabilities were lacking. Recently, spatial networks have been the subject of studies in statistics, to connect probabilities and stochastic processes with networks in the real world.[5]
See also
• Hyperbolic geometric graph
• Spatial network analysis software
• Cascading failure
• Complex network
• Planar graphs
• Percolation theory
• Modularity (networks)
• Random graphs
• Topological graph theory
• Small-world network
• Chemical graph
• Interdependent networks
References
1. Barthelemy, M. (2011). "Spatial Networks". Physics Reports. 499 (1–3): 1–101. arXiv:1010.0302. Bibcode:2011PhR...499....1B. doi:10.1016/j.physrep.2010.11.002. S2CID 4627021.
2. M. Barthelemy, "Morphogenesis of Spatial Networks", Springer (2018).
3. Hillier B, Hanson J, 1984, The social logic of space (Cambridge University Press, Cambridge, UK).
4. P. Haggett and R.J. Chorley. Network analysis in geography. Edward Arnold, London, 1969.
5. "Spatial Networks". Archived from the original on 2014-01-10. Retrieved 2014-01-10.
• Bandelt, Hans-Jürgen; Chepoi, Victor (2008). "Metric graph theory and geometry: a survey" (PDF). Contemp. Math. Contemporary Mathematics. 453: 49–86. doi:10.1090/conm/453/08795. ISBN 9780821842393. Archived from the original (PDF) on 2006-11-25.
• Pach, János; et al. (2004). Towards a Theory of Geometric Graphs. Contemporary Mathematics, no. 342, American Mathematical Society.
• Pisanski, Tomaž; Randić, Milan (2000). "Bridges between geometry and graph theory". In Gorini, C. A. (ed.). Geometry at Work: Papers in Applied Geometry. Washington, DC: Mathematical Association of America. pp. 174–194. Archived from the original on 2007-09-27.
Spearman's rank correlation coefficient
In statistics, Spearman's rank correlation coefficient or Spearman's ρ, named after Charles Spearman and often denoted by the Greek letter $\rho $ (rho) or as $r_{s}$, is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables). It assesses how well the relationship between two variables can be described using a monotonic function.
The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.
Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables.
Spearman's coefficient is appropriate for both continuous and discrete ordinal variables.[1][2] Both Spearman's $\rho $ and Kendall's $\tau $ can be formulated as special cases of a more general correlation coefficient.
Definition and calculation
The Spearman correlation coefficient is defined as the Pearson correlation coefficient between the rank variables.[3]
For a sample of size n, the n raw scores $X_{i},Y_{i}$ are converted to ranks $\operatorname {R} ({X_{i}}),\operatorname {R} ({Y_{i}})$, and $r_{s}$ is computed as
$r_{s}=\rho _{\operatorname {R} (X),\operatorname {R} (Y)}={\frac {\operatorname {cov} (\operatorname {R} (X),\operatorname {R} (Y))}{\sigma _{\operatorname {R} (X)}\sigma _{\operatorname {R} (Y)}}},$
where
$\rho $ denotes the usual Pearson correlation coefficient, but applied to the rank variables,
$\operatorname {cov} (\operatorname {R} (X),\operatorname {R} (Y))$ is the covariance of the rank variables,
$\sigma _{\operatorname {R} (X)}$ and $\sigma _{\operatorname {R} (Y)}$ are the standard deviations of the rank variables.
If all n ranks are distinct integers, it can be computed using the popular formula
$r_{s}=1-{\frac {6\sum d_{i}^{2}}{n(n^{2}-1)}},$
where
$d_{i}=\operatorname {R} (X_{i})-\operatorname {R} (Y_{i})$ is the difference between the two ranks of each observation,
n is the number of observations.
[Proof]
Consider a bivariate sample $(x_{i},y_{i}),\,i=1\dots ,n$ with corresponding ranks $(R(X_{i}),R(Y_{i}))=(R_{i},S_{i})$. Then the Spearman correlation coefficient of $x,y$ is
$r_{s}={\frac {{\frac {1}{n}}\sum _{i=1}^{n}R_{i}S_{i}-{\overline {R}}\,{\overline {S}}}{\sigma _{R}\sigma _{S}}},$
where, as usual, ${\overline {R}}=\textstyle {\frac {1}{n}}\textstyle \sum _{i=1}^{n}R_{i}$, ${\overline {S}}=\textstyle {\frac {1}{n}}\textstyle \sum _{i=1}^{n}S_{i}$, $\sigma _{R}^{2}=\textstyle {\frac {1}{n}}\textstyle \sum _{i=1}^{n}(R_{i}-{\overline {R}})^{2}$, and $\sigma _{S}^{2}=\textstyle {\frac {1}{n}}\textstyle \sum _{i=1}^{n}(S_{i}-{\overline {S}})^{2}$,
We shall show that $r_{s}$ can be expressed purely in terms of $d_{i}:=R_{i}-S_{i}$, provided we assume that there are no ties within each sample.
Under this assumption, we have that $R,S$ can be viewed as random variables distributed like a uniformly distributed random variable, $U$, on $\{1,2,\ldots ,n\}$. Hence ${\overline {R}}={\overline {S}}=\mathbb {E} [U]$ and $\sigma _{R}^{2}=\sigma _{S}^{2}=\mathrm {Var} (U)=\mathbb {E} [U^{2}]-\mathbb {E} [U]^{2}$, where $\mathbb {E} [U]=\textstyle {\frac {1}{n}}\textstyle \sum _{i=1}^{n}i=\textstyle {\frac {(n+1)}{2}}$, $\mathbb {E} [U^{2}]=\textstyle {\frac {1}{n}}\textstyle \sum _{i=1}^{n}i^{2}=\textstyle {\frac {(n+1)(2n+1)}{6}}$, and thus $\mathrm {Var} (U)=\textstyle {\frac {(n+1)(2n+1)}{6}}-\left(\textstyle {\frac {(n+1)}{2}}\right)^{2}=\textstyle {\frac {n^{2}-1}{12}}$. (These sums can be computed using the formulas for the triangular number and Square pyramidal number, or basic summation results from discrete mathematics.)
Observe now that
${\begin{aligned}{\frac {1}{n}}\sum _{i=1}^{n}R_{i}S_{i}-{\overline {R}}{\overline {S}}&={\frac {1}{n}}\sum _{i=1}^{n}{\frac {1}{2}}(R_{i}^{2}+S_{i}^{2}-d_{i}^{2})-{\overline {R}}^{2}\\&={\frac {1}{2}}{\frac {1}{n}}\sum _{i=1}^{n}R_{i}^{2}+{\frac {1}{2}}{\frac {1}{n}}\sum _{i=1}^{n}S_{i}^{2}-{\frac {1}{2n}}\sum _{i=1}^{n}d_{i}^{2}-{\overline {R}}^{2}\\&=({\frac {1}{n}}\sum _{i=1}^{n}R_{i}^{2}-{\overline {R}}^{2})-{\frac {1}{2n}}\sum _{i=1}^{n}d_{i}^{2}\\&=\sigma _{R}^{2}-{\frac {1}{2n}}\sum _{i=1}^{n}d_{i}^{2}\\&=\sigma _{R}\sigma _{S}-{\frac {1}{2n}}\sum _{i=1}^{n}d_{i}^{2}\\\end{aligned}}$
Putting this all together thus yields
$r_{s}={\frac {\sigma _{R}\sigma _{S}-{\frac {1}{2n}}\sum _{i=1}^{n}d_{i}^{2}}{\sigma _{R}\sigma _{S}}}=1-{\frac {\sum _{i=1}^{n}d_{i}^{2}}{2n\cdot {\frac {n^{2}-1}{12}}}}=1-{\frac {6\sum _{i=1}^{n}d_{i}^{2}}{n(n^{2}-1)}}.$
Identical values are usually[4] each assigned fractional ranks equal to the average of their positions in the ascending order of the values, which is equivalent to averaging over all possible permutations.
If ties are present in the data set, the simplified formula above yields incorrect results: only when all ranks in both variables are distinct does $\sigma _{\operatorname {R} (X)}\sigma _{\operatorname {R} (Y)}=\operatorname {Var} {(\operatorname {R} (X))}=\operatorname {Var} {(\operatorname {R} (Y))}=(n^{2}-1)/12$ hold (calculated with the biased variance). The first equation, which normalizes by the standard deviations, may be used even when ranks are normalized to [0, 1] ("relative ranks"), because it is insensitive both to translation and to linear scaling.
The simplified method should also not be used in cases where the data set is truncated; that is, when the Spearman's correlation coefficient is desired for the top X records (whether by pre-change rank or post-change rank, or both), the user should use the Pearson correlation coefficient formula given above.[5]
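The general (tie-safe) definition amounts to assigning fractional ranks and taking the Pearson correlation of the rank vectors. A minimal sketch, assuming average ranks for ties as described above (function names are illustrative):

```python
import numpy as np

def average_ranks(v):
    """Fractional ranks: tied values share the average of the positions they occupy."""
    v = np.asarray(v, dtype=float)
    order = np.argsort(v, kind="stable")
    ranks = np.empty(len(v))
    i = 0
    while i < len(v):
        j = i
        while j < len(v) and v[order[j]] == v[order[i]]:
            j += 1                            # extend the run of tied values
        ranks[order[i:j]] = (i + j + 1) / 2   # mean of the 1-based positions i+1 .. j
        i = j
    return ranks

def spearman(x, y):
    """Spearman's rho as the Pearson correlation of the (tie-averaged) ranks."""
    return np.corrcoef(average_ranks(x), average_ranks(y))[0, 1]
```

Because the Pearson formula is applied to the ranks, this version remains valid when ties are present, unlike the $d_i^2$ shortcut.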
Related quantities
Main article: Correlation and dependence
There are several other numerical measures that quantify the extent of statistical dependence between pairs of observations. The most common of these is the Pearson product-moment correlation coefficient, a correlation method similar to Spearman's rank correlation that measures the "linear" relationship between the raw numbers rather than between their ranks.
An alternative name for the Spearman rank correlation is the “grade correlation”;[6] in this, the “rank” of an observation is replaced by the “grade”. In continuous distributions, the grade of an observation is, by convention, always one half less than the rank, and hence the grade and rank correlations are the same in this case. More generally, the “grade” of an observation is proportional to an estimate of the fraction of a population less than a given value, with the half-observation adjustment at observed values. Thus this corresponds to one possible treatment of tied ranks. While unusual, the term “grade correlation” is still in use.[7]
Interpretation
Positive and negative Spearman rank correlations
A positive Spearman correlation coefficient corresponds to an increasing monotonic trend between X and Y.
A negative Spearman correlation coefficient corresponds to a decreasing monotonic trend between X and Y.
The sign of the Spearman correlation indicates the direction of association between X (the independent variable) and Y (the dependent variable). If Y tends to increase when X increases, the Spearman correlation coefficient is positive. If Y tends to decrease when X increases, the Spearman correlation coefficient is negative. A Spearman correlation of zero indicates that there is no tendency for Y to either increase or decrease when X increases. The Spearman correlation increases in magnitude as X and Y become closer to being perfectly monotone functions of each other. When X and Y are perfectly monotonically related, the Spearman correlation coefficient becomes 1. A perfectly monotone increasing relationship implies that for any two pairs of data values Xi, Yi and Xj, Yj, that Xi − Xj and Yi − Yj always have the same sign. A perfectly monotone decreasing relationship implies that these differences always have opposite signs.
The Spearman correlation coefficient is often described as being "nonparametric". This can have two meanings. First, a perfect Spearman correlation results when X and Y are related by any monotonic function. Contrast this with the Pearson correlation, which only gives a perfect value when X and Y are related by a linear function. The other sense in which the Spearman correlation is nonparametric is that its exact sampling distribution can be obtained without requiring knowledge (i.e., knowing the parameters) of the joint probability distribution of X and Y.
Example
In this example, the arbitrary raw data in the table below are used to calculate the correlation between the IQ of a person and the number of hours spent watching TV per week (fictitious values).
IQ, $X_{i}$ Hours of TV per week, $Y_{i}$
106 7
100 27
86 2
101 50
99 28
103 29
97 20
113 12
112 6
110 17
Firstly, evaluate $d_{i}^{2}$. To do so use the following steps, reflected in the table below.
1. Sort the data by the first column ($X_{i}$). Create a new column $x_{i}$ and assign it the ranked values 1, 2, 3, ..., n.
2. Next, sort the data by the second column ($Y_{i}$). Create a fourth column $y_{i}$ and similarly assign it the ranked values 1, 2, 3, ..., n.
3. Create a fifth column $d_{i}$ to hold the differences between the two rank columns ($x_{i}$ and $y_{i}$).
4. Create one final column $d_{i}^{2}$ to hold the value of column $d_{i}$ squared.
IQ, $X_{i}$ Hours of TV per week, $Y_{i}$ rank $x_{i}$ rank $y_{i}$ $d_{i}$ $d_{i}^{2}$
86 2 1 1 0 0
97 20 2 6 −4 16
99 28 3 8 −5 25
100 27 4 7 −3 9
101 50 5 10 −5 25
103 29 6 9 −3 9
106 7 7 3 4 16
110 17 8 5 3 9
112 6 9 2 7 49
113 12 10 4 6 36
With $d_{i}^{2}$ found, add them to find $\sum d_{i}^{2}=194$. The value of n is 10. These values can now be substituted back into the equation
$\rho =1-{\frac {6\sum d_{i}^{2}}{n(n^{2}-1)}}$
to give
$\rho =1-{\frac {6\times 194}{10(10^{2}-1)}},$
which evaluates to ρ = −29/165 = −0.175757575... with a p-value = 0.627188 (using the t-distribution).
That the value is close to zero shows that the correlation between IQ and hours spent watching TV is very low, although the negative value suggests that the longer the time spent watching television the lower the IQ. In the case of ties in the original values, this formula should not be used; instead, the Pearson correlation coefficient should be calculated on the ranks (where ties are given ranks, as described above).
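The hand calculation above can be reproduced with a short script. This is an illustrative sketch using the table's data; the `ranks` helper assumes no tied values, matching the example.

```python
# Recompute the worked example: rank each column, take squared rank
# differences, and apply the no-ties formula.
iq    = [106, 100, 86, 101, 99, 103, 97, 113, 112, 110]
hours = [7, 27, 2, 50, 28, 29, 20, 12, 6, 17]

def ranks(values):
    # Rank 1 = smallest value; valid here because there are no ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

rx, ry = ranks(iq), ranks(hours)
d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))   # 194
n = len(iq)
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))            # -29/165 = -0.17575...
```

Running this reproduces $\sum d_{i}^{2}=194$ and ρ = −29/165 from the example.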
Confidence intervals
Confidence intervals for Spearman's ρ can be easily obtained using the Jackknife Euclidean likelihood approach in de Carvalho and Marques (2012).[8] The confidence interval with level $\alpha $ is based on a Wilks' theorem given in the latter paper, and is given by
$\left\{\theta :{\frac {\{\sum _{i=1}^{n}(Z_{i}-\theta )\}^{2}}{\sum _{i=1}^{n}(Z_{i}-\theta )^{2}}}\leq \mathrm {X} _{1,\alpha }^{2}\right\},$
where $\mathrm {X} _{1,\alpha }^{2}$ is the $\alpha $ quantile of a chi-square distribution with one degree of freedom, and the $Z_{i}$ are jackknife pseudo-values. This approach is implemented in the R package spearmanCI.
Determining significance
One approach to testing whether an observed value of ρ is significantly different from zero (r always satisfies −1 ≤ r ≤ 1) is to calculate the probability that it would be greater than or equal to the observed r, given the null hypothesis, using a permutation test. An advantage of this approach is that it automatically takes into account the number of tied data values in the sample and the way they are treated in computing the rank correlation.
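A permutation test of this kind can be sketched as follows. This is an illustrative implementation, not a specific library's routine; the `spearman` helper uses the no-ties formula, so it assumes distinct values in each variable.

```python
import random

def spearman(x, y):
    # Spearman's rho via the no-ties formula (assumes distinct values).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for k, i in enumerate(order, 1):
            r[i] = k
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    # Shuffle one variable to simulate the null of independence and count
    # permutations whose |rho| is at least as large as the observed one.
    rng = random.Random(seed)
    observed = abs(spearman(x, y))
    y = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y)
        if abs(spearman(x, y)) >= observed:
            hits += 1
    return hits / n_perm
```

For perfectly monotone data the observed |ρ| is 1, so almost no shuffled arrangement matches it and the estimated p-value is near zero.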
Another approach parallels the use of the Fisher transformation in the case of the Pearson product-moment correlation coefficient. That is, confidence intervals and hypothesis tests relating to the population value ρ can be carried out using the Fisher transformation:
$F(r)={\frac {1}{2}}\ln {\frac {1+r}{1-r}}=\operatorname {artanh} r.$
If F(r) is the Fisher transformation of r, the sample Spearman rank correlation coefficient, and n is the sample size, then
$z={\sqrt {\frac {n-3}{1.06}}}F(r)$
is a z-score for r, which approximately follows a standard normal distribution under the null hypothesis of statistical independence (ρ = 0).[9][10]
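Applied to the worked example above (r = −29/165, n = 10), the transformation and z-score can be checked directly; the numbers here are illustrative, not a prescription.

```python
import math

# Fisher transformation and z-score for the example's r = -29/165, n = 10.
r, n = -29 / 165, 10
F = 0.5 * math.log((1 + r) / (1 - r))   # equals artanh(r)
z = math.sqrt((n - 3) / 1.06) * F       # approximately N(0, 1) under rho = 0
```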
One can also test for significance using
$t=r{\sqrt {\frac {n-2}{1-r^{2}}}},$
which is distributed approximately as Student's t-distribution with n − 2 degrees of freedom under the null hypothesis.[11] A justification for this result relies on a permutation argument.[12]
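For the worked example above, this statistic is easy to evaluate; with n − 2 = 8 degrees of freedom the resulting t is consistent with the p-value of about 0.627 quoted earlier.

```python
import math

# t statistic for the example's r = -29/165 with n = 10 observations.
r, n = -29 / 165, 10
t = r * math.sqrt((n - 2) / (1 - r ** 2))   # about -0.505
```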
A generalization of the Spearman coefficient is useful in the situation where there are three or more conditions, a number of subjects are all observed in each of them, and it is predicted that the observations will have a particular order. For example, a number of subjects might each be given three trials at the same task, and it is predicted that performance will improve from trial to trial. A test of the significance of the trend between conditions in this situation was developed by E. B. Page[13] and is usually referred to as Page's trend test for ordered alternatives.
Correspondence analysis based on Spearman's ρ
Classic correspondence analysis is a statistical method that gives a score to every value of two nominal variables. In this way the Pearson correlation coefficient between them is maximized.
There exists an equivalent of this method, called grade correspondence analysis, which maximizes Spearman's ρ or Kendall's τ.[14]
Approximating Spearman's ρ from a stream
There are two existing approaches to approximating the Spearman's rank correlation coefficient from streaming data.[15][16] The first approach[15] involves coarsening the joint distribution of $(X,Y)$. For continuous $X,Y$ values: $m_{1},m_{2}$ cutpoints are selected for $X$ and $Y$ respectively, discretizing these random variables. Default cutpoints are added at $-\infty $ and $\infty $. A count matrix of size $(m_{1}+1)\times (m_{2}+1)$, denoted $M$, is then constructed where $M[i,j]$ stores the number of observations that fall into the two-dimensional cell indexed by $(i,j)$. For streaming data, when a new observation arrives, the appropriate $M[i,j]$ element is incremented. The Spearman's rank correlation can then be computed, based on the count matrix $M$, using linear algebra operations (Algorithm 2[15]). Note that for discrete random variables, no discretization procedure is necessary. This method is applicable to stationary streaming data as well as large data sets. For non-stationary streaming data, where the Spearman's rank correlation coefficient may change over time, the same procedure can be applied, but to a moving window of observations. When using a moving window, memory requirements grow linearly with chosen window size.
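The count-matrix idea can be sketched as follows. This is a schematic simplification, not the paper's exact Algorithm 2: it maintains the $(m_{1}+1)\times(m_{2}+1)$ count matrix under streaming updates and then estimates Spearman's ρ as the Pearson correlation of the midranks of the discretized values; the cutpoints below are hypothetical.

```python
import bisect

class StreamingSpearman:
    """Count-matrix approximation of Spearman's rho for streaming data."""
    def __init__(self, x_cuts, y_cuts):
        self.x_cuts, self.y_cuts = sorted(x_cuts), sorted(y_cuts)
        nx, ny = len(x_cuts) + 1, len(y_cuts) + 1
        self.M = [[0] * ny for _ in range(nx)]   # cell counts
        self.n = 0

    def update(self, x, y):
        # Increment the cell that the new observation falls into.
        i = bisect.bisect_right(self.x_cuts, x)
        j = bisect.bisect_right(self.y_cuts, y)
        self.M[i][j] += 1
        self.n += 1

    def estimate(self):
        # Midranks of the discretized X and Y values.
        row_tot = [sum(r) for r in self.M]
        col_tot = [sum(r[j] for r in self.M) for j in range(len(self.M[0]))]
        def midranks(tot):
            out, below = [], 0
            for t in tot:
                out.append(below + (t + 1) / 2)
                below += t
            return out
        rx, ry = midranks(row_tot), midranks(col_tot)
        # Pearson correlation of midranks, weighted by cell counts.
        n = self.n
        mx = sum(row_tot[i] * rx[i] for i in range(len(rx))) / n
        my = sum(col_tot[j] * ry[j] for j in range(len(ry))) / n
        sxy = sum(self.M[i][j] * (rx[i] - mx) * (ry[j] - my)
                  for i in range(len(rx)) for j in range(len(ry)))
        sxx = sum(row_tot[i] * (rx[i] - mx) ** 2 for i in range(len(rx)))
        syy = sum(col_tot[j] * (ry[j] - my) ** 2 for j in range(len(ry)))
        return sxy / (sxx * syy) ** 0.5

# Illustration on a perfectly monotone stream with hypothetical cutpoints.
cuts = [i + 0.5 for i in range(9)]
s = StreamingSpearman(cuts, cuts)
for i in range(10):
    s.update(i, i)
```

Finer grids of cutpoints give a closer approximation to the exact rank correlation, at the cost of a larger count matrix.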
The second approach to approximating the Spearman's rank correlation coefficient from streaming data involves the use of Hermite series based estimators.[16] These estimators, based on Hermite polynomials, allow sequential estimation of the probability density function and cumulative distribution function in univariate and bivariate cases. Bivariate Hermite series density estimators and univariate Hermite series based cumulative distribution function estimators are plugged into a large sample version of the Spearman's rank correlation coefficient estimator, to give a sequential Spearman's correlation estimator. This estimator is phrased in terms of linear algebra operations for computational efficiency (equation (8) and algorithm 1 and 2[16]). These algorithms are only applicable to continuous random variable data, but have certain advantages over the count matrix approach in this setting. The first advantage is improved accuracy when applied to large numbers of observations. The second advantage is that the Spearman's rank correlation coefficient can be computed on non-stationary streams without relying on a moving window. Instead, the Hermite series based estimator uses an exponential weighting scheme to track time-varying Spearman's rank correlation from streaming data, which has constant memory requirements with respect to "effective" moving window size. A software implementation of these Hermite series based algorithms exists [17] and is discussed in Software implementations.
Software implementations
• R's base "stats" package implements the test as cor.test(x, y, method = "spearman"); the coefficient alone can be computed with cor(x, y, method = "spearman"). The package spearmanCI computes confidence intervals. The package hermiter[17] computes fast batch estimates of the Spearman correlation along with sequential estimates (i.e. estimates that are updated in an online/incremental manner as new observations are incorporated).
• Stata implementation: spearman varlist calculates all pairwise correlation coefficients for all variables in varlist.
• MATLAB implementation: [r,p] = corr(x,y,'Type','Spearman') where r is the Spearman's rank correlation coefficient, p is the p-value, and x and y are vectors.[18]
• Python has several implementations of the Spearman correlation statistic: it can be computed with the spearmanr function of the scipy.stats module, as well as with the DataFrame.corr(method='spearman') method from the pandas library, and the corr(x, y, method='spearman') function from the statistical package pingouin.
See also
• Kendall tau rank correlation coefficient
• Chebyshev's sum inequality, rearrangement inequality (These two articles may shed light on the mathematical properties of Spearman's ρ.)
• Distance correlation
• Polychoric correlation
References
1. Scale types.
2. Lehman, Ann (2005). Jmp For Basic Univariate And Multivariate Statistics: A Step-by-step Guide. Cary, NC: SAS Press. p. 123. ISBN 978-1-59047-576-8.
3. Myers, Jerome L.; Well, Arnold D. (2003). Research Design and Statistical Analysis (2nd ed.). Lawrence Erlbaum. pp. 508. ISBN 978-0-8058-4037-7.
4. Dodge, Yadolah (2010). The Concise Encyclopedia of Statistics. Springer-Verlag New York. p. 502. ISBN 978-0-387-31742-7.
5. Al Jaber, Ahmed Odeh; Elayyan, Haifaa Omar (2018). Toward Quality Assurance and Excellence in Higher Education. River Publishers. p. 284. ISBN 978-87-93609-54-9.
6. Yule, G. U.; Kendall, M. G. (1968) [1950]. An Introduction to the Theory of Statistics (14th ed.). Charles Griffin & Co. p. 268.
7. Piantadosi, J.; Howlett, P.; Boland, J. (2007). "Matching the grade correlation coefficient using a copula with maximum disorder". Journal of Industrial and Management Optimization. 3 (2): 305–312. doi:10.3934/jimo.2007.3.305.
8. de Carvalho, M.; Marques, F. (2012). "Jackknife Euclidean likelihood-based inference for Spearman's rho" (PDF). North American Actuarial Journal. 16 (4): 487‒492. doi:10.1080/10920277.2012.10597644. S2CID 55046385.
9. Choi, S. C. (1977). "Tests of Equality of Dependent Correlation Coefficients". Biometrika. 64 (3): 645–647. doi:10.1093/biomet/64.3.645.
10. Fieller, E. C.; Hartley, H. O.; Pearson, E. S. (1957). "Tests for rank correlation coefficients. I". Biometrika. 44 (3–4): 470–481. CiteSeerX 10.1.1.474.9634. doi:10.1093/biomet/44.3-4.470.
11. Press; Vettering; Teukolsky; Flannery (1992). Numerical Recipes in C: The Art of Scientific Computing (2nd ed.). Cambridge University Press. p. 640. ISBN 9780521437202.
12. Kendall, M. G.; Stuart, A. (1973). "Sections 31.19, 31.21". The Advanced Theory of Statistics, Volume 2: Inference and Relationship. Griffin. ISBN 978-0-85264-215-3.
13. Page, E. B. (1963). "Ordered hypotheses for multiple treatments: A significance test for linear ranks". Journal of the American Statistical Association. 58 (301): 216–230. doi:10.2307/2282965. JSTOR 2282965.
14. Kowalczyk, T.; Pleszczyńska, E.; Ruland, F., eds. (2004). Grade Models and Methods for Data Analysis with Applications for the Analysis of Data Populations. Studies in Fuzziness and Soft Computing. Vol. 151. Berlin Heidelberg New York: Springer Verlag. ISBN 978-3-540-21120-4.
15. Xiao, W. (2019). "Novel Online Algorithms for Nonparametric Correlations with Application to Analyze Sensor Data". 2019 IEEE International Conference on Big Data (Big Data). pp. 404–412. doi:10.1109/BigData47090.2019.9006483. ISBN 978-1-7281-0858-2. S2CID 211298570.
16. Stephanou, Michael; Varughese, Melvin (July 2021). "Sequential estimation of Spearman rank correlation using Hermite series estimators". Journal of Multivariate Analysis. 186: 104783. arXiv:2012.06287. doi:10.1016/j.jmva.2021.104783. S2CID 235742634.
17. Stephanou, M. and Varughese, M (2023). "Hermiter: R package for sequential nonparametric estimation". Computational Statistics. arXiv:2111.14091. doi:10.1007/s00180-023-01382-0. S2CID 244715035.{{cite journal}}: CS1 maint: multiple names: authors list (link)
18. "Linear or rank correlation - MATLAB corr". www.mathworks.com.
Further reading
• Corder, G. W. & Foreman, D. I. (2014). Nonparametric Statistics: A Step-by-Step Approach, Wiley. ISBN 978-1118840313.
• Daniel, Wayne W. (1990). "Spearman rank correlation coefficient". Applied Nonparametric Statistics (2nd ed.). Boston: PWS-Kent. pp. 358–365. ISBN 978-0-534-91976-4.
• Spearman C. (1904). "The proof and measurement of association between two things". American Journal of Psychology. 15 (1): 72–101. doi:10.2307/1412159. JSTOR 1412159.
• Bonett D. G., Wright, T. A. (2000). "Sample size requirements for Pearson, Kendall, and Spearman correlations". Psychometrika. 65: 23–28. doi:10.1007/bf02294183. S2CID 120558581.{{cite journal}}: CS1 maint: multiple names: authors list (link)
• Kendall M. G. (1970). Rank correlation methods (4th ed.). London: Griffin. ISBN 978-0-852-6419-96. OCLC 136868.
• Hollander M., Wolfe D. A. (1973). Nonparametric statistical methods. New York: Wiley. ISBN 978-0-471-40635-8. OCLC 520735.
• Caruso J. C., Cliff N. (1997). "Empirical size, coverage, and power of confidence intervals for Spearman's Rho". Educational and Psychological Measurement. 57 (4): 637–654. doi:10.1177/0013164497057004009. S2CID 120481551.
External links
Wikiversity has learning resources about Spearman's rank correlation coefficient
• Table of critical values of ρ for significance with small samples
• Spearman’s Rank Correlation Coefficient – Excel Guide: sample data and formulae for Excel, developed by the Royal Geographical Society.
Specht's theorem
In mathematics, Specht's theorem gives a necessary and sufficient condition for two complex matrices to be unitarily equivalent. It is named after Wilhelm Specht, who proved the theorem in 1940.[1]
Two matrices A and B with complex number entries are said to be unitarily equivalent if there exists a unitary matrix U such that B = U *AU.[2] Two matrices which are unitarily equivalent are also similar. Two similar matrices represent the same linear map, but with respect to a different basis; unitary equivalence corresponds to a change from an orthonormal basis to another orthonormal basis.
If A and B are unitarily equivalent, then tr AA* = tr BB*, where tr denotes the trace (in other words, the Frobenius norm is a unitary invariant). This follows from the cyclic invariance of the trace: if B = U *AU, then tr BB* = tr U *AUU *A*U = tr AUU *A*UU * = tr AA*, where the second equality is cyclic invariance.[3]
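This invariance is easy to confirm numerically (a spot check, not a proof): form a random unitary U via a QR decomposition and compare the traces for B = U*AU.

```python
import numpy as np

# For B = U*AU with U unitary, tr BB* must equal tr AA*.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
B = Q.conj().T @ A @ Q
assert np.isclose(np.trace(A @ A.conj().T), np.trace(B @ B.conj().T))
```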
Thus, tr AA* = tr BB* is a necessary condition for unitary equivalence, but it is not sufficient. Specht's theorem gives infinitely many necessary conditions which together are also sufficient. The formulation of the theorem uses the following definition. A word in two variables, say x and y, is an expression of the form
$W(x,y)=x^{m_{1}}y^{n_{1}}x^{m_{2}}y^{n_{2}}\cdots x^{m_{p}},$
where m1, n1, m2, n2, …, mp are non-negative integers. The degree of this word is
$m_{1}+n_{1}+m_{2}+n_{2}+\cdots +m_{p}.$
Specht's theorem: Two matrices A and B are unitarily equivalent if and only if tr W(A, A*) = tr W(B, B*) for all words W.[4]
The theorem gives an infinite number of trace identities, but it can be reduced to a finite subset. Let n denote the size of the matrices A and B. For the case n = 2, the following three conditions are sufficient:[5]
$\operatorname {tr} \,A=\operatorname {tr} \,B,\quad \operatorname {tr} \,A^{2}=\operatorname {tr} \,B^{2},\quad {\text{and}}\quad \operatorname {tr} \,AA^{*}=\operatorname {tr} \,BB^{*}.$
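The necessity direction of these three conditions can be spot-checked numerically for a unitarily equivalent pair of 2 × 2 matrices (an illustration only; sufficiency is the substantive part of the theorem).

```python
import numpy as np

# For B = U*AU with U unitary, the three n = 2 trace conditions hold.
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
B = U.conj().T @ A @ U
for f in (lambda M: np.trace(M),
          lambda M: np.trace(M @ M),
          lambda M: np.trace(M @ M.conj().T)):
    assert np.isclose(f(A), f(B))
```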
For n = 3, the following seven conditions are sufficient:
${\begin{aligned}&\operatorname {tr} \,A=\operatorname {tr} \,B,\quad \operatorname {tr} \,A^{2}=\operatorname {tr} \,B^{2},\quad \operatorname {tr} \,AA^{*}=\operatorname {tr} \,BB^{*},\quad \operatorname {tr} \,A^{3}=\operatorname {tr} \,B^{3},\\&\operatorname {tr} \,A^{2}A^{*}=\operatorname {tr} \,B^{2}B^{*},\quad \operatorname {tr} \,A^{2}(A^{*})^{2}=\operatorname {tr} \,B^{2}(B^{*})^{2},\quad {\text{and}}\quad \operatorname {tr} \,A^{2}(A^{*})^{2}AA^{*}=\operatorname {tr} \,B^{2}(B^{*})^{2}BB^{*}.\end{aligned}}$ [6]
For general n, it suffices to show that tr W(A, A*) = tr W(B, B*) for all words of degree at most
$n{\sqrt {{\frac {2n^{2}}{n-1}}+{\frac {1}{4}}}}+{\frac {n}{2}}-2.$ [7]
It has been conjectured that this can be reduced to an expression linear in n.[8]
Notes
1. Specht (1940)
2. Horn & Johnson (1985), Definition 2.2.1
3. Horn & Johnson (1985), Theorem 2.2.2
4. Horn & Johnson (1985), Theorem 2.2.6
5. Horn & Johnson (1985), Theorem 2.2.8
6. Sibirskiǐ (1976), p. 260, quoted by Đoković & Johnson (2007)
7. Pappacena (1997), Theorem 4.3
8. Freedman, Gupta & Guralnick (1997), p. 160
References
• Đoković, Dragomir Ž.; Johnson, Charles R. (2007), "Unitarily achievable zero patterns and traces of words in A and A*", Linear Algebra and its Applications, 421 (1): 63–68, doi:10.1016/j.laa.2006.03.002, ISSN 0024-3795.
• Freedman, Allen R.; Gupta, Ram Niwas; Guralnick, Robert M. (1997), "Shirshov's theorem and representations of semigroups", Pacific Journal of Mathematics, 181 (3): 159–176, doi:10.2140/pjm.1997.181.159, ISSN 0030-8730.
• Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6.
• Pappacena, Christopher J. (1997), "An upper bound for the length of a finite-dimensional algebra", Journal of Algebra, 197 (2): 535–545, doi:10.1006/jabr.1997.7140, ISSN 0021-8693.
• Sibirskiǐ, K. S. (1976), Algebraic Invariants of Differential Equations and Matrices (in Russian), Izdat. "Štiinca", Kishinev.
• Specht, Wilhelm (1940), "Zur Theorie der Matrizen. II", Jahresbericht der Deutschen Mathematiker-Vereinigung, 50: 19–23, ISSN 0012-0456.
Specht module
In mathematics, a Specht module is one of the representations of symmetric groups studied by Wilhelm Specht (1935). They are indexed by partitions, and in characteristic 0 the Specht modules of partitions of n form a complete set of irreducible representations of the symmetric group on n points.
Definition
Fix a partition λ of n and a commutative ring k. The partition determines a Young diagram with n boxes. A Young tableau of shape λ is a way of labelling the boxes of this Young diagram by distinct numbers $1,\dots ,n$.
A tabloid is an equivalence class of Young tableaux where two labellings are equivalent if one is obtained from the other by permuting the entries of each row. For each Young tableau T of shape λ let $\{T\}$ be the corresponding tabloid. The symmetric group on n points acts on the set of Young tableaux of shape λ. Consequently, it acts on tabloids, and on the free k-module V with the tabloids as basis.
Given a Young tableau T of shape λ, let
$E_{T}=\sum _{\sigma \in Q_{T}}\epsilon (\sigma )\{\sigma (T)\}\in V$
where QT is the subgroup of permutations preserving (as sets) all columns of T, and $\epsilon (\sigma )$ is the sign of the permutation σ. The Specht module of the partition λ is the module generated by the elements ET as T runs through all tableaux of shape λ.
The Specht module has a basis of elements ET for T a standard Young tableau.
A gentle introduction to the construction of the Specht module may be found in Section 1 of "Specht Polytopes and Specht Matroids".[1]
Structure
The dimension of the Specht module $V_{\lambda }$ is the number of standard Young tableaux of shape $\lambda $. It is given by the hook length formula.
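The hook length formula is straightforward to implement: for each box, the hook length is the number of boxes to its right (the arm), below it (the leg), plus one, and the dimension is n! divided by the product of hook lengths. A small sketch:

```python
from math import factorial

def num_standard_tableaux(shape):
    """Hook length formula: n! / product of hook lengths over all boxes."""
    n = sum(shape)
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1                              # boxes to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)   # boxes below
            prod *= arm + leg + 1
    return factorial(n) // prod

# Shape (2, 1) has the two standard tableaux 12/3 and 13/2.
assert num_standard_tableaux((2, 1)) == 2
```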
Over fields of characteristic 0 the Specht modules are irreducible, and form a complete set of irreducible representations of the symmetric group.
A partition is called p-regular (for a prime number p) if it does not have p parts of the same (positive) size. Over fields of characteristic p>0 the Specht modules can be reducible. For p-regular partitions they have a unique irreducible quotient, and these irreducible quotients form a complete set of irreducible representations.
See also
• Garnir relations, a more detailed description of the structure of Specht modules.
References
1. Wiltshire-Gordon, John D.; Woo, Alexander; Zajaczkowska, Magdalena (2017), "Specht Polytopes and Specht Matroids", Combinatorial Algebraic Geometry, Fields Institute Communications, vol. 80, pp. 201–228, arXiv:1701.05277, doi:10.1007/978-1-4939-7486-3_10
• Andersen, Henning Haahr (2001) [1994], "Specht module", Encyclopedia of Mathematics, EMS Press
• James, G. D. (1978), "Chapter 4: Specht modules", The representation theory of the symmetric groups, Lecture Notes in Mathematics, vol. 682, Berlin, New York: Springer-Verlag, p. 13, doi:10.1007/BFb0067712, ISBN 978-3-540-08948-3, MR 0513828
• James, Gordon; Kerber, Adalbert (1981), The representation theory of the symmetric group, Encyclopedia of Mathematics and its Applications, vol. 16, Addison-Wesley Publishing Co., Reading, Mass., ISBN 978-0-201-13515-2, MR 0644144
• Specht, W. (1935), "Die irreduziblen Darstellungen der symmetrischen Gruppe", Mathematische Zeitschrift, 39 (1): 696–711, doi:10.1007/BF01201387, ISSN 0025-5874
Euclidean group
In mathematics, a Euclidean group is the group of (Euclidean) isometries of a Euclidean space $\mathbb {E} ^{n}$; that is, the transformations of that space that preserve the Euclidean distance between any two points (also called Euclidean transformations). The group depends only on the dimension n of the space, and is commonly denoted E(n) or ISO(n).
The Euclidean group E(n) comprises all translations, rotations, and reflections of $\mathbb {E} ^{n}$; and arbitrary finite combinations of them. The Euclidean group can be seen as the symmetry group of the space itself, and contains the group of symmetries of any figure (subset) of that space.
A Euclidean isometry can be direct or indirect, depending on whether it preserves the handedness of figures. The direct Euclidean isometries form a subgroup, the special Euclidean group, often denoted SE(n), whose elements are called rigid motions or Euclidean motions. They comprise arbitrary combinations of translations and rotations, but not reflections.
These groups are among the oldest and most studied, at least in the cases of dimension 2 and 3 – implicitly, long before the concept of group was invented.
Overview
Dimensionality
The number of degrees of freedom for E(n) is n(n + 1)/2, which gives 3 in case n = 2, and 6 for n = 3. Of these, n can be attributed to available translational symmetry, and the remaining n(n − 1)/2 to rotational symmetry.
Direct and indirect isometries
The direct isometries (i.e., isometries preserving the handedness of chiral subsets) comprise a subgroup of E(n), called the special Euclidean group and usually denoted by E+(n) or SE(n). They include the translations and rotations, and combinations thereof; including the identity transformation, but excluding any reflections.
The isometries that reverse handedness are called indirect, or opposite. For any fixed indirect isometry R, such as a reflection about some hyperplane, every other indirect isometry can be obtained by the composition of R with some direct isometry. Therefore, the indirect isometries are a coset of E+(n), which can be denoted by E−(n). It follows that the subgroup E+(n) is of index 2 in E(n).
Topology of the group
The natural topology of Euclidean space $\mathbb {E} ^{n}$ implies a topology for the Euclidean group E(n). Namely, a sequence fi of isometries of $\mathbb {E} ^{n}$ ($i\in \mathbb {N} $) is defined to converge if and only if, for any point p of $\mathbb {E} ^{n}$, the sequence of points pi converges.
From this definition it follows that a function $f:[0,1]\to E(n)$ is continuous if and only if, for any point p of $\mathbb {E} ^{n}$, the function $f_{p}:[0,1]\to \mathbb {E} ^{n}$ defined by fp(t) = (f(t))(p) is continuous. Such a function is called a "continuous trajectory" in E(n).
It turns out that the special Euclidean group SE(n) = E+(n) is connected in this topology. That is, given any two direct isometries A and B of $\mathbb {E} ^{n}$, there is a continuous trajectory f in E+(n) such that f(0) = A and f(1) = B. The same is true for the indirect isometries E−(n). On the other hand, the group E(n) as a whole is not connected: there is no continuous trajectory that starts in E+(n) and ends in E−(n).
The continuous trajectories in E(3) play an important role in classical mechanics, because they describe the physically possible movements of a rigid body in three-dimensional space over time. One takes f(0) to be the identity transformation I of $\mathbb {E} ^{3}$, which describes the initial position of the body. The position and orientation of the body at any later time t will be described by the transformation f(t). Since f(0) = I is in E+(3), the same must be true of f(t) for any later time. For that reason, the direct Euclidean isometries are also called "rigid motions".
Lie structure
The Euclidean groups are not only topological groups, they are Lie groups, so that calculus notions can be adapted immediately to this setting.
Relation to the affine group
The Euclidean group E(n) is a subgroup of the affine group for n dimensions, and in such a way as to respect the semidirect product structure of both groups. This gives, a fortiori, two ways of writing elements in an explicit notation. These are:
1. by a pair (A, b), with A an n × n orthogonal matrix, and b a real column vector of size n; or
2. by a single square matrix of size n + 1, as explained for the affine group.
Details for the first representation are given in the next section.
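The two notations can be checked against each other numerically. The following sketch (all helper names are illustrative, not standard) composes two planar isometries in the pair form (A, b) and verifies that the corresponding homogeneous 3 × 3 matrices multiply the same way:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def apply(A, b, x):
    # the isometry x ↦ A x + b on the plane
    return [sum(A[i][j]*x[j] for j in range(2)) + b[i] for i in range(2)]

def homogeneous(A, b):
    # embed (A, b) as a 3x3 matrix acting on (x, y, 1)
    return [[A[0][0], A[0][1], b[0]],
            [A[1][0], A[1][1], b[1]],
            [0.0, 0.0, 1.0]]

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

# two direct isometries of the plane
A1, b1 = rot(0.7), [1.0, 2.0]
A2, b2 = rot(-0.3), [0.5, -1.0]

# composition rule in pair form: (A1, b1)∘(A2, b2) = (A1 A2, A1 b2 + b1)
A12 = matmul(A1, A2)
b12 = apply(A1, b1, b2)   # A1 b2 + b1

# the homogeneous matrices multiply the same way
H = matmul(homogeneous(A1, b1), homogeneous(A2, b2))

x = [3.0, -2.0]
y_pair = apply(A12, b12, x)
y_homog = [H[0][0]*x[0] + H[0][1]*x[1] + H[0][2],
           H[1][0]*x[0] + H[1][1]*x[1] + H[1][2]]
assert all(abs(p - q) < 1e-12 for p, q in zip(y_pair, y_homog))
```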
In the terms of Felix Klein's Erlangen programme, we read off from this that Euclidean geometry, the geometry of the Euclidean group of symmetries, is, therefore, a specialisation of affine geometry. All affine theorems apply. The origin of Euclidean geometry allows definition of the notion of distance, from which angle can then be deduced.
Detailed discussion
Subgroup structure, matrix and vector representation
The Euclidean group is a subgroup of the group of affine transformations.
It has as subgroups the translational group T(n), and the orthogonal group O(n). Any element of E(n) is a translation followed by an orthogonal transformation (the linear part of the isometry), in a unique way:
$x\mapsto A(x+b)$
where A is an orthogonal matrix or the same orthogonal transformation followed by a translation:
$x\mapsto Ax+c,$
with c = Ab. T(n) is a normal subgroup of E(n): for every translation t and every isometry u, the composition
$u^{-1}tu$
is again a translation.
Together, these facts imply that E(n) is the semidirect product of O(n) extended by T(n), which is written as ${\text{E}}(n)={\text{T}}(n)\rtimes {\text{O}}(n)$. In other words, O(n) is (in the natural way) also the quotient group of E(n) by T(n): ${\text{O}}(n)\cong {\text{E}}(n)/{\text{T}}(n).$
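The normality of T(n) can be verified numerically in the plane: conjugating the translation x ↦ x + v by an arbitrary isometry u = (A, b) should again give a translation, namely by A⁻¹v. A minimal sketch (helper names illustrative):

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(A, b, x):            # the isometry x ↦ A x + b
    return [A[0][0]*x[0] + A[0][1]*x[1] + b[0],
            A[1][0]*x[0] + A[1][1]*x[1] + b[1]]

def inverse(A, b):
    # (A, b)^{-1} = (A^T, -A^T b) for orthogonal A
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    return At, [-(At[0][0]*b[0] + At[0][1]*b[1]),
                -(At[1][0]*b[0] + At[1][1]*b[1])]

A, b = rot(1.1), [2.0, -3.0]        # an arbitrary isometry u
v = [0.4, 0.9]                      # the translation t: x ↦ x + v
Ainv, binv = inverse(A, b)

def conj(x):
    # (u^{-1} ∘ t ∘ u)(x)
    y = apply(A, b, x)              # u
    y = [y[0] + v[0], y[1] + v[1]]  # t
    return apply(Ainv, binv, y)     # u^{-1}

# the conjugate is a translation: its displacement is point-independent
d0 = [conj([0.0, 0.0])[i] for i in range(2)]
p = [5.0, 7.0]
d1 = [conj(p)[i] - p[i] for i in range(2)]
assert all(abs(d0[i] - d1[i]) < 1e-12 for i in range(2))
```

The common displacement is exactly A⁻¹v = Aᵀv, as the pair-form algebra predicts.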
Now SO(n), the special orthogonal group, is a subgroup of O(n) of index two. Therefore, E(n) has a subgroup E+(n), also of index two, consisting of direct isometries. In these cases the determinant of A is 1.
They are represented as a translation followed by a rotation, rather than a translation followed by some kind of reflection (in dimensions 2 and 3, these are the familiar reflections in a mirror line or plane, which may be taken to include the origin, or in 3D, a rotoreflection).
This relation is commonly written as:
${\text{SO}}(n)\cong {\text{E}}^{+}(n)/{\text{T}}(n)$
or, equivalently:
${\text{E}}^{+}(n)={\text{SO}}(n)\ltimes {\text{T}}(n).$
Subgroups
Types of subgroups of E(n):
Finite groups.
They always have a fixed point. In 3D, for every point and for every orientation there are two groups that are maximal (with respect to inclusion) among the finite groups: Oh and Ih. The groups Ih are even maximal among the groups including the next category.
Countably infinite groups without arbitrarily small translations, rotations, or combinations
i.e., for every point the set of images under the isometries is topologically discrete (e.g., for 1 ≤ m ≤ n a group generated by m translations in independent directions, and possibly a finite point group). This includes lattices. Examples more general than those are the discrete space groups.
Countably infinite groups with arbitrarily small translations, rotations, or combinations
In this case there are points for which the set of images under the isometries is not closed. Examples of such groups are, in 1D, the group generated by a translation of 1 and one of √2, and, in 2D, the group generated by a rotation about the origin by 1 radian.
Non-countable groups, where there are points for which the set of images under the isometries is not closed
(e.g., in 2D all translations in one direction, and all translations by rational distances in another direction).
Non-countable groups, where for all points the set of images under the isometries is closed
e.g.:
• all direct isometries that keep the origin fixed, or more generally, some point (in 3D called the rotation group)
• all isometries that keep the origin fixed, or more generally, some point (the orthogonal group)
• all direct isometries E+(n)
• the whole Euclidean group E(n)
• one of these groups in an m-dimensional subspace combined with a discrete group of isometries in the orthogonal (n−m)-dimensional space
• one of these groups in an m-dimensional subspace combined with another one in the orthogonal (n−m)-dimensional space
Examples in 3D of combinations:
• all rotations about one fixed axis
• ditto combined with reflection in planes through the axis and/or a plane perpendicular to the axis
• ditto combined with discrete translation along the axis or with all isometries along the axis
• a discrete point group, frieze group, or wallpaper group in a plane, combined with any symmetry group in the perpendicular direction
• all isometries which are a combination of a rotation about some axis and a proportional translation along the axis; in general this is combined with k-fold rotational isometries about the same axis (k ≥ 1); the set of images of a point under the isometries is a k-fold helix; in addition there may be a 2-fold rotation about a perpendicularly intersecting axis, and hence a k-fold helix of such axes.
• for any point group: the group of all isometries which are a combination of an isometry in the point group and a translation; for example, in the case of the group generated by inversion in the origin: the group of all translations and inversion in all points; this is the generalized dihedral group of R3, Dih(R3).
Overview of isometries in up to three dimensions
E(1), E(2), and E(3) can be categorized as follows, with degrees of freedom:
Isometries of E(1)
Type of isometry | Degrees of freedom | Preserves orientation?
Identity | 0 | Yes
Translation | 1 | Yes
Reflection in a point | 1 | No
Isometries of E(2)
Type of isometry | Degrees of freedom | Preserves orientation?
Identity | 0 | Yes
Translation | 2 | Yes
Rotation about a point | 3 | Yes
Reflection in a line | 2 | No
Glide reflection | 3 | No
See also: Euclidean plane isometry
Isometries of E(3)
Type of isometry | Degrees of freedom | Preserves orientation?
Identity | 0 | Yes
Translation | 3 | Yes
Rotation about an axis | 5 | Yes
Screw displacement | 6 | Yes
Reflection in a plane | 3 | No
Glide plane operation | 5 | No
Improper rotation | 6 | No
Inversion in a point | 3 | No
Chasles' theorem asserts that any element of E+(3) is a screw displacement.
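Chasles' theorem can be checked numerically: given a direct isometry x ↦ Rx + t of E(3), one can recover the axis direction a, the pitch translation d·a along it, and a point c on the screw axis, then confirm the two descriptions agree. A sketch under the assumption that the rotation angle is nonzero (helper names illustrative):

```python
import math

def rodrigues(a, theta):
    # rotation matrix about unit axis a by angle theta (Rodrigues' formula)
    x, y, z = a
    c, s, C = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    return sum(ui*vi for ui, vi in zip(u, v))

# a direct isometry x ↦ R x + t
n = math.sqrt(3)
a = [1/n, 1/n, 1/n]                 # unit rotation-axis direction
theta = 0.9
R = rodrigues(a, theta)
t = [1.0, -2.0, 0.5]

# screw decomposition: pitch translation along the axis ...
d = dot(a, t)
t_perp = [t[i] - d*a[i] for i in range(3)]

# ... and a point c on the screw axis, solving (I - R) c = t_perp in the
# plane orthogonal to a, using (I - R)(I - R^T) = (2 - 2 cos θ) I there
Rt = [[R[j][i] for j in range(3)] for i in range(3)]
w = [t_perp[i] - matvec(Rt, t_perp)[i] for i in range(3)]
c = [w[i] / (2 - 2*math.cos(theta)) for i in range(3)]

# verify: R x + t equals "rotate about the axis through c, then slide by d a"
x = [0.3, 1.7, -0.8]
lhs = [matvec(R, x)[i] + t[i] for i in range(3)]
xc = [x[i] - c[i] for i in range(3)]
rhs = [c[i] + matvec(R, xc)[i] + d*a[i] for i in range(3)]
assert all(abs(lhs[i] - rhs[i]) < 1e-12 for i in range(3))
```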
See also 3D isometries that leave the origin fixed, space group, involution.
Commuting isometries
For some isometry pairs composition does not depend on order:
• two translations
• two rotations or screws about the same axis
• reflection with respect to a plane, and a translation in that plane, a rotation about an axis perpendicular to the plane, or a reflection with respect to a perpendicular plane
• glide reflection with respect to a plane, and a translation in that plane
• inversion in a point and any isometry keeping the point fixed
• rotation by 180° about an axis and reflection in a plane through that axis
• rotation by 180° about an axis and rotation by 180° about a perpendicular axis (results in rotation by 180° about the axis perpendicular to both)
• two rotoreflections about the same axis, with respect to the same plane
• two glide reflections with respect to the same plane
Conjugacy classes
The translations by a given distance in any direction form a conjugacy class; the translation group is the union of those for all distances.
In 1D, all reflections are in the same class.
In 2D, rotations by the same angle in either direction are in the same class. Glide reflections with translation by the same distance are in the same class.
In 3D:
• Inversions with respect to all points are in the same class.
• Rotations by the same angle are in the same class.
• Rotations about an axis combined with translation along that axis are in the same class if the angle is the same and the translation distance is the same.
• Reflections in a plane are in the same class.
• Reflections in a plane combined with translation in that plane by the same distance are in the same class.
• Rotations about an axis by the same angle not equal to 180°, combined with reflection in a plane perpendicular to that axis, are in the same class.
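In the planar case, the invariance of the rotation angle under conjugation can be seen from the linear parts alone: conjugation replaces the linear part A_h by A_g A_h A_g⁻¹, which preserves the trace 2 cos θ; conjugating by a reflection reverses the sense of rotation, consistent with the 2D statement above. A small sketch (helper names illustrative):

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mat2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

theta = 0.6
Ah = rot(theta)
Ag = [[0, 1], [1, 0]]          # a reflection (indirect isometry)
Agi = Ag                       # this reflection is its own inverse
conj = mat2(mat2(Ag, Ah), Agi)

# trace 2 cos θ is preserved, so the rotation angle (up to sign) is invariant
assert abs((conj[0][0] + conj[1][1]) - 2*math.cos(theta)) < 1e-12
# conjugation by the reflection reverses the sense of rotation: s ↦ -s
assert abs(conj[1][0] + math.sin(theta)) < 1e-12
```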
See also
• Fixed points of isometry groups in Euclidean space
• Euclidean plane isometry
• Poincaré group
• Coordinate rotations and reflections
• Reflection through the origin
• Plane of rotation
References
• Cederberg, Judith N. (2001). A Course in Modern Geometries. pp. 136–164. ISBN 978-0-387-98972-3.
• William Thurston. Three-dimensional geometry and topology. Vol. 1. Edited by Silvio Levy. Princeton Mathematical Series, 35. Princeton University Press, Princeton, NJ, 1997. x+311 pp. ISBN 0-691-08304-5
Symplectic manifold
In differential geometry, a subject of mathematics, a symplectic manifold is a smooth manifold, $M$, equipped with a closed nondegenerate differential 2-form $\omega $, called the symplectic form. The study of symplectic manifolds is called symplectic geometry or symplectic topology. Symplectic manifolds arise naturally in abstract formulations of classical mechanics and analytical mechanics as the cotangent bundles of manifolds. For example, in the Hamiltonian formulation of classical mechanics, which provides one of the major motivations for the field, the set of all possible configurations of a system is modeled as a manifold, and this manifold's cotangent bundle describes the phase space of the system.
Motivation
Symplectic manifolds arise from classical mechanics; in particular, they are a generalization of the phase space of a closed system.[1] In the same way the Hamilton equations allow one to derive the time evolution of a system from a set of differential equations, the symplectic form should allow one to obtain a vector field describing the flow of the system from the differential $dH$ of a Hamiltonian function $H$.[2] So we require a linear map $TM\rightarrow T^{*}M$ from the tangent manifold $TM$ to the cotangent manifold $T^{*}M$, or equivalently, an element of $T^{*}M\otimes T^{*}M$. Letting $\omega $ denote a section of $T^{*}M\otimes T^{*}M$, the requirement that $\omega $ be non-degenerate ensures that for every differential $dH$ there is a unique corresponding vector field $V_{H}$ such that $dH=\omega (V_{H},\cdot )$. Since one desires the Hamiltonian to be constant along flow lines, one should have $\omega (V_{H},V_{H})=dH(V_{H})=0$, which implies that $\omega $ is alternating and hence a 2-form. Finally, one makes the requirement that $\omega $ should not change under flow lines, i.e. that the Lie derivative of $\omega $ along $V_{H}$ vanishes. Applying Cartan's formula, this amounts to (here $\iota _{X}$ is the interior product):
${\mathcal {L}}_{V_{H}}(\omega )=0\;\Leftrightarrow \;\mathrm {d} (\iota _{V_{H}}\omega )+\iota _{V_{H}}\mathrm {d} \omega =\mathrm {d} (\mathrm {d} \,H)+\mathrm {d} \omega (V_{H})=\mathrm {d} \omega (V_{H})=0$
Repeating this argument for different smooth functions $H$, chosen so that the corresponding $V_{H}$ span the tangent space at each point, shows that requiring the Lie derivative of $\omega $ to vanish along the flows of $V_{H}$ for arbitrary smooth $H$ is equivalent to requiring that ω be closed.
Definition
A symplectic form on a smooth manifold $M$ is a closed non-degenerate differential 2-form $\omega $.[3][4] Here, non-degenerate means that for every point $p\in M$, the skew-symmetric pairing on the tangent space $T_{p}M$ defined by $\omega $ is non-degenerate. That is to say, if there exists an $X\in T_{p}M$ such that $\omega (X,Y)=0$ for all $Y\in T_{p}M$, then $X=0$. Since in odd dimensions, skew-symmetric matrices are always singular, the requirement that $\omega $ be nondegenerate implies that $M$ has an even dimension.[3][4] The closed condition means that the exterior derivative of $\omega $ vanishes. A symplectic manifold is a pair $(M,\omega )$ where $M$ is a smooth manifold and $\omega $ is a symplectic form. Assigning a symplectic form to $M$ is referred to as giving $M$ a symplectic structure.
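The even-dimensionality argument rests on the fact that an odd-dimensional skew-symmetric matrix is singular: det A = det(−Aᵀ) = (−1)ⁿ det A, which forces det A = 0 when n is odd. A quick numerical illustration in dimension 3 (the entries are chosen arbitrarily):

```python
# any 3x3 skew-symmetric matrix has determinant zero
def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

a, b, c = 1.3, -0.7, 2.9        # arbitrary entries
A = [[0,  a,  b],
     [-a, 0,  c],
     [-b, -c, 0]]               # a generic 3x3 skew-symmetric matrix
assert abs(det3(A)) < 1e-12     # always singular in odd dimension
```

In even dimension no such obstruction exists, e.g. the 2 × 2 skew matrix with off-diagonal entries ±2 has determinant 4.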
Examples
Symplectic vector spaces
Main article: Symplectic vector space
Let $\{v_{1},\ldots ,v_{2n}\}$ be a basis for $\mathbb {R} ^{2n}.$ We define our symplectic form ω on this basis as follows:
$\omega (v_{i},v_{j})={\begin{cases}1&j-i=n{\text{ with }}1\leqslant i\leqslant n\\-1&i-j=n{\text{ with }}1\leqslant j\leqslant n\\0&{\text{otherwise}}\end{cases}}$
In this case the symplectic form reduces to a simple bilinear form. If In denotes the n × n identity matrix then the matrix, Ω, of this bilinear form is given by the 2n × 2n block matrix:
$\Omega ={\begin{pmatrix}0&I_{n}\\-I_{n}&0\end{pmatrix}}.$
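Two standard properties of this block matrix, its skew-symmetry and the identity Ω² = −I, together with the case definition of ω on the basis vectors, can be verified directly. A small sketch in pure Python (1-based indices follow the text):

```python
n = 3
N = 2*n
# Ω = [[0, I_n], [-I_n, 0]] as a nested list
Omega = [[0.0]*N for _ in range(N)]
for i in range(n):
    Omega[i][n + i] = 1.0
    Omega[n + i][i] = -1.0

# skew-symmetry: Ω^T = -Ω
assert all(Omega[i][j] == -Omega[j][i] for i in range(N) for j in range(N))

# Ω² = -I
Om2 = [[sum(Omega[i][k]*Omega[k][j] for k in range(N)) for j in range(N)]
       for i in range(N)]
assert all(Om2[i][j] == (-1.0 if i == j else 0.0)
           for i in range(N) for j in range(N))

# ω(v_i, v_j) = e_i^T Ω e_j reproduces the case definition above
def omega(i, j):          # 1-based indices as in the text
    return Omega[i-1][j-1]
assert omega(1, 1 + n) == 1 and omega(1 + n, 1) == -1 and omega(1, 2) == 0
```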
Cotangent bundles
Let $Q$ be a smooth manifold of dimension $n$. Then the total space of the cotangent bundle $T^{*}Q$ has a natural symplectic form, called the Poincaré two-form or the canonical symplectic form
$\omega =\sum _{i=1}^{n}dp_{i}\wedge dq^{i}$
Here $(q^{1},\ldots ,q^{n})$ are any local coordinates on $Q$ and $(p_{1},\ldots ,p_{n})$ are fibrewise coordinates with respect to the cotangent vectors $dq^{1},\ldots ,dq^{n}$. Cotangent bundles are the natural phase spaces of classical mechanics. The point of distinguishing upper and lower indexes is driven by the case of the manifold having a metric tensor, as is the case for Riemannian manifolds. Upper and lower indexes transform contravariantly and covariantly under a change of coordinate frames. The phrase "fibrewise coordinates with respect to the cotangent vectors" is meant to convey that the momenta $p_{i}$ are "soldered" to the velocities $dq^{i}$. The soldering is an expression of the idea that velocity and momentum are collinear, in that both move in the same direction, and differ by a scale factor.
Kähler manifolds
A Kähler manifold is a symplectic manifold equipped with a compatible integrable complex structure. They form a particular class of complex manifolds. A large class of examples comes from complex algebraic geometry. Any smooth complex projective variety $V\subset \mathbb {CP} ^{n}$ has a symplectic form which is the restriction of the Fubini–Study form on the projective space $\mathbb {CP} ^{n}$.
Almost-complex manifolds
Riemannian manifolds with an $\omega $-compatible almost complex structure are termed almost-complex manifolds. They generalize Kähler manifolds, in that they need not be integrable. That is, they do not necessarily arise from a complex structure on the manifold.
Lagrangian and other submanifolds
There are several natural geometric notions of submanifold of a symplectic manifold $(M,\omega )$:
• Symplectic submanifolds of $M$ (potentially of any even dimension) are submanifolds $S\subset M$ such that $\omega |_{S}$ is a symplectic form on $S$.
• Isotropic submanifolds are submanifolds where the symplectic form restricts to zero, i.e. each tangent space is an isotropic subspace of the ambient manifold's tangent space. Similarly, if each tangent subspace to a submanifold is co-isotropic (the dual of an isotropic subspace), the submanifold is called co-isotropic.
• Lagrangian submanifolds of a symplectic manifold $(M,\omega )$ are submanifolds where the restriction of the symplectic form $\omega $ to $L\subset M$ is vanishing, i.e. $\omega |_{L}=0$ and ${\text{dim }}L={\tfrac {1}{2}}\dim M$. Lagrangian submanifolds are the maximal isotropic submanifolds.
One major example is that the graph of a symplectomorphism in the product symplectic manifold (M × M, ω × −ω) is Lagrangian. Their intersections display rigidity properties not possessed by smooth manifolds; the Arnold conjecture gives the sum of the submanifold's Betti numbers as a lower bound for the number of self-intersections of a smooth Lagrangian submanifold, rather than the Euler characteristic in the smooth case.
Examples
Let $\mathbb {R} _{{\textbf {x}},{\textbf {y}}}^{2n}$ have global coordinates labelled $(x_{1},\dotsc ,x_{n},y_{1},\dotsc ,y_{n})$. Then, we can equip $\mathbb {R} _{{\textbf {x}},{\textbf {y}}}^{2n}$ with the canonical symplectic form
$\omega =\mathrm {d} x_{1}\wedge \mathrm {d} y_{1}+\dotsb +\mathrm {d} x_{n}\wedge \mathrm {d} y_{n}.$
There is a standard Lagrangian submanifold given by $\mathbb {R} _{\mathbf {x} }^{n}\to \mathbb {R} _{\mathbf {x} ,\mathbf {y} }^{2n}$. The form $\omega $ vanishes on $\mathbb {R} _{\mathbf {x} }^{n}$ because given any pair of tangent vectors $X=f_{i}({\textbf {x}})\partial _{x_{i}},Y=g_{i}({\textbf {x}})\partial _{x_{i}},$ we have that $\omega (X,Y)=0.$ To elucidate, consider the case $n=1$. Then, $X=f(x)\partial _{x},Y=g(x)\partial _{x},$ and $\omega =\mathrm {d} x\wedge \mathrm {d} y$. Notice that when we expand this out
$\omega (X,Y)=\omega (f(x)\partial _{x},g(x)\partial _{x})={\frac {1}{2}}f(x)g(x)(\mathrm {d} x(\partial _{x})\mathrm {d} y(\partial _{x})-\mathrm {d} y(\partial _{x})\mathrm {d} x(\partial _{x}))$
we find that both terms contain a $\mathrm {d} y(\partial _{x})$ factor, which is 0 by definition.
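The same computation can be phrased in coordinates: a tangent vector to the x-subspace has vanishing y-components, so the pairing under the canonical form gives 0. A minimal sketch for n = 2 (coordinates ordered as (x_1, x_2, y_1, y_2)):

```python
n = 2

def omega(v, w):
    # ω = Σ dx_i ∧ dy_i on R^{2n} with coordinates (x_1..x_n, y_1..y_n):
    # ω(v, w) = Σ_i (v_{x_i} w_{y_i} - v_{y_i} w_{x_i})
    return sum(v[i]*w[n+i] - v[n+i]*w[i] for i in range(n))

# tangent vectors to the x-subspace have vanishing y-components
X = [1.7, -0.4, 0.0, 0.0]
Y = [0.2,  3.1, 0.0, 0.0]
assert omega(X, Y) == 0.0

# by contrast, ω pairs ∂_{x_1} with ∂_{y_1} nontrivially
e_x1 = [1, 0, 0, 0]
e_y1 = [0, 0, 1, 0]
assert omega(e_x1, e_y1) == 1
```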
Example: Cotangent bundle
The cotangent bundle of a manifold is locally modeled on a space similar to the first example. It can be shown that these affine symplectic forms can be glued together, so this bundle forms a symplectic manifold. A less trivial example of a Lagrangian submanifold is the zero section of the cotangent bundle of a manifold. For example, let
$X=\{(x,y)\in \mathbb {R} ^{2}:y^{2}-x=0\}.$
Then, we can present $T^{*}X$ as
$T^{*}X=\{(x,y,\mathrm {d} x,\mathrm {d} y)\in \mathbb {R} ^{4}:y^{2}-x=0,2y\mathrm {d} y-\mathrm {d} x=0\}$
where we are treating the symbols $\mathrm {d} x,\mathrm {d} y$ as coordinates of $\mathbb {R} ^{4}=T^{*}\mathbb {R} ^{2}$. We can consider the subset where the coordinates $\mathrm {d} x=0$ and $\mathrm {d} y=0$, giving us the zero section. This example can be repeated for any manifold defined by the vanishing locus of smooth functions $f_{1},\dotsc ,f_{k}$ and their differentials $\mathrm {d} f_{1},\dotsc ,\mathrm {d} f_{k}$.
Example: Parametric submanifold
Consider the canonical space $\mathbb {R} ^{2n}$ with coordinates $(q_{1},\dotsc ,q_{n},p_{1},\dotsc ,p_{n})$. A parametric submanifold $L$ of $\mathbb {R} ^{2n}$ is one that is parameterized by coordinates $(u_{1},\dotsc ,u_{n})$ such that
$q_{i}=q_{i}(u_{1},\dotsc ,u_{n})\quad p_{i}=p_{i}(u_{1},\dotsc ,u_{n})$
This manifold is a Lagrangian submanifold if the Lagrange bracket $[u_{i},u_{j}]$ vanishes for all $i,j$. That is, it is Lagrangian if
$[u_{i},u_{j}]=\sum _{k}{\frac {\partial q_{k}}{\partial u_{i}}}{\frac {\partial p_{k}}{\partial u_{j}}}-{\frac {\partial p_{k}}{\partial u_{i}}}{\frac {\partial q_{k}}{\partial u_{j}}}=0$
for all $i,j$. This can be seen by expanding
${\frac {\partial }{\partial u_{i}}}={\frac {\partial q_{k}}{\partial u_{i}}}{\frac {\partial }{\partial q_{k}}}+{\frac {\partial p_{k}}{\partial u_{i}}}{\frac {\partial }{\partial p_{k}}}$
in the condition for a Lagrangian submanifold $L$. This is that the symplectic form must vanish on the tangent manifold $TL$; that is, it must vanish for all tangent vectors:
$\omega \left({\frac {\partial }{\partial u_{i}}},{\frac {\partial }{\partial u_{j}}}\right)=0$
for all $i,j$. Simplify the result by making use of the canonical symplectic form on $\mathbb {R} ^{2n}$:
$\omega \left({\frac {\partial }{\partial q_{k}}},{\frac {\partial }{\partial p_{k}}}\right)=-\omega \left({\frac {\partial }{\partial p_{k}}},{\frac {\partial }{\partial q_{k}}}\right)=1$
and all others vanishing.
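A standard source of parametric Lagrangian submanifolds is the graph of a gradient: taking q_i = u_i and p_i = ∂S/∂u_i for a smooth generating function S makes the Lagrange bracket equal to the antisymmetrized Hessian of S, which vanishes. The following numerical sketch (the function S and all helper names are illustrative) checks this with finite differences:

```python
import math

# generating function S(u); the graph q = u, p = ∇S(u) is Lagrangian
def S(u):
    return math.sin(u[0]) * u[1] + u[0]**2 * u[1]**2

def grad_S(u, h=1e-6):
    # central-difference gradient of S
    g = []
    for k in range(2):
        up = list(u); up[k] += h
        um = list(u); um[k] -= h
        g.append((S(up) - S(um)) / (2*h))
    return g

def q(u):  # q_i(u) = u_i
    return list(u)

def p(u):  # p_i(u) = ∂S/∂u_i
    return grad_S(u)

def lagrange_bracket(i, j, u, h=1e-5):
    # [u_i, u_j] = Σ_k ∂q_k/∂u_i ∂p_k/∂u_j − ∂p_k/∂u_i ∂q_k/∂u_j
    def d(f, k, u):
        up = list(u); up[k] += h
        um = list(u); um[k] -= h
        return [(a - b) / (2*h) for a, b in zip(f(up), f(um))]
    dqi, dqj = d(q, i, u), d(q, j, u)
    dpi, dpj = d(p, i, u), d(p, j, u)
    return sum(dqi[k]*dpj[k] - dpi[k]*dqj[k] for k in range(2))

u0 = [0.3, -1.2]
# the bracket reduces to S_12 − S_21 = 0 by symmetry of the Hessian
assert abs(lagrange_bracket(0, 1, u0)) < 1e-3
```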
As local charts on a symplectic manifold take on the canonical form, this example suggests that Lagrangian submanifolds are relatively unconstrained. The classification of symplectic manifolds is done via Floer homology—this is an application of Morse theory to the action functional for maps between Lagrangian submanifolds. In physics, the action describes the time evolution of a physical system; here, it can be taken as the description of the dynamics of branes.
Example: Morse theory
Another useful class of Lagrangian submanifolds occur in Morse theory. Given a Morse function $f:M\to \mathbb {R} $ and for a small enough $\varepsilon $ one can construct a Lagrangian submanifold given by the vanishing locus $\mathbb {V} (\varepsilon \cdot \mathrm {d} f)\subset T^{*}M$. For a generic Morse function we have a Lagrangian intersection given by $M\cap \mathbb {V} (\varepsilon \cdot \mathrm {d} f)={\text{Crit}}(f)$.
See also: symplectic category
Special Lagrangian submanifolds
In the case of Kähler manifolds (or Calabi–Yau manifolds) we can choose a holomorphic n-form $\Omega =\Omega _{1}+\mathrm {i} \Omega _{2}$ on $M$, where $\Omega _{1}$ is the real part and $\Omega _{2}$ the imaginary part. A Lagrangian submanifold $L$ is called special if, in addition to the Lagrangian condition above, the restriction of $\Omega _{2}$ to $L$ vanishes. In other words, the real part $\Omega _{1}$ restricted to $L$ induces the volume form on $L$. The following examples are known as special Lagrangian submanifolds:
1. complex Lagrangian submanifolds of hyperkähler manifolds,
2. fixed points of a real structure of Calabi–Yau manifolds.
The SYZ conjecture deals with the study of special Lagrangian submanifolds in mirror symmetry; see (Hitchin 1999).
The Thomas–Yau conjecture predicts that the existence of special Lagrangian submanifolds on Calabi–Yau manifolds in Hamiltonian isotopy classes of Lagrangians is equivalent to stability with respect to a stability condition on the Fukaya category of the manifold.
Lagrangian fibration
A Lagrangian fibration of a symplectic manifold M is a fibration where all of the fibres are Lagrangian submanifolds. Since M is even-dimensional we can take local coordinates (p1,…,pn, q1,…,qn), and by Darboux's theorem the symplectic form ω can be, at least locally, written as ω = ∑ dpk ∧ dqk, where d denotes the exterior derivative and ∧ denotes the exterior product. This form is called the Poincaré two-form or the canonical two-form. Using this set-up we can locally think of M as being the cotangent bundle $T^{*}\mathbb {R} ^{n},$ and the Lagrangian fibration as the trivial fibration $\pi :T^{*}\mathbb {R} ^{n}\to \mathbb {R} ^{n}.$ This is the canonical picture.
Lagrangian mapping
Let L be a Lagrangian submanifold of a symplectic manifold (K,ω) given by an immersion i : L ↪ K (i is called a Lagrangian immersion). Let π : K ↠ B give a Lagrangian fibration of K. The composite (π ∘ i) : L ↪ K ↠ B is a Lagrangian mapping. The critical value set of π ∘ i is called a caustic.
Two Lagrangian maps (π1 ∘ i1) : L1 ↪ K1 ↠ B1 and (π2 ∘ i2) : L2 ↪ K2 ↠ B2 are called Lagrangian equivalent if there exist diffeomorphisms σ, τ and ν such that both sides of the diagram given on the right commute, and τ preserves the symplectic form.[4] Symbolically:
$\tau \circ i_{1}=i_{2}\circ \sigma ,\ \nu \circ \pi _{1}=\pi _{2}\circ \tau ,\ \tau ^{*}\omega _{2}=\omega _{1}\,,$
where τ∗ω2 denotes the pull back of ω2 by τ.
Special cases and generalizations
• A symplectic manifold $(M,\omega )$ is exact if the symplectic form $\omega $ is exact. For example, the cotangent bundle of a smooth manifold is an exact symplectic manifold: its canonical symplectic form is exact.
• A symplectic manifold endowed with a metric that is compatible with the symplectic form is an almost Kähler manifold in the sense that the tangent bundle has an almost complex structure, but this need not be integrable.
• Symplectic manifolds are special cases of a Poisson manifold.
• A multisymplectic manifold of degree k is a manifold equipped with a closed nondegenerate k-form.[5]
• A polysymplectic manifold is a Legendre bundle provided with a polysymplectic tangent-valued $(n+2)$-form; it is utilized in Hamiltonian field theory.[6]
See also
• Almost symplectic manifold – differentiable manifold equipped with a nondegenerate (but not necessarily closed) 2-form
• Contact manifold – an odd-dimensional counterpart of the symplectic manifold
• Covariant Hamiltonian field theory – formalism in classical field theory based on Hamiltonian mechanics
• Fedosov manifold
• Poisson bracket – operation in Hamiltonian mechanics
• Symplectic group – mathematical group
• Symplectic matrix
• Symplectic topology – branch of differential geometry and differential topology
• Symplectic vector space – vector space equipped with an alternating nondegenerate bilinear form
• Symplectomorphism – isomorphism of symplectic manifolds
• Tautological one-form – canonical differential form defined on the cotangent bundle of a smooth manifold
• Wirtinger inequality (2-forms) – inequality applicable to 2-forms
Citations
1. Webster, Ben (9 January 2012). "What is a symplectic manifold, really?".
2. Cohn, Henry. "Why symplectic geometry is the natural setting for classical mechanics".
3. de Gosson, Maurice (2006). Symplectic Geometry and Quantum Mechanics. Basel: Birkhäuser Verlag. p. 10. ISBN 3-7643-7574-4.
4. Arnold, V. I.; Varchenko, A. N.; Gusein-Zade, S. M. (1985). The Classification of Critical Points, Caustics and Wave Fronts: Singularities of Differentiable Maps, Vol 1. Birkhäuser. ISBN 0-8176-3187-9.
5. Cantrijn, F.; Ibort, L. A.; de León, M. (1999). "On the Geometry of Multisymplectic Manifolds". J. Austral. Math. Soc. Ser. A. 66 (3): 303–330. doi:10.1017/S1446788700036636.
6. Giachetta, G.; Mangiarotti, L.; Sardanashvily, G. (1999). "Covariant Hamiltonian equations for field theory". Journal of Physics. A32 (38): 6629–6642. arXiv:hep-th/9904062. Bibcode:1999JPhA...32.6629G. doi:10.1088/0305-4470/32/38/302. S2CID 204899025.
General and cited references
• McDuff, Dusa; Salamon, D. (1998). Introduction to Symplectic Topology. Oxford Mathematical Monographs. ISBN 0-19-850451-9.
• Auroux, Denis. "Seminar on Mirror Symmetry".
• Meinrenken, Eckhard. "Symplectic Geometry" (PDF).
• Abraham, Ralph; Marsden, Jerrold E. (1978). Foundations of Mechanics. London: Benjamin-Cummings. See Section 3.2. ISBN 0-8053-0102-X.
• de Gosson, Maurice A. (2006). Symplectic Geometry and Quantum Mechanics. Basel: Birkhäuser Verlag. ISBN 3-7643-7574-4.
• Alan Weinstein (1971). "Symplectic manifolds and their lagrangian submanifolds". Advances in Mathematics. 6 (3): 329–46. doi:10.1016/0001-8708(71)90020-X.
• Arnold, V. I. (1990). "Ch.1, Symplectic geometry". Singularities of Caustics and Wave Fronts. Mathematics and Its Applications. Vol. 62. Dordrecht: Springer Netherlands. doi:10.1007/978-94-011-3330-2. ISBN 978-1-4020-0333-2. OCLC 22509804.
Further reading
• Dunin-Barkowski, Petr (2022). "Symplectic duality for topological recursion". arXiv:2206.14792 [math-ph].
• "How to find Lagrangian Submanifolds". Stack Exchange. December 17, 2014.
• Lumist, Ü. (2001) [1994], "Symplectic Structure", Encyclopedia of Mathematics, EMS Press
• Sardanashvily, G. (2009). "Fibre bundles, jet manifolds and Lagrangian theory". Lectures for Theoreticians. arXiv:0908.1886.
• McDuff, D. (November 1998). "Symplectic Structures—A New Approach to Geometry" (PDF). Notices of the AMS.
• Hitchin, Nigel (1999). "Lectures on Special Lagrangian Submanifolds". arXiv:math/9907034.
Manifolds (Glossary)
Basic concepts
• Topological manifold
• Atlas
• Differentiable/Smooth manifold
• Differential structure
• Smooth atlas
• Submanifold
• Riemannian manifold
• Smooth map
• Submersion
• Pushforward
• Tangent space
• Differential form
• Vector field
Main results (list)
• Atiyah–Singer index
• Darboux's
• De Rham's
• Frobenius
• Generalized Stokes
• Hopf–Rinow
• Noether's
• Sard's
• Whitney embedding
Maps
• Curve
• Diffeomorphism
• Local
• Geodesic
• Exponential map
• in Lie theory
• Foliation
• Immersion
• Integral curve
• Lie derivative
• Section
• Submersion
Types of
manifolds
• Closed
• (Almost) Complex
• (Almost) Contact
• Fibered
• Finsler
• Flat
• G-structure
• Hadamard
• Hermitian
• Hyperbolic
• Kähler
• Kenmotsu
• Lie group
• Lie algebra
• Manifold with boundary
• Oriented
• Parallelizable
• Poisson
• Prime
• Quaternionic
• Hypercomplex
• (Pseudo−, Sub−) Riemannian
• Rizza
• (Almost) Symplectic
• Tame
Tensors
Vectors
• Distribution
• Lie bracket
• Pushforward
• Tangent space
• bundle
• Torsion
• Vector field
• Vector flow
Covectors
• Closed/Exact
• Covariant derivative
• Cotangent space
• bundle
• De Rham cohomology
• Differential form
• Vector-valued
• Exterior derivative
• Interior product
• Pullback
• Ricci curvature
• flow
• Riemann curvature tensor
• Tensor field
• density
• Volume form
• Wedge product
Bundles
• Adjoint
• Affine
• Associated
• Cotangent
• Dual
• Fiber
• (Co) Fibration
• Jet
• Lie algebra
• (Stable) Normal
• Principal
• Spinor
• Subbundle
• Tangent
• Tensor
• Vector
Connections
• Affine
• Cartan
• Ehresmann
• Form
• Generalized
• Koszul
• Levi-Civita
• Principal
• Vector
• Parallel transport
Related
• Classification of manifolds
• Gauge theory
• History
• Morse theory
• Moving frame
• Singularity theory
Generalizations
• Banach manifold
• Diffeology
• Diffiety
• Fréchet manifold
• K-theory
• Orbifold
• Secondary calculus
• over commutative algebras
• Sheaf
• Stratifold
• Supermanifold
• Stratified space
Special abelian subgroup
In mathematical group theory, a subgroup of a group is termed a special abelian subgroup or SA-subgroup if the centralizer of any nonidentity element in the subgroup is precisely the subgroup (Curtis & Reiner 1981, p. 354). Equivalently, an SA subgroup is a centrally closed abelian subgroup.
• Any SA subgroup is a maximal abelian subgroup, that is, it is not properly contained in another abelian subgroup.
• For a CA group, the SA subgroups are precisely the maximal abelian subgroups.
SA subgroups are notable for certain characters associated with them, termed exceptional characters.
References
• Curtis, Charles W.; Reiner, Irving (1981), Methods of representation theory. Vol. I, New York: John Wiley & Sons, ISBN 978-0-471-18994-7, MR 0632548
Special cases of Apollonius' problem
In Euclidean geometry, Apollonius' problem is to construct all the circles that are tangent to three given circles. Special cases of Apollonius' problem are those in which at least one of the given circles is a point or line, i.e., is a circle of zero or infinite radius. The nine types of such limiting cases of Apollonius' problem are to construct the circles tangent to:
1. three points (denoted PPP, generally 1 solution)
2. three lines (denoted LLL, generally 4 solutions)
3. one line and two points (denoted LPP, generally 2 solutions)
4. two lines and a point (denoted LLP, generally 2 solutions)
5. one circle and two points (denoted CPP, generally 2 solutions)
6. one circle, one line, and a point (denoted CLP, generally 4 solutions)
7. two circles and a point (denoted CCP, generally 4 solutions)
8. one circle and two lines (denoted CLL, generally 8 solutions)
9. two circles and a line (denoted CCL, generally 8 solutions)
In a different type of limiting case, the three given geometrical elements may have a special arrangement, such as constructing a circle tangent to two parallel lines and one circle.
Historical introduction
Like most branches of mathematics, Euclidean geometry is concerned with proofs of general truths from a minimum of postulates. For example, a simple proof would show that at least two angles of an isosceles triangle are equal. One important type of proof in Euclidean geometry is to show that a geometrical object can be constructed with a compass and an unmarked straightedge; an object can be so constructed if and only if the lengths defining it can be expressed from the given lengths using only addition, subtraction, multiplication, division, and the extraction of square roots. Therefore, it is important to determine whether an object can be constructed with compass and straightedge and, if so, how it may be constructed.
Euclid developed numerous constructions with compass and straightedge. Examples include: regular polygons such as the pentagon and hexagon, a line parallel to another that passes through a given point, etc. Many rose windows in Gothic cathedrals, as well as some Celtic knots, can be designed using only Euclidean constructions. However, some geometrical constructions are not possible with those tools, including constructing a regular heptagon and trisecting an arbitrary angle.
Apollonius contributed many constructions, in particular finding the circles that are tangent to three geometrical elements simultaneously, where each "element" may be a point, line or circle.
Rules of Euclidean constructions
In Euclidean constructions, five operations are allowed:
1. Draw a line through two points
2. Draw a circle through a point with a given center
3. Find the intersection point of two lines
4. Find the intersection points of two circles
5. Find the intersection points of a line and a circle
The initial elements in a geometric construction are called the "givens", such as a given point, a given line or a given circle.
Example 1: Perpendicular bisector
To construct the perpendicular bisector of the line segment between two points requires two circles, each centered on an endpoint and passing through the other endpoint (operation 2). The intersection points of these two circles (operation 4) are equidistant from the endpoints. The line through them (operation 1) is the perpendicular bisector.
Example 2: Angle bisector
To generate the line that bisects the angle between two given rays requires a circle of arbitrary radius centered on the intersection point P of the two lines (2). The intersection points of this circle with the two given lines (5) are T1 and T2. Two circles of the same radius, centered on T1 and T2, intersect at points P and Q. The line through P and Q (1) is an angle bisector. Rays have one angle bisector; lines have two, perpendicular to one another.
Preliminary results
A few basic results are helpful in solving special cases of Apollonius' problem. Note that a line and a point can be thought of as circles of infinitely large and infinitely small radius, respectively.
• A circle is tangent to a point if it passes through the point, and tangent to a line if they intersect at a single point P or, equivalently, if the line is perpendicular to the radius drawn from the circle's center to P.
• The center of any circle tangent to two given points (i.e., passing through both) must lie on their perpendicular bisector.
• The center of any circle tangent to two given lines must lie on one of the lines bisecting the angle between them.
• To draw the tangent lines to a circle from a given external point, draw a semicircle centered on the midpoint between the center of the circle and the given point; its intersections with the given circle are the points of tangency.
• The power of a point theorem (and the related harmonic mean) relates the lengths of segments from a point to a circle along any line through the point.
• The radical axis of two circles is the set of points of equal tangents, or more generally, equal power.
• Under inversion, circles may be transformed into lines or into other circles.
• If two circles are internally tangent, they remain so if their radii are increased or decreased by the same amount. Similarly, if two circles are externally tangent, they remain so if their radii are changed by the same amount in opposite directions, one increasing and the other decreasing.
Types of solutions
Type 1: Three points
PPP problems generally have a single solution. As shown above, if a circle passes through two given points P1 and P2, its center must lie somewhere on the perpendicular bisector line of the two points. Therefore, if the solution circle passes through three given points P1, P2 and P3, its center must lie on the perpendicular bisectors of ${\overline {\mathbf {P} _{1}\mathbf {P} _{2}}}$, ${\overline {\mathbf {P} _{1}\mathbf {P} _{3}}}$ and ${\overline {\mathbf {P} _{2}\mathbf {P} _{3}}}$. Provided the three points are not collinear, these bisectors meet at a single point, which is the center of the solution circle. The radius of the solution circle is the distance from that center to any one of the three given points.
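The PPP case translates directly into computation: the circumcenter is the common intersection of the perpendicular bisectors, which can be obtained by Cramer's rule. A minimal sketch (the function name is illustrative):

```python
import math

def circumcircle(p1, p2, p3):
    """Circle through three non-collinear points: returns center and radius."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed area of the triangle; zero iff the points are collinear.
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("collinear points: no solution circle")
    s1, s2, s3 = x1**2 + y1**2, x2**2 + y2**2, x3**2 + y3**2
    # Intersection of the perpendicular bisectors, via Cramer's rule.
    ux = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    uy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)
```

For the points (0, 0), (2, 0) and (0, 2) this yields the center (1, 1) and radius √2.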
Type 2: Three lines
LLL problems generally have 4 solutions. As shown above, if a circle is tangent to two given lines, its center must lie on one of the two lines that bisect the angle between the two given lines. Therefore, if a circle is tangent to three given lines L1, L2, and L3, its center C must be located at an intersection of the angle-bisecting lines of the three given lines. In general, there are four such points, giving four different solutions for the LLL Apollonius problem. The radius of each solution is determined by finding a point of tangency T, which may be done by choosing one of the three intersection points P between the given lines and drawing a circle on ${\overline {CP}}$ as diameter. The intersections of that circle with the given lines meeting at P are the two points of tangency.
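When the three given lines form a triangle, one of the four LLL solutions is the triangle's incircle (the other three are the excircles), and its center is the side-length-weighted average of the vertices. A sketch under that assumption, with illustrative names:

```python
import math

def incircle(A, B, C):
    """Incircle of the triangle with vertices A, B, C: one of the
    four LLL solutions (the other three are the excircles)."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    a, b, c = dist(B, C), dist(A, C), dist(A, B)  # side lengths opposite each vertex
    s = a + b + c
    # The incenter is the side-length-weighted average of the vertices.
    cx = (a * A[0] + b * B[0] + c * C[0]) / s
    cy = (a * A[1] + b * B[1] + c * C[1]) / s
    # radius = area / semiperimeter; area from the cross product
    area = abs((B[0] - A[0]) * (C[1] - A[1]) - (C[0] - A[0]) * (B[1] - A[1])) / 2
    return (cx, cy), 2 * area / s
```

For the 3–4–5 right triangle with vertices (0, 0), (4, 0), (0, 3), this gives center (1, 1) and radius 1.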
Type 3: One point, two lines
PLL problems generally have 2 solutions. As shown above, if a circle is tangent to two given lines, its center must lie on one of the two lines that bisect the angle between the two given lines. By symmetry, if such a circle passes through a given point P, it must also pass through a point Q that is the "mirror image" of P about the angle bisector. The two solution circles pass through both P and Q, and their radical axis is the line connecting those two points. Consider the point G at which the radical axis intersects one of the two given lines. Since every point on the radical axis has the same power relative to each circle, the tangent lengths ${\overline {\mathbf {GT} _{1}}}$ and ${\overline {\mathbf {GT} _{2}}}$ to the solution tangent points T1 and T2 are equal to each other, and their common square equals the product
${\overline {\mathbf {GP} }}\cdot {\overline {\mathbf {GQ} }}={\overline {\mathbf {GT} _{1}}}\cdot {\overline {\mathbf {GT} _{1}}}={\overline {\mathbf {GT} _{2}}}\cdot {\overline {\mathbf {GT} _{2}}}$
Thus, the distances ${\overline {\mathbf {GT} _{1}}}$ and ${\overline {\mathbf {GT} _{2}}}$ are both equal to the geometric mean of ${\overline {\mathbf {GP} }}$ and ${\overline {\mathbf {GQ} }}$. From G and this distance, the tangent points T1 and T2 can be found. Then, the two solution circles are the circles that pass through the three points (P, Q, T1) and (P, Q, T2), respectively.
Type 4: Two points, one line
PPL problems generally have 2 solutions. If a line m drawn through the given points P and Q is parallel to the given line l, the tangent point T of the circle with l is located at the intersection of the perpendicular bisector of ${\overline {PQ}}$ with l. In that case, the sole solution circle is the circle that passes through the three points P, Q and T.
If the line m is not parallel to the given line l, then it intersects l at a point G. By the power of a point theorem, the distance from G to a tangent point T must equal the geometric mean of ${\overline {\mathbf {GP} }}$ and ${\overline {\mathbf {GQ} }}$:
${\overline {\mathbf {GT} }}\cdot {\overline {\mathbf {GT} }}={\overline {\mathbf {GP} }}\cdot {\overline {\mathbf {GQ} }}$
Two points on the given line l are located at a distance ${\overline {\mathbf {GT} }}$ on either side of the point G; these may be denoted T1 and T2. The two solution circles are the circles that pass through the three points (P, Q, T1) and (P, Q, T2), respectively.
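Taking the given line l to be the x-axis (a harmless normalization), the tangent points follow from the geometric-mean relation above. A sketch, with illustrative names, assuming m is not parallel to l:

```python
import math

def ppl_tangent_points(P, Q):
    """Tangent points on the x-axis (the given line l) of the two circles
    through P and Q tangent to l; assumes line PQ is not parallel to l."""
    # G: intersection of the line PQ with the x-axis.
    t = P[1] / (P[1] - Q[1])
    gx = P[0] + t * (Q[0] - P[0])
    gp = math.hypot(P[0] - gx, P[1])
    gq = math.hypot(Q[0] - gx, Q[1])
    gt = math.sqrt(gp * gq)  # |GT| is the geometric mean of |GP| and |GQ|
    return (gx - gt, 0.0), (gx + gt, 0.0)
```

Each returned point T, together with P and Q, determines one solution circle; since that circle is tangent to the x-axis at T, its center lies directly above T, which gives a quick numerical check of the construction.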
Compass and straightedge construction
The two circles in the Two points, one line problem where the line through P and Q is not parallel to the given line l, can be constructed with compass and straightedge by:
• Draw the line m through the given points P and Q.
• The point G is where the lines l and m intersect.
• Draw circle C that has PQ as diameter.
• Draw one of the tangents from G to circle C.
• Point A is where the tangent and the circle touch.
• Draw circle D with center G through A.
• Circle D cuts line l at the points T1 and T2.
• One of the required circles is the circle through P, Q and T1.
• The other circle is the circle through P, Q and T2.
A faster construction, based on Gergonne's approach, is available when the intersections of l with both the line (PQ) and the perpendicular bisector of [PQ] exist:
• Draw a line m through P and Q intersecting l at G.
• Draw a perpendicular n through the middle of [PQ] intersecting l at O.
• Draw a circle w centered at O with radius |OP|=|OQ|.
• Draw a circle W with [OG] as a diameter intersecting w at M1 and M2.
• Draw a circle v centered at G with radius |GM1|=|GM2| intersecting l at T1 and T2.
• The circles passing through P, Q, T1 and P, Q, T2 are solutions.
A universal construction, usable when the intersections of l with (PQ) or with the perpendicular bisector of [PQ] are unavailable or do not exist:
• Draw a perpendicular n through the middle of [PQ] (point R).
• Draw a perpendicular k to l through P or Q intersecting l at K.
• Draw a circle w centered at R with radius |RK|.
• Draw two lines n1 and n2 passing through P and Q parallel to n and intersecting w at points A1, A2 and B1, B2, respectively.
• Draw two lines (A1B1) and (A2B2) intersecting l at T1 and T2, respectively.
• The circles passing through P, Q, T1 and P, Q, T2 are solutions.
Type 5: One circle, two points
CPP problems generally have 2 solutions. Consider a circle centered on one given point P that passes through the second point, Q. Since the solution circle must pass through P, inversion in this circle transforms the solution circle into a line λ. The same inversion transforms Q into itself, and (in general) the given circle C into another circle c. Thus, the problem becomes that of finding a solution line that passes through Q and is tangent to c, which was solved above; there are two such lines. Re-inversion produces the two corresponding solution circles of the original problem.
Type 6: One circle, one line, one point
CLP problems generally have 4 solutions. The solution of this special case is similar to that of the CPP Apollonius solution. Draw a circle centered on the given point P; since the solution circle must pass through P, inversion in this circle transforms the solution circle into a line λ. In general, the same inversion transforms the given line L and given circle C into two new circles, c1 and c2. Thus, the problem becomes that of finding a solution line tangent to the two inverted circles, which was solved above. There are four such lines, and re-inversion transforms them into the four solution circles of the Apollonius problem.
Type 7: Two circles, one point
CCP problems generally have 4 solutions. The solution of this special case is similar to that of CPP. Draw a circle centered on the given point P; since the solution circle must pass through P, inversion in this circle transforms the solution circle into a line λ. In general, the same inversion transforms the given circles C1 and C2 into two new circles, c1 and c2. Thus, the problem becomes that of finding a solution line tangent to the two inverted circles, which was solved above. There are four such lines, and re-inversion transforms them into the four solution circles of the original Apollonius problem.
Type 8: One circle, two lines
CLL problems generally have 8 solutions. This special case is solved most easily using scaling. The given circle is shrunk to a point, and the radius of the solution circle is either decreased by the same amount (for an internally tangent solution) or increased (for an externally tangent solution). Correspondingly, each given line is displaced parallel to itself by the same amount, toward or away from the center of the solution circle according to the quadrant in which that center falls. This shrinking of the given circle to a point reduces the problem to the PLL problem, solved above. In general, there are two such solutions per quadrant, giving eight solutions in all.
Type 9: Two circles, one line
CCL problems generally have 8 solutions. The solution of this special case is similar to CLL. The smaller circle is shrunk to a point, while adjusting the radii of the larger given circle and any solution circle, and displacing the line parallel to itself, according to whether they are internally or externally tangent to the smaller circle. This reduces the problem to CLP. Each CLP problem has four solutions, as described above, and there are two such problems, depending on whether the solution circle is internally or externally tangent to the smaller circle.
Special cases with no solutions
An Apollonius problem is impossible if the given circles are nested, i.e., if one given circle is completely enclosed within a second while the third lies completely outside it. This follows because any solution circle would have to cross the middle circle to pass from its tangency with the inner circle to its tangency with the outer circle. This general result has several special cases when the given circles are shrunk to points (zero radius) or expanded to straight lines (infinite radius). For example, the CCL problem has zero solutions if the two circles are on opposite sides of the line since, in that case, any solution circle would have to cross the given line non-tangentially to go from the tangent point of one circle to that of the other.
See also
• Problem of Apollonius
• Compass and straightedge constructions
References
• Altshiller-Court N (1952). College Geometry: An Introduction to the Modern Geometry of the Triangle and the Circle (2nd edition, revised and enlarged ed.). New York: Barnes and Noble. pp. 222–227.
• Benjamin Alvord (1855) Tangencies of Circles and of Spheres, Smithsonian Contributions, volume 8, from Google Books.
• Bruen A, Fisher JC, Wilker JB (1983). "Apollonius by Inversion". Mathematics Magazine. 56 (2): 97–103. doi:10.2307/2690380. JSTOR 2690380.
• Hartshorne R (2000). Geometry:Euclid and beyond. New York: Springer Verlag. pp. 346–355. ISBN 0-387-98650-2.
Special conformal transformation
In projective geometry, a special conformal transformation is a linear fractional transformation that is not an affine transformation. Thus the generation of a special conformal transformation involves the use of multiplicative inversion, which is the generator of linear fractional transformations that is not affine.
In mathematical physics, certain conformal maps known as spherical wave transformations are special conformal transformations.
Vector presentation
A special conformal transformation can be written[1]
$x'^{\mu }={\frac {x^{\mu }-b^{\mu }x^{2}}{1-2b\cdot x+b^{2}x^{2}}}={\frac {x^{2}}{|x-bx^{2}|^{2}}}(x^{\mu }-b^{\mu }x^{2})\,.$
It is a composition of an inversion (xμ → xμ/x2 = yμ), a translation (yμ → yμ − bμ = zμ), and another inversion (zμ → zμ/z2 = x′μ)
${\frac {x'^{\mu }}{x'^{2}}}={\frac {x^{\mu }}{x^{2}}}-b^{\mu }\,.$
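The composition above can be checked numerically. The sketch below uses Euclidean signature (so x² = |x|² and b·x is the ordinary dot product); for Minkowski signature the inner products would change accordingly:

```python
import numpy as np

def sct(x, b):
    """Special conformal transformation, closed form (Euclidean signature)."""
    x2 = x @ x
    return (x - b * x2) / (1 - 2 * (b @ x) + (b @ b) * x2)

def sct_composed(x, b):
    """The same map built as inversion -> translation -> inversion."""
    y = x / (x @ x)     # first inversion
    z = y - b           # translation by -b
    return z / (z @ z)  # second inversion
```

The two functions agree wherever the denominators are nonzero.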
Its infinitesimal generator is
$K_{\mu }=-i(2x_{\mu }x^{\nu }\partial _{\nu }-x^{2}\partial _{\mu })\,.$
Special conformal transformations have been used to study the force field of an electric charge in hyperbolic motion.[2]
Projective presentation
The inversion can also be taken[3] to be multiplicative inversion of biquaternions B. The complex algebra B can be extended to P(B) through the projective line over a ring. Homographies on P(B) include translations:
$U(q:1){\begin{pmatrix}1&0\\t&1\end{pmatrix}}=U(q+t:1).$
The homography group G(B) includes translations at infinity with respect to the embedding q → U(q:1);
${\begin{pmatrix}0&1\\1&0\end{pmatrix}}{\begin{pmatrix}1&0\\t&1\end{pmatrix}}{\begin{pmatrix}0&1\\1&0\end{pmatrix}}={\begin{pmatrix}1&t\\0&1\end{pmatrix}}.$
The matrix describes the action of a special conformal transformation.[4]
Group property
The translations form a subgroup of the linear fractional group acting on a projective line. There are two embeddings into the projective line of homogeneous coordinates: z → [z:1] and z → [1:z]. In the first embedding, an addition operation corresponds to a translation. The translations in the second embedding are the special conformal transformations, forming translations at infinity. Adding by such a transformation reciprocates the terms, adds, and then reciprocates the result; this operation is called the parallel operation. In the case of the complex plane, the parallel operator furnishes an addition operation on an alternative field that uses infinity but excludes zero. The translations at infinity thus form another subgroup of the homography group on the projective line.
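The parallel operation described above can be made concrete on the complex numbers; the function name is illustrative:

```python
def parallel(a, b):
    """Addition transported through reciprocation: reciprocate each
    term, add, then reciprocate the result."""
    return 1 / (1 / a + 1 / b)
```

This is the familiar rule for combining resistors in parallel, here viewed as ordinary addition conjugated by the inversion z → 1/z.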
References
1. Di Francesco; Mathieu, Sénéchal (1997). Conformal field theory. Graduate texts in contemporary physics. Springer. pp. 97–98. ISBN 978-0-387-94785-3.
2. Galeriu, Cǎlin (2019) "Electric charge in hyperbolic motion: the special conformal solution", European Journal of Physics 40(6) doi:10.1088/1361-6404/ab3df6
3. Arthur Conway (1911) "On the application of quaternions to some recent developments of electrical theory", Proceedings of the Royal Irish Academy 29:1–9, particularly page 9
4. Associative Composition Algebra/Homographies at Wikibooks
Special functions
Special functions are particular mathematical functions that have more or less established names and notations due to their importance in mathematical analysis, functional analysis, geometry, physics, or other applications.
The term is defined by consensus, and thus lacks a general formal definition, but the list of mathematical functions contains functions that are commonly accepted as special.
Tables of special functions
Many special functions appear as solutions of differential equations or integrals of elementary functions. Therefore, tables of integrals[1] usually include descriptions of special functions, and tables of special functions[2] include most important integrals; at least, the integral representation of special functions. Because symmetries of differential equations are essential to both physics and mathematics, the theory of special functions is closely related to the theory of Lie groups and Lie algebras, as well as certain topics in mathematical physics.
Symbolic computation engines usually recognize the majority of special functions.
Notations used for special functions
Functions with established international notations include the sine ($\sin $), cosine ($\cos $), exponential function ($\exp $), and error function ($\operatorname {erf} $, with complementary error function $\operatorname {erfc} $).
Some special functions have several notations:
• The natural logarithm may be denoted $\ln $, $\log $, $\log _{e}$, or $\operatorname {Log} $ depending on the context.
• The tangent function may be denoted $\tan $, $\operatorname {Tan} $, or $\operatorname {tg} $ ($\operatorname {tg} $ is used in several European languages).
• Arctangent may be denoted $\arctan $, $\operatorname {atan} $, $\operatorname {arctg} $, or $\tan ^{-1}$.
• The Bessel functions may be denoted
• $J_{n}(x),$
• $\operatorname {besselj} (n,x),$
• ${\rm {BesselJ}}[n,x].$
Subscripts are often used to indicate arguments, typically integers. In a few cases, the semicolon (;) or even backslash (\) is used as a separator. In such cases, translation into algorithmic languages is ambiguous and may lead to confusion.
Superscripts may indicate not only exponentiation, but modification of a function. Examples (particularly with trigonometric and hyperbolic functions) include:
• $\cos ^{3}(x)$ usually means $(\cos(x))^{3}$
• $\cos ^{2}(x)$ is typically $(\cos(x))^{2}$, but never $\cos(\cos(x))$
• $\cos ^{-1}(x)$ usually means $\arccos(x)$, not $(\cos(x))^{-1}$; this one typically causes the most confusion, since the meaning of this superscript is inconsistent with the others.
Evaluation of special functions
Most special functions are considered as functions of a complex variable. They are analytic; their singularities and branch cuts are described; their differential and integral representations are known; and expansions in Taylor or asymptotic series are available. In addition, relations with other special functions sometimes exist, so a complicated special function can be expressed in terms of simpler functions. Various representations can be used for evaluation; the simplest way to evaluate a function is to expand it into a Taylor series. However, such a representation may converge slowly, or not at all. In algorithmic languages, rational approximations are typically used, although they may behave badly in the case of complex argument(s).
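As a small illustration, the error function can be evaluated from its Maclaurin series and compared against a library implementation; the series converges for all arguments but suffers heavy cancellation for large |x|, which is one reason production code prefers rational approximations. A sketch (the function name is illustrative):

```python
import math

def erf_taylor(x, terms=30):
    """erf(x) = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1))."""
    total, term = 0.0, x  # term holds (-1)^n x^(2n+1) / n!
    for n in range(terms):
        total += term / (2 * n + 1)
        term *= -x * x / (n + 1)  # ratio of successive numerators
    return 2 / math.sqrt(math.pi) * total
```

For moderate arguments like x = 1, thirty terms already match `math.erf` to near machine precision.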
History of special functions
Classical theory
While trigonometry and exponential functions were systematized and unified by the eighteenth century, the search for a complete and unified theory of special functions has continued since the nineteenth century. The high point of special function theory in 1800–1900 was the theory of elliptic functions; treatises that were essentially complete, such as that of Tannery and Molk,[3] expounded all the basic identities of the theory using techniques from analytic function theory (based on complex analysis). The end of the century also saw a very detailed discussion of spherical harmonics.
Changing and fixed motivations
While pure mathematicians sought a broad theory deriving as many as possible of the known special functions from a single principle, for a long time the special functions were the province of applied mathematics. Applications to the physical sciences and engineering determined the relative importance of functions. Before electronic computation, the importance of a special function was affirmed by the laborious computation of extended tables of values for ready look-up, as for the familiar logarithm tables. (Babbage's difference engine was an attempt to compute such tables.) For this purpose, the main techniques are:
• numerical analysis, the discovery of infinite series or other analytical expressions allowing rapid calculation; and
• reduction of as many functions as possible to the given function.
More theoretical questions include: asymptotic analysis; analytic continuation and monodromy in the complex plane; and symmetry principles and other structural equations.
Twentieth century
The twentieth century saw several waves of interest in special function theory. The classic Whittaker and Watson (1902) textbook[4] sought to unify the theory using complex analysis; the G. N. Watson tome A Treatise on the Theory of Bessel Functions pushed the techniques as far as possible for one important type, including asymptotic results.
The later Bateman Manuscript Project, under the editorship of Arthur Erdélyi, attempted to be encyclopedic, and came around the time when electronic computation was coming to the fore and tabulation ceased to be the main issue.
Contemporary theories
The modern theory of orthogonal polynomials is of a definite but limited scope. Hypergeometric series, observed by Felix Klein to be important in astronomy and mathematical physics,[5] became an intricate theory, in need of later conceptual arrangement. Lie groups, and in particular their representation theory, explain what a spherical function can be in general; from 1950 onwards substantial parts of classical theory could be recast in terms of Lie groups. Further, work on algebraic combinatorics also revived interest in older parts of the theory. Conjectures of Ian G. Macdonald helped to open up large and active new fields with the typical special function flavour. Difference equations have begun to take their place besides differential equations as a source for special functions.
Special functions in number theory
In number theory, certain special functions have traditionally been studied, such as particular Dirichlet series and modular forms. Almost all aspects of special function theory are reflected there, as well as some new ones, such as came out of the monstrous moonshine theory.
Special functions of matrix arguments
Analogues of several special functions have been defined on the space of positive definite matrices, among them the power function which goes back to Atle Selberg,[6] the multivariate gamma function,[7] and types of Bessel functions.[8]
The NIST Digital Library of Mathematical Functions has a section covering several special functions of matrix arguments.[9]
Researchers
• George Andrews
• Richard Askey
• Harold Exton
• George Gasper
• Wolfgang Hahn
• Mizan Rahman
• Mourad E. H. Ismail
• Tom Koornwinder
• Waleed Al-Salam
• Dennis Stanton
• Theodore S. Chihara
• James A. Wilson
• Erik Koelink
• Eric Rains
• Arpad Baricz
See also
• List of mathematical functions
• List of special functions and eponyms
• Elementary function
References
1. Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich; Jeffrey, Alan (2015) [October 2014]. Zwillinger, Daniel; Moll, Victor Hugo (eds.). Table of Integrals, Series, and Products. Translated by Scripta Technica, Inc. (8 ed.). Academic Press, Inc. ISBN 978-0-12-384933-5. LCCN 2014010276.
2. Abramowitz, Milton; Stegun, Irene A. (1964). Handbook of Mathematical Functions. U.S. Department of Commerce, National Bureau of Standards.
3. Tannery, Jules; Molk, Jules (1972). Éléments de la théorie des fonctions elliptiques. Chelsea. ISBN 0-8284-0257-4. OCLC 310702720.
4. Whittaker, E. T.; Watson, G. N. (1996-09-13). A Course of Modern Analysis. Cambridge University Press. ISBN 978-0-521-58807-2.
5. Vilenkin, N.J. (1968). Special Functions and the Theory of Group Representations. Providence, RI: American Mathematical Society. p. iii. ISBN 978-0-8218-1572-4.
6. Terras 2016, p. 44.
7. Terras 2016, p. 47.
8. Terras 2016, pp. 56ff.
9. D. St. P. Richards (n.d.). "Chapter 35 Functions of Matrix Argument". Digital Library of Mathematical Functions. Retrieved 23 July 2022.
Bibliography
• Andrews, George E.; Askey, Richard; Roy, Ranjan (1999). Special functions. Encyclopedia of Mathematics and its Applications. Vol. 71. Cambridge University Press. ISBN 978-0-521-62321-6. MR 1688958.
• Terras, Audrey (2016). Harmonic analysis on symmetric spaces – Higher rank spaces, positive definite matrix space and generalizations (second ed.). Springer Nature. ISBN 978-1-4939-3406-5. MR 3496932.
• Whittaker, E. T.; Watson, G. N. (1996-09-13). A Course of Modern Analysis. Cambridge University Press. ISBN 978-0-521-58807-2.
External links
• National Institute of Standards and Technology, United States Department of Commerce. NIST Digital Library of Mathematical Functions. Archived from the original on December 13, 2018.
• Weisstein, Eric W. "Special Function". MathWorld.
• Online calculator, Online scientific calculator with over 100 functions (>=32 digits, many complex) (German language)
• Special functions at EqWorld: The World of Mathematical Equations
• Special functions and polynomials by Gerard 't Hooft and Stefan Nobbenhuis (April 8, 2013)
• Numerical Methods for Special Functions, by A. Gil, J. Segura, N.M. Temme (2007).
• R. Jagannathan, (P,Q)-Special Functions
• Specialfunctionswiki
Special group (algebraic group theory)
In the theory of algebraic groups, a special group is a linear algebraic group G with the property that every principal G-bundle is locally trivial in the Zariski topology. Special groups include the general linear group, the special linear group, and the symplectic group. Special groups are necessarily connected. Products of special groups are special. The projective linear group is not special because there exist Azumaya algebras, which are trivial over a finite separable extension, but not over the base field.
Special groups were defined in 1958 by Jean-Pierre Serre[1] and classified soon thereafter by Alexander Grothendieck.[2]
References
1. Serre, Jean-Pierre (1958). "Espaces fibrés algébriques". Séminaire Claude Chevalley (in French). 3 – via Numdam.
2. Grothendieck, Alexander (1958). "Torsion homologique et sections rationnelles". Séminaire Claude Chevalley (in French). 3 – via Numdam.
Special group (finite group theory)
In group theory, a discipline within abstract algebra, a special group is a finite group of prime power order that is either elementary abelian itself or of class 2 with its derived group, its center, and its Frattini subgroup all equal and elementary abelian (Gorenstein 1980, p.183). A special group of order pn that has class 2 and whose derived group has order p is called an extra special group.
References
• Gorenstein, D. (1980), Finite groups (2nd ed.), New York: Chelsea Publishing Co., ISBN 978-0-8284-0301-6, MR 0569209
Special linear Lie algebra
In mathematics, the special linear Lie algebra of order n (denoted ${\mathfrak {sl}}_{n}(F)$ or ${\mathfrak {sl}}(n,F)$) is the Lie algebra of $n\times n$ matrices with trace zero and with the Lie bracket $[X,Y]:=XY-YX$. This algebra is well studied and understood, and is often used as a model for the study of other Lie algebras. The Lie group that it generates is the special linear group.
Applications
The Lie algebra ${\mathfrak {sl}}_{2}(\mathbb {C} )$ is central to the study of special relativity, general relativity and supersymmetry: its fundamental representation is the so-called spinor representation, while its adjoint representation generates the Lorentz group SO(3,1) of special relativity.
The algebra ${\mathfrak {sl}}_{2}(\mathbb {R} )$ plays an important role in the study of chaos and fractals, as it generates the Möbius group SL(2,R), which describes the automorphisms of the hyperbolic plane, the simplest Riemann surface of negative curvature; by contrast, SL(2,C) describes the automorphisms of the hyperbolic 3-dimensional ball.
Representation theory
See also: Representation theory of semisimple Lie algebras
Representation theory of sl2ℂ
The Lie algebra ${\mathfrak {sl}}_{2}\mathbb {C} $ is a three-dimensional complex Lie algebra. Its defining feature is that it contains a basis $e,h,f$ satisfying the commutation relations
$[e,f]=h$, $[h,f]=-2f$, and $[h,e]=2e$.
This is a Cartan-Weyl basis for ${\mathfrak {sl}}_{2}\mathbb {C} $. It has an explicit realization in terms of two-by-two complex matrices with zero trace:
$E={\begin{bmatrix}0&1\\0&0\end{bmatrix}}$, $F={\begin{bmatrix}0&0\\1&0\end{bmatrix}}$, $H={\begin{bmatrix}1&0\\0&-1\end{bmatrix}}$.
This is the fundamental or defining representation for ${\mathfrak {sl}}_{2}\mathbb {C} $.
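As a quick sanity check, the commutation relations above can be verified numerically for these matrices; the following is a sketch using NumPy (the helper name `bracket` is ours):

```python
import numpy as np

# The defining 2x2 matrices of sl2(C)
E = np.array([[0, 1], [0, 0]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)
H = np.array([[1, 0], [0, -1]], dtype=complex)

def bracket(X, Y):
    """Lie bracket on matrices: the commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

# Check [e, f] = h, [h, f] = -2f, [h, e] = 2e
assert np.allclose(bracket(E, F), H)
assert np.allclose(bracket(H, F), -2 * F)
assert np.allclose(bracket(H, E), 2 * E)
# All three matrices are traceless, as elements of sl2 must be
assert np.isclose(np.trace(E), 0) and np.isclose(np.trace(H), 0)
```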
The Lie algebra ${\mathfrak {sl}}_{2}\mathbb {C} $ can be viewed as a subspace of its universal enveloping algebra $U=U({\mathfrak {sl}}_{2}\mathbb {C} )$ and, in $U$, there are the following commutator relations shown by induction:[1]
$[h,f^{k}]=-2kf^{k},\,[h,e^{k}]=2ke^{k}$,
$[e,f^{k}]=-k(k-1)f^{k-1}+kf^{k-1}h$.
Note that, here, the powers $f^{k}$, etc. refer to powers as elements of the algebra U and not matrix powers. The first basic fact (that follows from the above commutator relations) is:[1]
Lemma — Let $V$ be a representation of ${\mathfrak {sl}}_{2}\mathbb {C} $ and $v$ a vector in it. Set $v_{j}={1 \over j!}f^{j}\cdot v$ for each $j=0,1,\dots $. If $v$ is an eigenvector of the action of $h$; i.e., $h\cdot v=\lambda v$ for some complex number $\lambda $, then, for each $j=0,1,\dots $,
• $h\cdot v_{j}=(\lambda -2j)v_{j}$.
• $e\cdot v_{j}={1 \over j!}f^{j}\cdot e\cdot v+(\lambda -j+1)v_{j-1}$.
• $f\cdot v_{j}=(j+1)v_{j+1}$.
From this lemma, one deduces the following fundamental result:[2]
Theorem — Let $V$ be a representation of ${\mathfrak {sl}}_{2}\mathbb {C} $ that may have infinite dimension and $v$ a vector in $V$ that is a ${\mathfrak {b}}=\mathbb {C} h+\mathbb {C} e$-weight vector (${\mathfrak {b}}$ is a Borel subalgebra).[3] Then
• Those $v_{j}$'s that are nonzero are linearly independent.
• If some $v_{j}$ is zero, then the $h$-eigenvalue of v is a nonnegative integer $N\geq 0$ such that $v_{0},v_{1},\dots ,v_{N}$ are nonzero and $v_{N+1}=v_{N+2}=\cdots =0$. Moreover, the subspace spanned by the $v_{j}$'s is an irreducible ${\mathfrak {sl}}_{2}(\mathbb {C} )$-subrepresentation of $V$.
The first statement is true since either $v_{j}$ is zero or it has an $h$-eigenvalue distinct from the eigenvalues of the other nonzero ones. Saying $v$ is a ${\mathfrak {b}}$-weight vector is equivalent to saying that it is simultaneously an eigenvector of $h$ and $e$; a short calculation then shows that, in that case, the $e$-eigenvalue of $v$ is zero: $e\cdot v=0$. Thus, for some integer $N\geq 0$, $v_{N}\neq 0,v_{N+1}=v_{N+2}=\cdots =0$ and in particular, by the lemma above,
$0=e\cdot v_{N+1}=(\lambda -(N+1)+1)v_{N},$
which implies that $\lambda =N$. It remains to show that $W=\operatorname {span} \{v_{j}\mid j\geq 0\}$ is irreducible. If $0\neq W'\subset W$ is a subrepresentation, then it admits an $h$-eigenvector, which must have eigenvalue of the form $N-2j$ and is thus proportional to $v_{j}$. By the preceding lemma, $v=v_{0}$ is then in $W'$, and thus $W'=W$. $\square $
As a corollary, one deduces:
• If $V$ has finite dimension and is irreducible, then the $h$-eigenvalue of $v$ is a nonnegative integer $N$ and $V$ has a basis $v,fv,f^{2}v,\cdots ,f^{N}v$.
• Conversely, if the $h$-eigenvalue of $v$ is a nonnegative integer $N$ and $V$ is irreducible, then $V$ has a basis $v,fv,f^{2}v,\cdots ,f^{N}v$; in particular, $V$ has finite dimension.
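The theorem's conclusion can be made concrete by writing down the $(N+1)$-dimensional irreducible representation in matrix form and checking the defining relations. The following is a hedged sketch (the basis conventions follow the lemma above; the function name is ours):

```python
import numpy as np

def irrep(N):
    """Sketch of the (N+1)-dimensional irreducible sl2(C)-representation on
    the basis v_0, ..., v_N, following the lemma: h.v_j = (N-2j) v_j,
    f.v_j = (j+1) v_{j+1}, and e.v_j = (N-j+1) v_{j-1}."""
    E = np.zeros((N + 1, N + 1))
    F = np.zeros((N + 1, N + 1))
    H = np.diag([float(N - 2 * j) for j in range(N + 1)])
    for j in range(N):
        F[j + 1, j] = j + 1      # f . v_j = (j+1) v_{j+1}
        E[j, j + 1] = N - j      # e . v_{j+1} = (N-j) v_j
    return E, F, H

E, F, H = irrep(4)               # the 5-dimensional irreducible representation
br = lambda X, Y: X @ Y - Y @ X
assert np.allclose(br(E, F), H)
assert np.allclose(br(H, E), 2 * E)
assert np.allclose(br(H, F), -2 * F)
```

The same check passes for any $N$, reflecting that these formulas define a representation in every dimension.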
This special case of ${\mathfrak {sl}}_{2}$ illustrates a general way to find irreducible representations of Lie algebras: decompose the algebra into three subalgebras "h" (the Cartan subalgebra), "e", and "f", which behave approximately like their namesakes in ${\mathfrak {sl}}_{2}$. In an irreducible representation, there is then a "highest" eigenvector of "h", on which "e" acts by zero, and a basis of the representation is generated by the action of "f" on that highest eigenvector. See the theorem of the highest weight.
Representation theory of slnℂ
When ${\mathfrak {g}}={\mathfrak {sl}}_{n}\mathbb {C} ={\mathfrak {sl}}(V)$ for a complex vector space $V$ of dimension $n$, each finite-dimensional irreducible representation of ${\mathfrak {g}}$ can be found as a subrepresentation of a tensor power of $V$.[4]
The Lie algebra can be explicitly realized as a matrix Lie algebra of traceless $n\times n$ matrices. This is the fundamental representation for ${\mathfrak {sl}}_{n}\mathbb {C} $.
Set $M_{i,j}$ to be the matrix with one in the $i,j$ entry and zeroes everywhere else. Then
$H_{i}:=M_{i,i}-M_{i+1,i+1},{\text{ with }}1\leq i\leq n-1$
$M_{i,j},{\text{ with }}i\neq j$
form a basis for ${\mathfrak {sl}}_{n}\mathbb {C} $. This is technically an abuse of notation: these are really the images of the basis of ${\mathfrak {sl}}_{n}\mathbb {C} $ in the fundamental representation.
Furthermore, this is in fact a Cartan–Weyl basis, with the $H_{i}$ spanning the Cartan subalgebra. Introducing the notation $E_{i,j}=M_{i,j}$ if $j>i$, and $F_{i,j}=M_{i,j}^{T}=M_{j,i}$, also for $j>i$, the $E_{i,j}$ are root vectors for the positive roots and the $F_{i,j}$ for the corresponding negative roots.
Root vectors for a basis of simple roots are given by the $E_{i,i+1}$ for $1\leq i\leq n-1$.
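For small $n$ this Cartan–Weyl basis can be constructed and checked explicitly. The following sketch (variable names are ours) builds it for $n=3$ and verifies the dimension count $n^{2}-1$ and a simple-root bracket:

```python
import numpy as np

n = 3

def unit(i, j):
    """The matrix M_{i,j}: a 1 in entry (i, j) and zeros elsewhere."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

H = [unit(i, i) - unit(i + 1, i + 1) for i in range(n - 1)]            # Cartan part
E = {(i, j): unit(i, j) for i in range(n) for j in range(n) if j > i}  # positive roots
F = {(i, j): unit(j, i) for i in range(n) for j in range(n) if j > i}  # negative roots

basis = H + list(E.values()) + list(F.values())
assert len(basis) == n * n - 1                        # dim sl_n = n^2 - 1
assert all(abs(np.trace(X)) < 1e-12 for X in basis)   # all basis elements traceless

# For the simple root vector E_{1,2}: [H_1, E_{1,2}] = 2 E_{1,2}  (0-based indices)
comm = H[0] @ E[(0, 1)] - E[(0, 1)] @ H[0]
assert np.allclose(comm, 2 * E[(0, 1)])
```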
Notes
1. Kac 2003, § 3.2.
2. Serre 2001, Ch IV, § 3, Theorem 1, Corollary 1.
3. Such a $v$ is also commonly called a primitive element of $V$.
4. Serre 2000, Ch. VII, § 6.
References
• Etingof, Pavel. "Lecture Notes on Representation Theory".
• Kac, Victor (1990). Infinite dimensional Lie algebras (3rd ed.). Cambridge University Press. ISBN 0-521-46693-8.
• Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer
• A. L. Onishchik, E. B. Vinberg, V. V. Gorbatsevich, Structure of Lie groups and Lie algebras. Lie groups and Lie algebras, III. Encyclopaedia of Mathematical Sciences, 41. Springer-Verlag, Berlin, 1994. iv+248 pp. (A translation of Current problems in mathematics. Fundamental directions. Vol. 41, Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1990. Translation by V. Minachin. Translation edited by A. L. Onishchik and E. B. Vinberg) ISBN 3-540-54683-9
• V. L. Popov, E. B. Vinberg, Invariant theory. Algebraic geometry. IV. Linear algebraic groups. Encyclopaedia of Mathematical Sciences, 55. Springer-Verlag, Berlin, 1994. vi+284 pp. (A translation of Algebraic geometry. 4, Akad. Nauk SSSR Vsesoyuz. Inst. Nauchn. i Tekhn. Inform., Moscow, 1989. Translation edited by A. N. Parshin and I. R. Shafarevich) ISBN 3-540-54682-0
• Serre, Jean-Pierre (2000), Algèbres de Lie semi-simples complexes [Complex Semisimple Lie Algebras], translated by Jones, G. A., Springer, ISBN 978-3-540-67827-4.
See also
• Affine Weyl group
• Finite Coxeter group
• Hasse diagram
• Linear algebraic group
• Nilpotent orbit
• Root system
• sl2-triple
• Weyl group
Special linear group
In mathematics, the special linear group SL(n, F) of degree n over a field F is the set of n × n matrices with determinant 1, with the group operations of ordinary matrix multiplication and matrix inversion. This is the normal subgroup of the general linear group given by the kernel of the determinant
$\det \colon \operatorname {GL} (n,F)\to F^{\times }.$
where F× is the multiplicative group of F (that is, F excluding 0).
These elements are "special" in that they form an algebraic subvariety of the general linear group – they satisfy a polynomial equation (since the determinant is polynomial in the entries).
When F is a finite field of order q, the notation SL(n, q) is sometimes used.
Geometric interpretation
The special linear group SL(n, R) can be characterized as the group of volume and orientation preserving linear transformations of Rn; this corresponds to the interpretation of the determinant as measuring change in volume and orientation.
Lie subgroup
When F is R or C, SL(n, F) is a Lie subgroup of GL(n, F) of dimension n2 − 1. The Lie algebra ${\mathfrak {sl}}(n,F)$ of SL(n, F) consists of all n × n matrices over F with vanishing trace. The Lie bracket is given by the commutator.
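The identity det(exp A) = exp(tr A) makes this concrete: exponentiating any traceless matrix lands in SL(n, F). A minimal numerical sketch (the series-based `expm` helper is ours, adequate for small matrices):

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via its power series (fine for small matrices)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A -= np.trace(A) / 3 * np.eye(3)            # project onto sl(3, R): make A traceless
assert abs(np.trace(A)) < 1e-12

g = expm(A)
assert abs(np.linalg.det(g) - 1.0) < 1e-8   # det(exp A) = exp(tr A) = 1, so g is in SL(3, R)
```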
Topology
Any invertible matrix can be uniquely represented, according to the polar decomposition, as the product of a unitary matrix and a hermitian matrix with positive eigenvalues. The determinant of the unitary factor lies on the unit circle, while that of the hermitian factor is real and positive; since for a matrix from the special linear group the product of these two determinants must be 1, each of them must equal 1. Therefore, a special linear matrix can be written as the product of a special unitary matrix (or special orthogonal matrix in the real case) and a positive definite hermitian matrix (or symmetric matrix in the real case) having determinant 1.
Thus the topology of the group SL(n, C) is the product of the topology of SU(n) and the topology of the group of hermitian matrices of unit determinant with positive eigenvalues. A hermitian matrix of unit determinant and having positive eigenvalues can be uniquely expressed as the exponential of a traceless hermitian matrix, and therefore the topology of this is that of (n2 − 1)-dimensional Euclidean space.[1] Since SU(n) is simply connected,[2] we conclude that SL(n, C) is also simply connected, for all n greater than or equal to 2.
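The polar factors can be computed from the SVD: if $A=USV^{H}$, the unitary part is $UV^{H}$ and the hermitian part is $VSV^{H}$. A sketch for a random matrix scaled into SL(3, C) (the scaling trick and names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = A / np.linalg.det(A) ** (1 / 3)          # scale so that det A = 1, i.e. A in SL(3, C)
assert abs(np.linalg.det(A) - 1) < 1e-8

# Polar decomposition A = W P via the SVD A = U S V^H:
U, S, Vh = np.linalg.svd(A)
W = U @ Vh                                   # unitary factor
P = Vh.conj().T @ np.diag(S) @ Vh            # positive-definite hermitian factor
assert np.allclose(W @ P, A)
assert abs(np.linalg.det(W) - 1) < 1e-8      # W is special unitary
assert abs(np.linalg.det(P) - 1) < 1e-8      # P is hermitian with unit determinant
assert np.all(S > 0)                         # eigenvalues of P are positive
```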
The topology of SL(n, R) is the product of the topology of SO(n) and the topology of the group of symmetric matrices with positive eigenvalues and unit determinant. Since the latter matrices can be uniquely expressed as the exponential of symmetric traceless matrices, then this latter topology is that of (n + 2)(n − 1)/2-dimensional Euclidean space. Thus, the group SL(n, R) has the same fundamental group as SO(n), that is, Z for n = 2 and Z2 for n > 2.[3] In particular this means that SL(n, R), unlike SL(n, C), is not simply connected, for n greater than 1.
Relations to other subgroups of GL(n, A)
See also: Whitehead's lemma
Two related subgroups, which in some cases coincide with SL, and in other cases are accidentally conflated with SL, are the commutator subgroup of GL, and the group generated by transvections. These are both subgroups of SL (transvections have determinant 1, and det is a map to an abelian group, so [GL, GL] ≤ SL), but in general do not coincide with it.
The group generated by transvections is denoted E(n, A) (for elementary matrices) or TV(n, A). By the second Steinberg relation, for n ≥ 3, transvections are commutators, so for n ≥ 3, E(n, A) ≤ [GL(n, A), GL(n, A)].
For n = 2, transvections need not be commutators (of 2 × 2 matrices); for example, when A is F2, the field of two elements, one has
$\operatorname {Alt} (3)\cong [\operatorname {GL} (2,\mathbf {F} _{2}),\operatorname {GL} (2,\mathbf {F} _{2})]<\operatorname {E} (2,\mathbf {F} _{2})=\operatorname {SL} (2,\mathbf {F} _{2})=\operatorname {GL} (2,\mathbf {F} _{2})\cong \operatorname {Sym} (3),$
where Alt(3) and Sym(3) denote the alternating and symmetric groups on 3 letters, respectively.
However, if A is a field with more than 2 elements, then E(2, A) = [GL(2, A), GL(2, A)], and if A is a field with more than 3 elements, E(2, A) = [SL(2, A), SL(2, A)].
In some circumstances these coincide: the special linear group over a field or a Euclidean domain is generated by transvections, and the stable special linear group over a Dedekind domain is generated by transvections. For more general rings the stable difference is measured by the special Whitehead group SK1(A) := SL(A)/E(A), where SL(A) and E(A) are the stable groups of the special linear group and elementary matrices.
Generators and relations
If working over a ring where SL is generated by transvections (such as a field or Euclidean domain), one can give a presentation of SL using transvections with some relations. Transvections satisfy the Steinberg relations, but these are not sufficient: the resulting group is the Steinberg group, which is not the special linear group, but rather the universal central extension of the commutator subgroup of GL.
A sufficient set of relations for SL(n, Z) for n ≥ 3 is given by two of the Steinberg relations, plus a third relation (Conder, Robertson & Williams 1992, p. 19). Let Tij := eij(1) be the elementary matrix with 1's on the diagonal and in the ij position, and 0's elsewhere (and i ≠ j). Then
${\begin{aligned}\left[T_{ij},T_{jk}\right]&=T_{ik}&&{\text{for }}i\neq k\\[4pt]\left[T_{ij},T_{k\ell }\right]&=\mathbf {1} &&{\text{for }}i\neq \ell ,j\neq k\\[4pt]\left(T_{12}T_{21}^{-1}T_{12}\right)^{4}&=\mathbf {1} \end{aligned}}$
are a complete set of relations for SL(n, Z), n ≥ 3.
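These relations are easy to check numerically for concrete elementary matrices. The following sketch uses 0-based indices (the helpers `T`, `inv`, and `comm` are ours):

```python
import numpy as np

def T(i, j, n=3):
    """Elementary matrix e_ij(1): the identity plus a 1 in position (i, j)."""
    M = np.eye(n, dtype=np.int64)
    M[i, j] = 1
    return M

inv = lambda M: np.rint(np.linalg.inv(M)).astype(np.int64)
comm = lambda a, b: a @ b @ inv(a) @ inv(b)        # group commutator [a, b]

# First relation: [T_12, T_23] = T_13 (written here with 0-based indices)
assert np.array_equal(comm(T(0, 1), T(1, 2)), T(0, 2))
# Second relation: T_12 and T_13 commute (i != l and j != k)
assert np.array_equal(comm(T(0, 1), T(0, 2)), np.eye(3, dtype=np.int64))
# Third relation: (T_12 T_21^{-1} T_12)^4 = 1
R = T(0, 1) @ inv(T(1, 0)) @ T(0, 1)
assert np.array_equal(np.linalg.matrix_power(R, 4), np.eye(3, dtype=np.int64))
```

The element $R$ is the 90-degree rotation in the first two coordinates, which indeed has order 4.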
SL±(n,F)
In characteristic other than 2, the set of matrices with determinant ±1 forms another subgroup of GL, with SL as an index 2 subgroup (necessarily normal); in characteristic 2 this is the same as SL. This forms a short exact sequence of groups:
$\mathrm {SL} (n,F)\to \mathrm {SL} ^{\pm }(n,F)\to \{\pm 1\}.$
This sequence splits by taking any matrix with determinant −1, for example the diagonal matrix $(-1,1,\dots ,1).$ If $n=2k+1$ is odd, the negative identity matrix $-I$ is in SL±(n,F) but not in SL(n,F), and thus the group splits as an internal direct product $SL^{\pm }(2k+1,F)\cong SL(2k+1,F)\times \{\pm I\}$. However, if $n=2k$ is even, $-I$ is already in SL(n,F), so SL± does not split, and in general is a non-trivial group extension.
Over the real numbers, SL±(n, R) has two connected components: SL(n, R) and the coset of matrices of determinant −1. The two components are isomorphic, with an identification depending on a choice of basepoint (a matrix with determinant −1). In odd dimension they are naturally identified by $-I$, but in even dimension there is no natural identification.
Structure of GL(n,F)
The group GL(n, F) splits over its determinant (we use F× ≅ GL(1, F) → GL(n, F) as the monomorphism from F× to GL(n, F), see semidirect product), and therefore GL(n, F) can be written as a semidirect product of SL(n, F) by F×:
GL(n, F) = SL(n, F) ⋊ F×.
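Concretely, any invertible matrix g factors as g = s·d, with s ∈ SL(n, F) and d the image of det(g) under the stated embedding of F×. A numerical sketch (names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
g = rng.standard_normal((3, 3))
while abs(np.linalg.det(g)) < 1e-6:          # ensure g is invertible (almost surely it is)
    g = rng.standard_normal((3, 3))

# Split g along the determinant: d = diag(det g, 1, 1) is the image of det(g)
# under the embedding F^x -> GL(n, F), and s = g d^{-1} lands in SL(n, F).
d = np.diag([np.linalg.det(g), 1.0, 1.0])
s = g @ np.linalg.inv(d)
assert abs(np.linalg.det(s) - 1) < 1e-8      # s is special linear
assert np.allclose(s @ d, g)                 # g = s * d
```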
See also
• SL(2, R)
• SL(2, C)
• Modular group
• Projective linear group
• Conformal map
• Representations of classical Lie groups
References
1. Hall 2015 Section 2.5
2. Hall 2015 Proposition 13.11
3. Hall 2015 Sections 13.2 and 13.3
• Conder, Marston; Robertson, Edmund; Williams, Peter (1992), "Presentations for 3-dimensional special linear groups over integer rings", Proceedings of the American Mathematical Society, American Mathematical Society, 115 (1): 19–26, doi:10.2307/2159559, JSTOR 2159559, MR 1079696
• Hall, Brian C. (2015), Lie groups, Lie algebras, and representations: An elementary introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer
Brill–Noether theory
In algebraic geometry, Brill–Noether theory, introduced by Alexander von Brill and Max Noether (1874), is the study of special divisors, certain divisors on a curve C that determine more compatible functions than would be predicted. In classical language, special divisors move on the curve in a "larger than expected" linear system of divisors.
Throughout, we consider a projective smooth curve over the complex numbers (or over some other algebraically closed field).
The condition to be a special divisor D can be formulated in sheaf cohomology terms, as the non-vanishing of the H1 cohomology of the sheaf of sections of the invertible sheaf or line bundle associated to D. This means that, by the Riemann–Roch theorem, the H0 cohomology or space of holomorphic sections is larger than expected.
Alternatively, by Serre duality, the condition is that there exist holomorphic differentials with divisor ≥ –D on the curve.
Main theorems of Brill–Noether theory
For a given genus g, the moduli space for curves C of genus g should contain a dense subset parameterizing those curves with the minimum in the way of special divisors. One goal of the theory is to 'count constants', for those curves: to predict the dimension of the space of special divisors (up to linear equivalence) of a given degree d, as a function of g, that must be present on a curve of that genus.
The basic statement can be formulated in terms of the Picard variety Pic(C) of a smooth curve C, and the subset of Pic(C) corresponding to divisor classes of divisors D, with given values d of deg(D) and r of l(D) – 1 in the notation of the Riemann–Roch theorem. There is a lower bound ρ for the dimension dim(d, r, g) of this subscheme in Pic(C):
$\dim(d,r,g)\geq \rho =g-(r+1)(g-d+r)$
called the Brill–Noether number. The formula can be memorized via the mnemonic (using the desired $h^{0}(D)=r+1$ and Riemann–Roch)
$g-(r+1)(g-d+r)=g-h^{0}(D)h^{1}(D)$
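The Brill–Noether number is a one-line computation. The following sketch (the function name is ours) checks it against two classical facts and the Riemann–Roch identity above:

```python
def rho(g, r, d):
    """Brill-Noether number: the expected dimension g - (r+1)(g-d+r)."""
    return g - (r + 1) * (g - d + r)

# A general curve of genus 4 carries a g^1_3 (it is trigonal): rho = 0.
assert rho(g=4, r=1, d=3) == 0
# A general curve of genus 3 is not hyperelliptic: rho < 0 for a g^1_2.
assert rho(g=3, r=1, d=2) == -1
# rho = g - h0*h1, with h0 = r+1 and h1 = g-d+r by Riemann-Roch:
g_, r_, d_ = 7, 2, 6
h0, h1 = r_ + 1, g_ - d_ + r_
assert rho(g_, r_, d_) == g_ - h0 * h1
```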
For smooth curves C and for d ≥ 1, r ≥ 0 the basic results about the space $G_{d}^{r}$ of linear systems on C of degree d and dimension r are as follows.
• George Kempf proved that if ρ ≥ 0 then $G_{d}^{r}$ is not empty, and every component has dimension at least ρ.
• William Fulton and Robert Lazarsfeld proved that if ρ ≥ 1 then $G_{d}^{r}$ is connected.
• Griffiths & Harris (1980) showed that if C is generic then $G_{d}^{r}$ is reduced and all components have dimension exactly ρ (so in particular $G_{d}^{r}$ is empty if ρ < 0).
• David Gieseker proved that if C is generic then $G_{d}^{r}$ is smooth. By the connectedness result this implies it is irreducible if ρ > 0.
Other, more recent results, not necessarily stated in terms of the space $G_{d}^{r}$ of linear systems, include:
• Eric Larson (2017) proved that if ρ ≥ 0, r ≥ 3, and n ≥ 1, the restriction maps $H^{0}({\mathcal {O}}_{\mathbb {P} ^{r}}(n))\rightarrow H^{0}({\mathcal {O}}_{C}(n))$ are of maximal rank, also known as the maximal rank conjecture.[1][2]
• Eric Larson and Isabel Vogt (2022) proved that if ρ ≥ 0 then there is a curve C interpolating through n general points in $\mathbb {P} ^{r}$ if and only if $(r-1)n\leq (r+1)d-(r-3)(g-1),$ except in 4 exceptional cases: (d, g, r) ∈ {(5,2,3),(6,4,3),(7,2,5),(10,6,5)}.[3][4]
References
• Barbon, Andrea (2014). Algebraic Brill–Noether Theory (PDF) (Master's thesis). Radboud University Nijmegen.
• Arbarello, Enrico; Cornalba, Maurizio; Griffiths, Philip A.; Harris, Joe (1985). "The Basic Results of the Brill–Noether Theory". Geometry of Algebraic Curves. Grundlehren der Mathematischen Wissenschaften 267. Vol. I. pp. 203–224. doi:10.1007/978-1-4757-5323-3_5. ISBN 0-387-90997-4.
• von Brill, Alexander; Noether, Max (1874). "Ueber die algebraischen Functionen und ihre Anwendung in der Geometrie". Mathematische Annalen. 7 (2): 269–316. doi:10.1007/BF02104804. JFM 06.0251.01. S2CID 120777748. Retrieved 2009-08-22.
• Griffiths, Phillip; Harris, Joseph (1980). "On the variety of special linear systems on a general algebraic curve". Duke Mathematical Journal. 47 (1): 233–272. doi:10.1215/s0012-7094-80-04717-1. MR 0563378.
• Eduardo Casas-Alvero (2019). Algebraic Curves, the Brill and Noether way. Universitext. Springer. ISBN 9783030290153.
• Philip A. Griffiths; Joe Harris (1994). Principles of Algebraic Geometry. Wiley Classics Library. Wiley Interscience. p. 245. ISBN 978-0-471-05059-9.
Notes
1. Larson, Eric (2018-09-18). "The Maximal Rank Conjecture". arXiv:1711.04906 [math.AG].
2. Hartnett, Kevin (2018-09-05). "Tinkertoy Models Produce New Geometric Insights". Quanta Magazine. Retrieved 2022-08-28.
3. Larson, Eric; Vogt, Isabel (2022-05-05). "Interpolation for Brill--Noether curves". arXiv:2201.09445 [math.AG].
4. "Old Problem About Algebraic Curves Falls to Young Mathematicians". Quanta Magazine. 2022-08-25. Retrieved 2022-08-28.
Special number field sieve
In number theory, a branch of mathematics, the special number field sieve (SNFS) is a special-purpose integer factorization algorithm. The general number field sieve (GNFS) was derived from it.
The special number field sieve is efficient for integers of the form re ± s, where r and s are small (for instance Mersenne numbers).
Heuristically, its complexity for factoring an integer $n$ is of the form:[1]
$\exp \left(\left(1+o(1)\right)\left({\tfrac {32}{9}}\log n\right)^{1/3}\left(\log \log n\right)^{2/3}\right)=L_{n}\left[1/3,(32/9)^{1/3}\right]$
in O and L-notations.
The SNFS has been used extensively by NFSNet (a volunteer distributed computing effort), NFS@Home and others to factorise numbers of the Cunningham project; for some time the records for integer factorization have been numbers factored by SNFS.
Overview of method
The SNFS is based on an idea similar to the much simpler rational sieve; in particular, readers may find it helpful to read about the rational sieve first, before tackling the SNFS.
The SNFS works as follows. Let n be the integer we want to factor. As in the rational sieve, the SNFS can be broken into two steps:
• First, find a large number of multiplicative relations among a factor base of elements of Z/nZ, such that the number of multiplicative relations is larger than the number of elements in the factor base.
• Second, multiply together subsets of these relations in such a way that all the exponents are even, resulting in congruences of the form a2≡b2 (mod n). These in turn immediately lead to factorizations of n: n=gcd(a+b,n)×gcd(a-b,n). If done right, it is almost certain that at least one such factorization will be nontrivial.
The second step is identical to the case of the rational sieve, and is a straightforward linear algebra problem. The first step, however, is done in a different, more efficient way than the rational sieve, by utilizing number fields.
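The payoff of the second step can be illustrated with a toy congruence of squares (a sketch; the numbers here are ours and far too small for real SNFS use):

```python
from math import gcd

def split_from_congruence(a, b, n):
    """Given a^2 = b^2 (mod n), read off the candidate factorization
    n = gcd(a+b, n) * gcd(a-b, n), as in the second SNFS step."""
    assert (a * a - b * b) % n == 0
    return gcd(a + b, n), gcd(a - b, n)

# Toy example: 9^2 = 81 = 4 = 2^2 (mod 77), and the split is nontrivial:
p, q = split_from_congruence(9, 2, 77)
assert (p, q) == (11, 7)
assert p * q == 77
```

When the congruence is trivial (a ≡ ±b mod n), the same computation returns the trivial factors n and 1, which is why several independent relations are collected in practice.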
Details of method
Let n be the integer we want to factor. We pick an irreducible polynomial f with integer coefficients, and an integer m such that f(m)≡0 (mod n) (we will explain how they are chosen in the next section). Let α be a root of f; we can then form the ring Z[α]. There is a unique ring homomorphism φ from Z[α] to Z/nZ that maps α to m. For simplicity, we'll assume that Z[α] is a unique factorization domain; the algorithm can be modified to work when it isn't, but then there are some additional complications.
Next, we set up two parallel factor bases, one in Z[α] and one in Z. The one in Z[α] consists of all the prime ideals in Z[α] whose norm is bounded by a chosen value $N_{\max }$. The factor base in Z, as in the rational sieve case, consists of all prime integers up to some other bound.
We then search for relatively prime pairs of integers (a,b) such that:
• a+bm is smooth with respect to the factor base in Z (i.e., it is a product of elements in the factor base).
• a+bα is smooth with respect to the factor base in Z[α]; given how we chose the factor base, this is equivalent to the norm of a+bα being divisible only by primes less than $N_{\max }$.
These pairs are found through a sieving process, analogous to the Sieve of Eratosthenes; this motivates the name "Number Field Sieve".
For each such pair, we can apply the ring homomorphism φ to the factorization of a+bα, and we can apply the canonical ring homomorphism from Z to Z/nZ to the factorization of a+bm. Setting these equal gives a multiplicative relation among elements of a bigger factor base in Z/nZ, and if we find enough pairs we can proceed to combine the relations and factor n, as described above.
Choice of parameters
Not every number is an appropriate choice for the SNFS: you need to know in advance a polynomial f of appropriate degree (the optimal degree is conjectured to be $\left(3{\frac {\log N}{\log \log N}}\right)^{\frac {1}{3}}$, which is 4, 5, or 6 for the sizes of N currently feasible to factorise) with small coefficients, and a value x such that $f(x)\equiv 0{\pmod {N}}$ where N is the number to factorise. There is an extra condition: x must satisfy $ax+b\equiv 0{\pmod {N}}$ for a and b no bigger than $N^{1/d}$.
One set of numbers for which such polynomials exist are the $a^{b}\pm 1$ numbers from the Cunningham tables; for example, when NFSNET factored $3^{479}+1$, they used the polynomial $x^{6}+3$ with $x=3^{80}$, since $(3^{80})^{6}+3=3^{480}+3$, and $3^{480}+3\equiv 0{\pmod {3^{479}+1}}$.
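Python's arbitrary-precision integers make it easy to verify this example exactly:

```python
# Check the NFSNET example with exact integer arithmetic:
N = 3**479 + 1
x = 3**80
f = lambda t: t**6 + 3

assert f(x) == 3**480 + 3        # (3^80)^6 + 3 = 3^480 + 3
assert f(x) == 3 * N             # 3^480 + 3 = 3 * (3^479 + 1)
assert f(x) % N == 0             # so x is a root of f modulo N, as required
```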
Numbers defined by linear recurrences, such as the Fibonacci and Lucas numbers, also have SNFS polynomials, but these are a little more difficult to construct. For example, $F_{709}$ has polynomial $n^{5}+10n^{3}+10n^{2}+10n+3$, and the value of x satisfies $F_{142}x-F_{141}=0$.[2]
If you already know some factors of a large SNFS-number, you can do the SNFS calculation modulo the remaining part; for the NFSNET example above, $3^{479}+1=(4\times 158071\times 7167757\times 7759574882776161031)$ times a 197-digit composite number (the small factors were removed by ECM), and the SNFS was performed modulo the 197-digit number. The number of relations required by SNFS still depends on the size of the large number, but the individual calculations are quicker modulo the smaller number.
Limitations of algorithm
This algorithm, as mentioned above, is very efficient for numbers of the form re±s, for r and s relatively small. It is also efficient for any integers which can be represented as a polynomial with small coefficients. This includes integers of the more general form are±bsf, and also for many integers whose binary representation has low Hamming weight. The reason for this is as follows: The Number Field Sieve performs sieving in two different fields. The first field is usually the rationals. The second is a higher degree field. The efficiency of the algorithm strongly depends on the norms of certain elements in these fields. When an integer can be represented as a polynomial with small coefficients, the norms that arise are much smaller than those that arise when an integer is represented by a general polynomial. The reason is that a general polynomial will have much larger coefficients, and the norms will be correspondingly larger. The algorithm attempts to factor these norms over a fixed set of prime numbers. When the norms are smaller, these numbers are more likely to factor.
See also
• General number field sieve
References
1. Pomerance, Carl (December 1996), "A Tale of Two Sieves" (PDF), Notices of the AMS, vol. 43, no. 12, pp. 1473–1485
2. Franke, Jens. "Installation notes for ggnfs-lasieve4". MIT Massachusetts Institute of Technology.
Further reading
• Byrnes, Steven (May 18, 2005), "The Number Field Sieve" (PDF), Math 129
• Lenstra, A. K.; Lenstra, H. W., Jr.; Manasse, M. S. & Pollard, J. M. (1993), "The Factorization of the Ninth Fermat Number", Mathematics of Computation, 61 (203): 319–349, Bibcode:1993MaCom..61..319L, doi:10.1090/S0025-5718-1993-1182953-4{{citation}}: CS1 maint: multiple names: authors list (link)
• Lenstra, A. K.; Lenstra, H. W., Jr., eds. (1993), The Development of the Number Field Sieve, Lecture Notes in Mathematics, vol. 1554, New York: Springer-Verlag, ISBN 978-3-540-57013-4{{citation}}: CS1 maint: multiple names: editors list (link)
• Silverman, Robert D. (2007), "Optimal Parameterization of SNFS", Journal of Mathematical Cryptology, 1 (2): 105–124, CiteSeerX 10.1.1.12.2975, doi:10.1515/JMC.2007.007, S2CID 16236028
|
Wikipedia
|
Semimartingale
In probability theory, a real valued stochastic process X is called a semimartingale if it can be decomposed as the sum of a local martingale and a càdlàg adapted finite-variation process. Semimartingales are "good integrators", forming the largest class of processes with respect to which the Itô integral and the Stratonovich integral can be defined.
The class of semimartingales is quite large (including, for example, all continuously differentiable processes, Brownian motion and Poisson processes). Submartingales and supermartingales together represent a subset of the semimartingales.
Definition
A real valued process X defined on the filtered probability space (Ω,F,(Ft)t ≥ 0,P) is called a semimartingale if it can be decomposed as
$X_{t}=M_{t}+A_{t}$
where M is a local martingale and A is a càdlàg adapted process of locally bounded variation.
An Rn-valued process X = (X1,…,Xn) is a semimartingale if each of its components Xi is a semimartingale.
Alternative definition
First, the simple predictable processes are defined to be linear combinations of processes of the form Ht = A1{t > T} for stopping times T and FT -measurable random variables A. The integral H · X for any such simple predictable process H and real valued process X is
$H\cdot X_{t}\equiv 1_{\{t>T\}}A(X_{t}-X_{T}).$
This is extended to all simple predictable processes by the linearity of H · X in H.
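As a sketch (using a toy deterministic path, purely an assumption for illustration), the elementary integral above can be computed directly:

```python
# Elementary integral of the simple predictable process H_t = A * 1_{t > T}
# against a path X:  (H . X)_t = 1_{t > T} * A * (X_t - X_T).
def simple_integral(A, T, X, t):
    return A * (X(t) - X(T)) if t > T else 0.0

X = lambda t: t * t  # toy deterministic path, for illustration only
assert simple_integral(2.0, 1.0, X, 3.0) == 16.0  # 2 * (9 - 1)
assert simple_integral(2.0, 1.0, X, 0.5) == 0.0   # indicator vanishes for t <= T
```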
A real valued process X is a semimartingale if it is càdlàg, adapted, and for every t ≥ 0,
$\left\{H\cdot X_{t}:H{\rm {\ is\ simple\ predictable\ and\ }}|H|\leq 1\right\}$
is bounded in probability. The Bichteler–Dellacherie theorem states that these two definitions are equivalent (Protter 2004, p. 144).
Examples
• Adapted and continuously differentiable processes are continuous finite variation processes, and hence semimartingales.
• Brownian motion is a semimartingale.
• All càdlàg martingales, submartingales and supermartingales are semimartingales.
• Itō processes, which satisfy a stochastic differential equation of the form dX = σ dW + μ dt, are semimartingales. Here, W is a Brownian motion and σ, μ are adapted processes.
• Every Lévy process is a semimartingale.
Although most continuous and adapted processes studied in the literature are semimartingales, this is not always the case.
• Fractional Brownian motion with Hurst parameter H ≠ 1/2 is not a semimartingale.
Properties
• The semimartingales form the largest class of processes for which the Itō integral can be defined.
• Linear combinations of semimartingales are semimartingales.
• Products of semimartingales are semimartingales, which is a consequence of the integration by parts formula for the Itō integral.
• The quadratic variation exists for every semimartingale.
• The class of semimartingales is closed under optional stopping, localization, change of time and absolutely continuous change of measure.
• If X is an Rm valued semimartingale and f is a twice continuously differentiable function from Rm to Rn, then f(X) is a semimartingale. This is a consequence of Itō's lemma.
• The property of being a semimartingale is preserved under shrinking the filtration. More precisely, if X is a semimartingale with respect to the filtration Ft, and is adapted with respect to the subfiltration Gt, then X is a Gt-semimartingale.
• (Jacod's Countable Expansion) The property of being a semimartingale is preserved under enlarging the filtration by a countable set of disjoint sets. Suppose that Ft is a filtration, and Gt is the filtration generated by Ft and a countable set of disjoint measurable sets. Then, every Ft-semimartingale is also a Gt-semimartingale. (Protter 2004, p. 53)
Semimartingale decompositions
By definition, every semimartingale is a sum of a local martingale and a finite variation process. However, this decomposition is not unique.
Continuous semimartingales
A continuous semimartingale uniquely decomposes as X = M + A where M is a continuous local martingale and A is a continuous finite variation process starting at zero. (Rogers & Williams 1987, p. 358)
For example, if X is an Itō process satisfying the stochastic differential equation dXt = σt dWt + bt dt, then
$M_{t}=X_{0}+\int _{0}^{t}\sigma _{s}\,dW_{s},\ A_{t}=\int _{0}^{t}b_{s}\,ds.$
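A minimal numerical sketch of this decomposition (an Euler scheme with constant coefficients, an assumption made purely for illustration):

```python
# Euler-scheme sketch of X = X0 + M + A for dX_t = sigma dW_t + b dt:
# M accumulates the martingale increments sigma * dW, and A the
# finite-variation drift increments b * dt.
import random

random.seed(0)
sigma, b, dt, n = 0.5, 1.0, 0.001, 1000
X0, M, A = 0.0, 0.0, 0.0
X = X0
for _ in range(n):
    dW = random.gauss(0.0, dt**0.5)  # Brownian increment over [t, t + dt]
    M += sigma * dW                  # local-martingale part
    A += b * dt                      # continuous finite-variation part
    X = X0 + M + A                   # the semimartingale itself

assert abs(A - b * n * dt) < 1e-9    # with constant b, A_T = b * T is deterministic
```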
Special semimartingales
A special semimartingale is a real valued process $X$ with the decomposition $X=M^{X}+B^{X}$, where $M^{X}$ is a local martingale and $B^{X}$ is a predictable finite variation process starting at zero. If this decomposition exists, then it is unique up to a P-null set.
Every special semimartingale is a semimartingale. Conversely, a semimartingale is a special semimartingale if and only if the process Xt* ≡ sups ≤ t |Xs| is locally integrable (Protter 2004, p. 130).
For example, every continuous semimartingale is a special semimartingale, in which case M and A are both continuous processes.
Multiplicative decompositions
Recall that ${\mathcal {E}}(X)$ denotes the stochastic exponential of the semimartingale $X$. If $X$ is a special semimartingale such that $\Delta B^{X}\neq -1$, then ${\mathcal {E}}(B^{X})\neq 0$ and ${\mathcal {E}}(X)/{\mathcal {E}}(B^{X})={\mathcal {E}}\left(\int _{0}^{\cdot }{\frac {dM_{u}^{X}}{1+\Delta B_{u}^{X}}}\right)$ is a local martingale.[1] The process ${\mathcal {E}}(B^{X})$ is called the multiplicative compensator of ${\mathcal {E}}(X)$, and the identity ${\mathcal {E}}(X)={\mathcal {E}}\left(\int _{0}^{\cdot }{\frac {dM_{u}^{X}}{1+\Delta B_{u}^{X}}}\right){\mathcal {E}}(B^{X})$ is called the multiplicative decomposition of ${\mathcal {E}}(X)$.
Purely discontinuous semimartingales / quadratic pure-jump semimartingales
A semimartingale is called purely discontinuous (Kallenberg 2002) if its quadratic variation [X] is a finite variation pure-jump process, i.e.,
$[X]_{t}=\sum _{s\leq t}(\Delta X_{s})^{2}$.
By this definition, time (the deterministic process Xt = t) is a purely discontinuous semimartingale even though it exhibits no jumps at all. The alternative (and preferred) terminology quadratic pure-jump semimartingale (Protter 2004, p. 71) refers to the fact that the quadratic variation of a purely discontinuous semimartingale is a pure-jump process. Every finite variation semimartingale is a quadratic pure-jump semimartingale. An adapted continuous process is a quadratic pure-jump semimartingale if and only if it is of finite variation.
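A small sketch (with a hand-picked list of jump times rather than a genuine Poisson simulation): for a counting path with unit jumps, the displayed formula gives [X]_t equal to the number of jumps up to time t:

```python
# For a counting path with unit jumps, [X]_t = sum_{s <= t} (Delta X_s)^2
# is just the number of jumps up to t, since every Delta X_s equals 1.
jump_times = [0.5, 1.2, 2.7, 3.1, 4.9]  # illustrative jump times

def X(t):
    return sum(1 for s in jump_times if s <= t)

def quadratic_variation(t):
    return sum(1**2 for s in jump_times if s <= t)

assert quadratic_variation(3.0) == X(3.0) == 3
assert quadratic_variation(10.0) == X(10.0) == 5
```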
For every semimartingale X there is a unique continuous local martingale $X^{c}$ starting at zero such that $X-X^{c}$ is a quadratic pure-jump semimartingale (He, Wang & Yan 1992, p. 209; Kallenberg 2002, p. 527). The local martingale $X^{c}$ is called the continuous martingale part of X.
Observe that $X^{c}$ is measure-specific. If $P$ and $Q$ are two equivalent measures then $X^{c}(P)$ is typically different from $X^{c}(Q)$, while both $X-X^{c}(P)$ and $X-X^{c}(Q)$ are quadratic pure-jump semimartingales. By Girsanov's theorem $X^{c}(P)-X^{c}(Q)$ is a continuous finite variation process, yielding $[X^{c}(P)]=[X^{c}(Q)]=[X]-\sum _{s\leq \cdot }(\Delta X_{s})^{2}$.
Continuous-time and discrete-time components of a semimartingale
Every semimartingale $X$ has a unique decomposition
$X=X_{0}+X^{\mathrm {qc} }+X^{\mathrm {dp} },$
where $X_{0}^{\mathrm {qc} }=X_{0}^{\mathrm {dp} }=0$, the continuous-time component $X^{\mathrm {qc} }$ does not jump at predictable times, and the discrete-time component $X^{\mathrm {dp} }$ is equal to the sum of its jumps at predictable times in the semimartingale topology. One then has $[X^{\mathrm {qc} },X^{\mathrm {dp} }]=0$.[2] Typical examples of the continuous-time component are Itô processes and Lévy processes. The discrete-time component is often taken to be a Markov chain, but in general the predictable jump times may not be well ordered, i.e., in principle $X^{\mathrm {dp} }$ may jump at every rational time. Observe also that $X^{\mathrm {dp} }$ is not necessarily of finite variation, even though it is equal to the sum of its jumps (in the semimartingale topology). For example, on the time interval $[0,\infty )$ take $X^{\mathrm {dp} }$ to have independent increments, with jumps at times $\{\tau _{n}=2-1/n\}_{n\in \mathbb {N} }$ taking values $\pm 1/n$ with equal probability.
Semimartingales on a manifold
The concept of semimartingales, and the associated theory of stochastic calculus, extends to processes taking values in a differentiable manifold. A process X on the manifold M is a semimartingale if f(X) is a semimartingale for every smooth function f from M to R (Rogers & Williams 1987, p. 24). Stochastic calculus for semimartingales on general manifolds requires the use of the Stratonovich integral.
See also
• Sigma-martingale
References
1. Lépingle, Dominique; Mémin, Jean (1978). "Sur l'integrabilité uniforme des martingales exponentielles". Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete (in French). 42 (3). Proposition II.1. doi:10.1007/BF00641409. ISSN 0044-3719.
2. Černý, Aleš; Ruf, Johannes (2021-11-01). "Pure-jump semimartingales". Bernoulli. 27 (4): 2631. doi:10.3150/21-BEJ1325. ISSN 1350-7265.
• He, Sheng-wu; Wang, Jia-gang; Yan, Jia-an (1992), Semimartingale Theory and Stochastic Calculus, Science Press, CRC Press Inc., ISBN 0-8493-7715-3
• Kallenberg, Olav (2002), Foundations of Modern Probability (2nd ed.), Springer, ISBN 0-387-95313-2
• Protter, Philip E. (2004), Stochastic Integration and Differential Equations (2nd ed.), Springer, ISBN 3-540-00313-4
• Rogers, L.C.G.; Williams, David (1987), Diffusions, Markov Processes, and Martingales, vol. 2, John Wiley & Sons Ltd, ISBN 0-471-91482-7
• Karandikar, Rajeeva L.; Rao, B.V. (2018), Introduction to Stochastic Calculus, Springer Ltd, ISBN 978-981-10-8317-4
Special case
In logic, especially as applied in mathematics, concept A is a special case or specialization of concept B precisely if every instance of A is also an instance of B but not vice versa, or equivalently, if B is a generalization of A. A limiting case is a type of special case which is arrived at by taking some aspect of the concept to the extreme of what is permitted in the general case. A degenerate case is a special case which is in some way qualitatively different from almost all of the cases allowed.
Examples
Special case examples include the following:
• All squares are rectangles (but not all rectangles are squares); therefore the square is a special case of the rectangle.
• Fermat's Last Theorem, that an + bn = cn has no solutions in positive integers with n > 2, is a special case of Beal's conjecture, that ax + by = cz has no primitive solutions in positive integers with x, y, and z all greater than 2, specifically, the case of x = y = z.
Specialization (pre)order
In the branch of mathematics known as topology, the specialization (or canonical) preorder is a natural preorder on the set of the points of a topological space. For most spaces that are considered in practice, namely for all those that satisfy the T0 separation axiom, this preorder is even a partial order (called the specialization order). On the other hand, for T1 spaces the order becomes trivial and is of little interest.
The specialization order is often considered in applications in computer science, where T0 spaces occur in denotational semantics. The specialization order is also important for identifying suitable topologies on partially ordered sets, as is done in order theory.
Definition and motivation
Consider any topological space X. The specialization preorder ≤ on X relates two points of X when one lies in the closure of the other. However, various authors disagree on which 'direction' the order should go. What is agreed is that if
x is contained in cl{y},
(where cl{y} denotes the closure of the singleton set {y}, i.e. the intersection of all closed sets containing {y}), we say that x is a specialization of y and that y is a generalization of x; this is commonly written y ⤳ x.
Unfortunately, the property "x is a specialization of y" is alternatively written as "x ≤ y" and as "y ≤ x" by various authors (see, respectively, [1] and [2]).
Both definitions have intuitive justifications: in the case of the former, we have
x ≤ y if and only if cl{x} ⊆ cl{y}.
However, in the case where our space X is the prime spectrum Spec R of a commutative ring R (which is the motivational situation in applications related to algebraic geometry), then under our second definition of the order, we have
y ≤ x if and only if y ⊆ x as prime ideals of the ring R.
For the sake of consistency, for the remainder of this article we will take the first definition, that "x is a specialization of y" be written as x ≤ y. We then see,
x ≤ y if and only if x is contained in all closed sets that contain y.
x ≤ y if and only if y is contained in all open sets that contain x.
These restatements help to explain why one speaks of a "specialization": y is more general than x, since it is contained in more open sets. This is particularly intuitive if one views closed sets as properties that a point x may or may not have. The more closed sets contain a point, the more properties the point has, and the more special it is. The usage is consistent with the classical logical notions of genus and species; and also with the traditional use of generic points in algebraic geometry, in which closed points are the most specific, while a generic point of a space is one contained in every nonempty open subset. Specialization as an idea is applied also in valuation theory.
The intuition of upper elements being more specific is typically found in domain theory, a branch of order theory that has ample applications in computer science.
Upper and lower sets
Let X be a topological space and let ≤ be the specialization preorder on X. Every open set is an upper set with respect to ≤ and every closed set is a lower set. The converses are not generally true. In fact, a topological space is an Alexandrov-discrete space if and only if every upper set is also open (or equivalently every lower set is also closed).
Let A be a subset of X. The smallest upper set containing A is denoted ↑A and the smallest lower set containing A is denoted ↓A. In case A = {x} is a singleton one uses the notation ↑x and ↓x. For x ∈ X one has:
• ↑x = {y ∈ X : x ≤ y} = ∩{open sets containing x}.
• ↓x = {y ∈ X : y ≤ x} = ∩{closed sets containing x} = cl{x}.
The lower set ↓x is always closed; however, the upper set ↑x need not be open or closed. The closed points of a topological space X are precisely the minimal elements of X with respect to ≤.
Examples
• In the Sierpinski space {0,1} with open sets {∅, {1}, {0,1}} the specialization order is the natural one (0 ≤ 0, 0 ≤ 1, and 1 ≤ 1).
• If p, q are elements of Spec(R) (the spectrum of a commutative ring R) then p ≤ q if and only if q ⊆ p (as prime ideals). Thus the closed points of Spec(R) are precisely the maximal ideals.
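The restatement above ("x ≤ y if and only if y is contained in all open sets that contain x") makes the preorder directly computable on a finite space; a small sketch over the Sierpinski example:

```python
# x <= y in the specialization preorder iff every open set containing x
# also contains y (equivalently, x lies in the closure of {y}).
def specializes(x, y, opens):
    return all(y in O for O in opens if x in O)

# Sierpinski space {0, 1} with open sets {}, {1}, {0, 1}:
opens = [set(), {1}, {0, 1}]
order = {(x, y) for x in (0, 1) for y in (0, 1) if specializes(x, y, opens)}
assert order == {(0, 0), (0, 1), (1, 1)}  # 0 <= 0, 0 <= 1, 1 <= 1
```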
Important properties
As suggested by the name, the specialization preorder is a preorder, i.e. it is reflexive and transitive.
The equivalence relation determined by the specialization preorder is just that of topological indistinguishability. That is, x and y are topologically indistinguishable if and only if x ≤ y and y ≤ x. Therefore, the antisymmetry of ≤ is precisely the T0 separation axiom: if x and y are indistinguishable then x = y. In this case it is justified to speak of the specialization order.
On the other hand, the symmetry of specialization preorder is equivalent to the R0 separation axiom: x ≤ y if and only if x and y are topologically indistinguishable. It follows that if the underlying topology is T1, then the specialization order is discrete, i.e. one has x ≤ y if and only if x = y. Hence, the specialization order is of little interest for T1 topologies, especially for all Hausdorff spaces.
Any continuous function $f$ between two topological spaces is monotone with respect to the specialization preorders of these spaces: $x\leq y$ implies $f(x)\leq f(y).$ The converse, however, is not true in general. In the language of category theory, we then have a functor from the category of topological spaces to the category of preordered sets that assigns a topological space its specialization preorder. This functor has a left adjoint, which places the Alexandrov topology on a preordered set.
There are spaces that are more specific than T0 spaces for which this order is interesting: the sober spaces. Their relationship to the specialization order is more subtle:
For any sober space X with specialization order ≤, we have
• (X, ≤) is a directed complete partial order, i.e. every directed subset S of (X, ≤) has a supremum sup S,
• for every directed subset S of (X, ≤) and every open set O, if sup S is in O, then S and O have non-empty intersection.
One may describe the second property by saying that open sets are inaccessible by directed suprema. A topology is order consistent with respect to a certain order ≤ if it induces ≤ as its specialization order and it has the above property of inaccessibility with respect to (existing) suprema of directed sets in ≤.
Topologies on orders
The specialization order yields a tool to obtain a preorder from every topology. It is natural to ask for the converse too: Is every preorder obtained as a specialization preorder of some topology?
Indeed, the answer to this question is positive and there are in general many topologies on a set X that induce a given order ≤ as their specialization order. The Alexandrov topology of the order ≤ plays a special role: it is the finest topology that induces ≤. The other extreme, the coarsest topology that induces ≤, is the upper topology, the least topology within which all complements of sets ↓x (for some x in X) are open.
There are also interesting topologies in between these two extremes. The finest sober topology that is order consistent in the above sense for a given order ≤ is the Scott topology. The upper topology however is still the coarsest sober order-consistent topology. In fact, its open sets are even inaccessible by any suprema. Hence any sober space with specialization order ≤ is finer than the upper topology and coarser than the Scott topology. Yet, such a space may fail to exist, that is, there exist partial orders for which there is no sober order-consistent topology. Especially, the Scott topology is not necessarily sober.
References
• M.M. Bonsangue, Topological Duality in Semantics, volume 8 of Electronic Notes in Theoretical Computer Science, 1998. Revised version of author's Ph.D. thesis. Available online, see especially Chapter 5, that explains the motivations from the viewpoint of denotational semantics in computer science. See also the author's homepage.
1. Hartshorne, Robin (1977), Algebraic geometry, New York-Heidelberg: Springer-Verlag
2. Hochster, Melvin (1969), Prime ideal structure in commutative rings (PDF), vol. 142, Trans. Amer. Math. Soc., pp. 43–60
Baer–Specker group
In mathematics, in the field of group theory, the Baer–Specker group, or Specker group, named after Reinhold Baer and Ernst Specker, is an example of an infinite Abelian group which is a building block in the structure theory of such groups.
Definition
The Baer–Specker group is the group B = ZN of all integer sequences with componentwise addition, that is, the direct product of countably many copies of Z. It can equivalently be described as the additive group of formal power series with integer coefficients.
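A minimal sketch of the group operation, encoding a sequence by finitely many of its coordinates (a dict from index to integer; the encoding is purely illustrative, since a genuine element of B has infinitely many coordinates):

```python
# Componentwise addition in B = Z^N, with a sequence encoded as a dict
# index -> integer (absent indices are treated as 0).
def add(f, g):
    return {i: f.get(i, 0) + g.get(i, 0) for i in set(f) | set(g)}

x = {0: 1, 1: -2, 5: 7}
y = {1: 2, 2: 3}
assert add(x, y) == {0: 1, 1: 0, 2: 3, 5: 7}
```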
Properties
Reinhold Baer proved in 1937 that this group is not free abelian; Specker proved in 1950 that every countable subgroup of B is free abelian.
The group of homomorphisms from the Baer–Specker group to a free abelian group of finite rank is a free abelian group of countable rank. This provides another proof that the group is not free.[1]
See also
• Slender group
Notes
1. Blass & Göbel (1996) attribute this result to Specker (1950). They write it in the form $P^{*}\cong S$ where $P$ denotes the Baer-Specker group, the star operator gives the dual group of homomorphisms to $\mathbb {Z} $, and $S$ is the free abelian group of countable rank. They continue, "It follows that $P$ has no direct summand isomorphic to $S$", from which an immediate consequence is that $P$ is not free abelian.
References
• Baer, Reinhold (1937), "Abelian groups without elements of finite order", Duke Mathematical Journal, 3 (1): 68–122, doi:10.1215/S0012-7094-37-00308-9, hdl:10338.dmlcz/100591, MR 1545974.
• Blass, Andreas; Göbel, Rüdiger (1996), "Subgroups of the Baer-Specker group with few endomorphisms but large dual", Fundamenta Mathematicae, 149 (1): 19–29, arXiv:math/9405206, Bibcode:1994math......5206B, MR 1372355.
• Specker, Ernst (1950), "Additive Gruppen von Folgen ganzer Zahlen", Portugaliae Mathematica, 9: 131–140, MR 0039719.
• Griffith, Phillip A. (1970), Infinite Abelian group theory, Chicago Lectures in Mathematics, University of Chicago Press, pp. 1, 111–112, ISBN 0-226-30870-7.
• Cornelius, E. F., Jr. (2009), "Endomorphisms and product bases of the Baer-Specker group", Int'l J Math and Math Sciences, 2009, article 396475, https://www.hindawi.com/journals/ijmms/
External links
• Stefan Schröer, Baer's Result: The Infinite Product of the Integers Has No Basis
Specker sequence
In computability theory, a Specker sequence is a computable, monotonically increasing, bounded sequence of rational numbers whose supremum is not a computable real number. The first example of such a sequence was constructed by Ernst Specker (1949).
The existence of Specker sequences has consequences for computable analysis. The fact that such sequences exist means that the collection of all computable real numbers does not satisfy the least upper bound principle of real analysis, even when considering only computable sequences. A common way to resolve this difficulty is to consider only sequences that are accompanied by a modulus of convergence; no Specker sequence has a computable modulus of convergence. More generally, a Specker sequence is called a recursive counterexample to the least upper bound principle, i.e. a construction that shows that this theorem is false when restricted to computable reals.
The least upper bound principle has also been analyzed in the program of reverse mathematics, where the exact strength of this principle has been determined. In the terminology of that program, the least upper bound principle is equivalent to ACA0 over RCA0. In fact, the proof of the forward implication, i.e. that the least upper bound principle implies ACA0, is readily obtained from the textbook proof (see Simpson, 1999) of the non-computability of the supremum in the least upper bound principle.
Construction
The following construction is described by Kushner (1984). Let A be any recursively enumerable set of natural numbers that is not decidable, and let (ai) be a computable enumeration of A without repetition. Define a sequence (qn) of rational numbers with the rule
$q_{n}=\sum _{i=0}^{n}2^{-a_{i}-1}.$
It is clear that each qn is nonnegative and rational, and that the sequence qn is strictly increasing. Moreover, because ai has no repetition, it is possible to estimate each qn against the series
$\sum _{i=0}^{\infty }2^{-i-1}=1.$
Thus the sequence (qn) is bounded above by 1. Classically, this means that qn has a supremum x.
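The construction can be sketched numerically. The toy enumeration below is decidable and is used only to illustrate the formula; a genuine Specker sequence needs (ai) to enumerate a recursively enumerable set that is not decidable:

```python
# q_n = sum_{i=0}^{n} 2^(-a_i - 1) for an enumeration (a_i) without
# repetition; the sequence is strictly increasing and bounded above by 1.
from fractions import Fraction

a = [0, 3, 1, 5, 2]  # toy enumeration (decidable, illustration only)
q, total = [], Fraction(0)
for a_i in a:
    total += Fraction(1, 2**(a_i + 1))
    q.append(total)

assert all(q[i] < q[i + 1] for i in range(len(q) - 1))  # strictly increasing
assert q[-1] < 1                                        # bounded above by 1
```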
It is shown that x is not a computable real number. The proof uses a particular fact about computable real numbers. If x were computable then there would be a computable function r(n) such that |qj - qi| < 1/n for all i,j > r(n). To compute r, compare the binary expansion of x with the binary expansion of qi for larger and larger values of i. The definition of qi causes a single binary digit to go from 0 to 1 each time i increases by 1. Thus there will be some n where a large enough initial segment of x has already been determined by qn that no additional binary digits in that segment could ever be turned on, which leads to an estimate on the distance between qi and qj for i,j > n.
If any such function r were computable, it would lead to a decision procedure for A, as follows. Given an input k, compute r(2^(k+1)). If k were to appear in the sequence (ai), this would cause the sequence (qi) to increase by 2^(−k−1), but this cannot happen once all the elements of (qi) are within 2^(−k−1) of each other. Thus, if k is going to be enumerated into ai, it must be enumerated for a value of i less than r(2^(k+1)). This leaves a finite number of possible places where k could be enumerated. To complete the decision procedure, check these in an effective manner and then return 0 or 1 depending on whether k is found.
See also
• Computation in the limit
References
• Douglas Bridges and Fred Richman. Varieties of Constructive Mathematics, Oxford, 1987.
• B.A. Kushner (1984), Lectures on constructive mathematical analysis, American Mathematical Society, Translations of Mathematical Monographs v. 60.
• Jakob G. Simonsen (2005), "Specker sequences revisited", Mathematical Logic Quarterly, v. 51, pp. 532–540. doi:10.1002/malq.200410048
• S. Simpson (1999), Subsystems of second-order arithmetic, Springer.
• E. Specker (1949), "Nicht konstruktiv beweisbare Sätze der Analysis" Journal of Symbolic Logic, v. 14, pp. 145–158.
Spectra (mathematical association)
Spectra is a professional association of LGBTQIA+ mathematicians. It arose from a need for recognition and community for Gender and Sexual Minority mathematicians.
History
Spectra has its roots in meetups arranged at the Joint Mathematics Meetings (JMM) and a mailing list organized by Ron Buckmire.[1] It arose in reaction to the JMM being scheduled for Denver in 1995 after Colorado voters approved the anti-gay Amendment 2 in 1992. The association's name was coined by Robert Bryant and Mike Hill and references the mathematical concept of a spectrum as well as the rainbow flag. Its first official activity was a panel at the 2015 JMM with the title "Out in Mathematics: LGBTQ Mathematicians in the Workplace."[2][3]
Out and Ally Lists
Spectra maintains a list of mathematicians who are out in the LGBTQIA+ community as well as a list of allies for the community.[4] The goal of the list is to serve as support for mathematicians at various places on their own LGBTQ+ journeys and also as a resource for people looking to learn more about the climate at various universities. Each mathematician on the list has a profile with, e.g., name, position, location, pronouns, and contact preferences. Any identity words or pronouns listed were given by the person being profiled when they completed an online form to be listed.
Activism and Engagement
Spectra's origins are steeped in activism, and the organization continues to advocate on behalf of LGBTQIA+ mathematicians. For instance, the association is working with scholarly publishers on how transgender mathematicians' names are handled in published works.[5]
Spectra also continues to work toward increased visibility of LGBTQIA+ mathematicians through conferences, workshops, panel discussions, and designated lectures.[6]
Board
The Spectra board includes many prominent mathematicians, including Bryant, Buckmire, Moon Duchin, Hill, Doug Lind, and Emily Riehl.[7]
References
1. Farris, Frank A. (February–March 2019). "LGBT Math - Out of the Closet". MAA Focus: 38.
2. Bryant, Robert; Buckmire, Ron; Khadjavi, Lily; Lind, Doug (June–July 2019). "The Origins of Spectra, an Organization for LGBT Mathematicians" (PDF). Notices of the American Mathematical Society. 66 (6): 875–882. doi:10.1090/noti1890. S2CID 197476698.
3. "Spectra: History". lgbtmath.org. Retrieved 2022-04-06.
4. "Spectra: Out and Ally Lists". lgbtmath.org. Retrieved 2022-04-06.
5. "Get Involved!". lgbtmath.org. Retrieved 2022-04-06.
6. "Spectra Events". lgbtmath.org. Retrieved 2022-04-06.
7. "Spectra Board".
Spectrahedron
In convex geometry, a spectrahedron is a shape that can be represented as a linear matrix inequality. Alternatively, the set of n × n positive semidefinite matrices forms a convex cone in Rn × n, and a spectrahedron is a shape that can be formed by intersecting this cone with an affine subspace.
Spectrahedra are the feasible regions of semidefinite programs.[1] The images of spectrahedra under linear or affine transformations are called projected spectrahedra or spectrahedral shadows. Every spectrahedral shadow is a convex set that is also semialgebraic, but the converse (conjectured to be true until 2017) is false.[2]
An example of a spectrahedron is the spectraplex, defined as
$\mathrm {Spect} _{n}=\{X\in \mathbf {S} _{+}^{n}\mid \operatorname {Tr} (X)=1\}$,
where $\mathbf {S} _{+}^{n}$ is the set of n × n positive semidefinite matrices and $\operatorname {Tr} (X)$ is the trace of the matrix $X$.[3] The spectraplex is a compact set, and can be thought of as the "semidefinite" analog of the simplex.
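Membership in the spectraplex can be checked numerically: a matrix belongs to it exactly when it is symmetric positive semidefinite with unit trace. A minimal NumPy sketch (the function name and tolerance are illustrative choices, not part of the definition):

```python
import numpy as np

def in_spectraplex(X, tol=1e-10):
    """Check membership in Spect_n: symmetric PSD with trace 1.
    (The tolerance is an implementation choice, not part of the definition.)"""
    X = np.asarray(X, dtype=float)
    if not np.allclose(X, X.T, atol=tol):
        return False
    return np.linalg.eigvalsh(X).min() >= -tol and abs(np.trace(X) - 1.0) <= tol

n = 3
assert in_spectraplex(np.eye(n) / n)                  # analog of the simplex barycenter
assert not in_spectraplex(np.diag([2.0, -1.0, 0.0]))  # trace 1 but not PSD
```

The matrix I/n plays the role of the barycenter of the simplex; the second example shows that the trace condition alone is not enough.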
See also
• N-ellipse - a special case of spectrahedra.
References
1. Ramana, Motakuri; Goldman, A. J. (1995), "Some geometric results in semidefinite programming", Journal of Global Optimization, 7 (1): 33–50, CiteSeerX 10.1.1.44.1804, doi:10.1007/BF01100204.
2. Scheiderer, C. (2018-01-01). "Spectrahedral Shadows". SIAM Journal on Applied Algebra and Geometry. 2: 26–44. doi:10.1137/17m1118981.
3. Gärtner, Bernd; Matousek, Jiri (2012). Approximation Algorithms and Semidefinite Programming. Springer Science and Business Media. pp. 76. ISBN 978-3642220159.
Spectral abscissa
In mathematics, the spectral abscissa of a matrix or a bounded linear operator is the greatest real part of the matrix's spectrum (its set of eigenvalues).[1] It is sometimes denoted $\alpha (A)$. As a transformation $\alpha :\mathrm {M} ^{n}\rightarrow \mathbb {R} ,$ the spectral abscissa maps a square matrix to the greatest real part of its eigenvalues.[2]
Matrices
Let λ1, ..., λs be the (real or complex) eigenvalues of a matrix A ∈ Cn × n. Then its spectral abscissa is defined as:
$\alpha (A)=\max _{i}\{\operatorname {Re} (\lambda _{i})\}\,$
In stability theory, a continuous system represented by matrix $A$ is said to be stable if all real parts of its eigenvalues are negative, i.e. $\alpha (A)<0$.[3] Analogously, in control theory, the solution to the differential equation ${\dot {x}}=Ax$ is stable under the same condition $\alpha (A)<0$.[2]
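A short numerical sketch of the definition and the stability criterion, using NumPy (the example matrix is chosen for illustration; its eigenvalues are −1 ± 2i):

```python
import numpy as np

def spectral_abscissa(A):
    """alpha(A): the greatest real part among the eigenvalues of A."""
    return np.linalg.eigvals(A).real.max()

# Illustrative matrix with eigenvalues -1 +/- 2i, so alpha(A) = -1.
A = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])
assert abs(spectral_abscissa(A) + 1.0) < 1e-12
# Since alpha(A) < 0, the system x' = Ax is stable.
```

Note that the eigenvalues here are complex, which is why the definition takes real parts rather than a "largest real eigenvalue".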
See also
• Spectral radius
References
1. Deutsch, Emeric (1975). "The Spectral Abscissa of Partitioned Matrices" (PDF). Journal of Mathematical Analysis and Applications. 50: 66–73 – via CORE.
2. Burke, J. V.; Lewis, A. S.; Overton, M. L. "OPTIMIZING MATRIX STABILITY" (PDF). Proceedings of the American Mathematical Society. 129 (3): 1635–1642.
3. Burke, James V.; Overton, Michael L. (1994). "DIFFERENTIAL PROPERTIES OF THE SPECTRAL ABSCISSA AND THE SPECTRAL RADIUS FOR ANALYTIC MATRIX-VALUED MAPPINGS" (PDF). Nonlinear Analysis, Theory, Methods & Applications. 23 (4): 467–488 – via Pergamon.
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Spectral theory and *-algebras
Basic concepts
• Involution/*-algebra
• Banach algebra
• B*-algebra
• C*-algebra
• Noncommutative topology
• Projection-valued measure
• Spectrum
• Spectrum of a C*-algebra
• Spectral radius
• Operator space
Main results
• Gelfand–Mazur theorem
• Gelfand–Naimark theorem
• Gelfand representation
• Polar decomposition
• Singular value decomposition
• Spectral theorem
• Spectral theory of normal C*-algebras
Special Elements/Operators
• Isospectral
• Normal operator
• Hermitian/Self-adjoint operator
• Unitary operator
• Unit
Spectrum
• Krein–Rutman theorem
• Normal eigenvalue
• Spectrum of a C*-algebra
• Spectral radius
• Spectral asymmetry
• Spectral gap
Decomposition
• Decomposition of a spectrum
• Continuous
• Point
• Residual
• Approximate point
• Compression
• Direct integral
• Discrete
• Spectral abscissa
Spectral Theorem
• Borel functional calculus
• Min-max theorem
• Positive operator-valued measure
• Projection-valued measure
• Riesz projector
• Rigged Hilbert space
• Spectral theorem
• Spectral theory of compact operators
• Spectral theory of normal C*-algebras
Special algebras
• Amenable Banach algebra
• With an Approximate identity
• Banach function algebra
• Disk algebra
• Nuclear C*-algebra
• Uniform algebra
• Von Neumann algebra
• Tomita–Takesaki theory
Finite-Dimensional
• Alon–Boppana bound
• Bauer–Fike theorem
• Numerical range
• Schur–Horn theorem
Generalizations
• Dirac spectrum
• Essential spectrum
• Pseudospectrum
• Structure space (Shilov boundary)
Miscellaneous
• Abstract index group
• Banach algebra cohomology
• Cohen–Hewitt factorization theorem
• Extensions of symmetric operators
• Fredholm theory
• Limiting absorption principle
• Schröder–Bernstein theorems for operator algebras
• Sherman–Takeda theorem
• Unbounded operator
Examples
• Wiener algebra
Applications
• Almost Mathieu operator
• Corona theorem
• Hearing the shape of a drum (Dirichlet eigenvalue)
• Heat kernel
• Kuznetsov trace formula
• Lax pair
• Proto-value function
• Ramanujan graph
• Rayleigh–Faber–Krahn inequality
• Spectral geometry
• Spectral method
• Spectral theory of ordinary differential equations
• Sturm–Liouville theory
• Superstrong approximation
• Transfer operator
• Transform theory
• Weyl law
• Wiener–Khinchin theorem
Spectral element method
In the numerical solution of partial differential equations, a topic in mathematics, the spectral element method (SEM) is a formulation of the finite element method (FEM) that uses high degree piecewise polynomials as basis functions. The spectral element method was introduced in a 1984 paper[1] by A. T. Patera. Although Patera is credited with development of the method, his work was a rediscovery of an existing method (see Development History).
Discussion
The spectral method expands the solution in trigonometric series, a chief advantage being that the resulting method is of very high order. This approach relies on the fact that trigonometric polynomials are an orthonormal basis for $L^{2}(\Omega )$.[2] The spectral element method instead chooses high degree piecewise polynomial basis functions, also achieving a very high order of accuracy. Such polynomials are usually orthogonal Chebyshev polynomials or very high order Lagrange polynomials over non-uniformly spaced nodes. In SEM the computational error decreases exponentially as the order of the approximating polynomial increases, so fast convergence to the exact solution is realized with fewer degrees of freedom of the structure in comparison with FEM. In structural health monitoring, FEM can be used for detecting large flaws in a structure, but as the size of the flaw is reduced there is a need to use a high-frequency wave. Simulating the propagation of a high-frequency wave requires a very fine FEM mesh, resulting in increased computational time. SEM, on the other hand, provides good accuracy with fewer degrees of freedom. Non-uniformity of nodes helps to make the mass matrix diagonal, which saves time and memory and is also useful for adopting a central difference method (CDM). The disadvantages of SEM include difficulty in modeling complex geometry, compared to the flexibility of FEM.
Although the method can be applied with a modal piecewise orthogonal polynomial basis, it is most often implemented with a nodal tensor product Lagrange basis.[3] The method gains its efficiency by placing the nodal points at the Legendre-Gauss-Lobatto (LGL) points and performing the Galerkin method integrations with a reduced Gauss-Lobatto quadrature using the same nodes. With this combination, simplifications result such that mass lumping occurs at all nodes and a collocation procedure results at interior points.
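The Legendre-Gauss-Lobatto nodes and quadrature weights mentioned above can be computed directly: the nodes are the endpoints ±1 together with the roots of P_N′, and the standard weights are w_i = 2/(N(N+1)·P_N(x_i)²). A sketch using NumPy's Legendre-series utilities (the function name is illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes_weights(N):
    """Legendre-Gauss-Lobatto nodes and weights on [-1, 1]:
    the endpoints plus the roots of P_N', with
    w_i = 2 / (N (N + 1) P_N(x_i)^2)."""
    PN = legendre.Legendre.basis(N)
    interior = np.sort(PN.deriv().roots().real)
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2.0 / (N * (N + 1) * PN(x) ** 2)
    return x, w

x, w = lgl_nodes_weights(4)
assert abs(w.sum() - 2.0) < 1e-9                 # integrates constants exactly
assert abs((w * x**2).sum() - 2.0 / 3.0) < 1e-9  # exact up to degree 2N - 1
```

Using the same LGL nodes for both interpolation and quadrature is what produces the diagonal (lumped) mass matrix noted above.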
The most popular applications of the method are in computational fluid dynamics[3] and modeling seismic wave propagation.[4]
A-priori error estimate
The classic analysis of Galerkin methods and Céa's lemma holds here, and it can be shown that, if $u$ is the solution of the weak equation, $u_{N}$ is the approximate solution, and $u\in H^{s+1}(\Omega )$:
$\|u-u_{N}\|_{H^{1}(\Omega )}\leqq C_{s}N^{-s}\|u\|_{H^{s+1}(\Omega )}$
where $N$ is related to the discretization of the domain (i.e., element length), $C_{s}$ is independent of $N$, and $s$ is no larger than the degree of the piecewise polynomial basis. Similar results can be obtained to bound the error in stronger topologies. If $k\leq s+1$, then
$\|u-u_{N}\|_{H^{k}(\Omega )}\leq C_{s,k}N^{k-1-s}\|u\|_{H^{s+1}(\Omega )}$
As we increase $N$, we can also increase the degree of the basis functions. In this case, if $u$ is an analytic function:
$\|u-u_{N}\|_{H^{1}(\Omega )}\leqq C\exp(-\gamma N)$
where $\gamma $ depends only on $u$.
The Hybrid-Collocation-Galerkin possesses some superconvergence properties.[5] The LGL form of SEM is equivalent,[6] so it achieves the same superconvergence properties.
Development History
Development of the most popular LGL form of the method is normally attributed to Maday and Patera.[7] However, it was developed more than a decade earlier. First, there is the Hybrid-Collocation-Galerkin method (HCGM),[8][5] which applies collocation at the interior Lobatto points and uses a Galerkin-like integral procedure at element interfaces. The Lobatto-Galerkin method described by Young[9] is identical to SEM, while the HCGM is equivalent to these methods.[6] This earlier work is ignored in the spectral literature.
Related methods
• G-NI or SEM-NI are the most used spectral methods. The Galerkin formulation of spectral methods or spectral element methods, for G-NI or SEM-NI respectively, is modified and Gauss-Lobatto integration is used instead of integrals in the definition of the bilinear form $a(\cdot ,\cdot )$ and in the functional $F$. Their convergence is a consequence of Strang's lemma.
• SEM is a Galerkin based FEM (finite element method) with Lagrange basis (shape) functions and reduced numerical integration by Lobatto quadrature using the same nodes.
• The pseudospectral method, orthogonal collocation, differential quadrature method, and G-NI are different names for the same method. These methods employ global rather than piecewise polynomial basis functions. The extension to a piecewise FEM or SEM basis is almost trivial.[6]
• The spectral element method uses a tensor product space spanned by nodal basis functions associated with Gauss–Lobatto points. In contrast, the p-version finite element method spans a space of high order polynomials by nodeless basis functions, chosen approximately orthogonal for numerical stability. Since not all interior basis functions need to be present, the p-version finite element method can create a space that contains all polynomials up to a given degree with fewer degrees of freedom.[10] However, some speedup techniques possible in spectral methods due to their tensor-product character are no longer available. The name p-version means that accuracy is increased by increasing the order of the approximating polynomials (thus, p) rather than decreasing the mesh size, h.
• The hp finite element method (hp-FEM) combines the advantages of the h and p refinements to obtain exponential convergence rates.[11]
Notes
1. Patera, A. T. (1984). "A spectral element method for fluid dynamics - Laminar flow in a channel expansion". Journal of Computational Physics. 54 (3): 468–488. Bibcode:1984JCoPh..54..468P. doi:10.1016/0021-9991(84)90128-1.
2. Muradova, Aliki D. (2008). "The spectral method and numerical continuation algorithm for the von Kármán problem with postbuckling behaviour of solutions". Adv Comput Math. 29 (2): 179–206, 2008. doi:10.1007/s10444-007-9050-7. hdl:1885/56758. S2CID 46564029.
3. Karniadakis, G. and Sherwin, S.: Spectral/hp Element Methods for Computational Fluid Dynamics, Oxford Univ. Press, (2013), ISBN 9780199671366
4. Komatitsch, D. and Villote, J.-P.: “The Spectral Element Method: An Efficient Tool to Simulate the Seismic Response of 2D and 3D Geologic Structures,” Bull. Seismological Soc. America, 88, 2, 368-392 (1998)
5. Wheeler, M.F.: “A C0-Collocation-Finite Element Method for Two-Point Boundary Value and One Space Dimension Parabolic Problems,” SIAM J. Numer. Anal., 14, 1, 71-90 (1977)
6. Young, L.C., “Orthogonal Collocation Revisited,” Comp. Methods in Appl. Mech. and Engr. 345 (1) 1033-1076 (Mar. 2019), doi.org/10.1016/j.cma.2018.10.019
7. Maday, Y. and Patera, A. T., “Spectral Element Methods for the Incompressible Navier-Stokes Equations” In State-of-the-Art Surveys on Computational Mechanics, A.K. Noor, editor, ASME, New York (1989).
8. Diaz, J., “A Collocation-Galerkin Method for the Two-point Boundary Value Problem Using Continuous Piecewise Polynomial Spaces,” SIAM J. Num. Anal., 14 (5) 844-858 (1977) ISSN 0036-1429
9. Young, L.C., “A Finite-Element Method for Reservoir Simulation,” Soc. Petr. Engrs. J. 21(1) 115-128, (Feb. 1981), paper SPE 7413 presented Oct. 1978, doi.org/10.2118/7413-PA
10. Barna Szabó and Ivo Babuška, Finite element analysis, John Wiley & Sons, Inc., New York, 1991. ISBN 0-471-50273-1
11. P. Šolín, K. Segeth, I. Doležel: Higher-order finite element methods, Chapman & Hall/CRC Press, 2003. ISBN 1-58488-438-X
Numerical methods for partial differential equations
Finite difference
Parabolic
• Forward-time central-space (FTCS)
• Crank–Nicolson
Hyperbolic
• Lax–Friedrichs
• Lax–Wendroff
• MacCormack
• Upwind
• Method of characteristics
Others
• Alternating direction-implicit (ADI)
• Finite-difference time-domain (FDTD)
Finite volume
• Godunov
• High-resolution
• Monotonic upstream-centered (MUSCL)
• Advection upstream-splitting (AUSM)
• Riemann solver
• Essentially non-oscillatory (ENO)
• Weighted essentially non-oscillatory (WENO)
Finite element
• hp-FEM
• Extended (XFEM)
• Discontinuous Galerkin (DG)
• Spectral element (SEM)
• Mortar
• Gradient discretisation (GDM)
• Loubignac iteration
• Smoothed (S-FEM)
Meshless/Meshfree
• Smoothed-particle hydrodynamics (SPH)
• Peridynamics (PD)
• Moving particle semi-implicit method (MPS)
• Material point method (MPM)
• Particle-in-cell (PIC)
Domain decomposition
• Schur complement
• Fictitious domain
• Schwarz alternating
• additive
• abstract additive
• Neumann–Dirichlet
• Neumann–Neumann
• Poincaré–Steklov operator
• Balancing (BDD)
• Balancing by constraints (BDDC)
• Tearing and interconnect (FETI)
• FETI-DP
Others
• Spectral
• Pseudospectral (DVR)
• Method of lines
• Multigrid
• Collocation
• Level-set
• Boundary element
• Method of moments
• Immersed boundary
• Analytic element
• Isogeometric analysis
• Infinite difference method
• Infinite element method
• Galerkin method
• Petrov–Galerkin method
• Validated numerics
• Computer-assisted proof
• Integrable algorithm
• Method of fundamental solutions
Spectral expansion solution
In probability theory, the spectral expansion solution method is a technique for computing the stationary probability distribution of a continuous-time Markov chain whose state space is a semi-infinite lattice strip.[1] For example, an M/M/c queue where service nodes can breakdown and be repaired has a two-dimensional state space where one dimension has a finite limit and the other is unbounded. The stationary distribution vector is expressed directly (not as a transform) in terms of eigenvalues and eigenvectors of a matrix polynomial.[2][3]
References
1. Chakka, R. (1998). "Spectral expansion solution for some finite capacity queues". Annals of Operations Research. 79: 27–44. doi:10.1023/A:1018974722301.
2. Mitrani, I.; Chakka, R. (1995). "Spectral expansion solution for a class of Markov models: Application and comparison with the matrix-geometric method". Performance Evaluation. 23 (3): 241. doi:10.1016/0166-5316(94)00025-F.
3. Daigle, J.; Lucantoni, D. (1991). "Queueing systems having phase-dependent arrival and service rates". In Stewart, William J. (ed.). Numerical Solutions of Markov Chains. pp. 161–202. ISBN 9780824784058.
Spectral gap
In mathematics, the spectral gap is the difference between the moduli of the two largest eigenvalues of a matrix or operator; alternatively, it is sometimes taken as the smallest non-zero eigenvalue. Various theorems relate this difference to other properties of the system.
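A minimal numerical sketch of the first definition, using NumPy. The example matrix is illustrative: for the symmetric transition matrix of a two-state Markov chain with switching probability p, the eigenvalues are 1 and 1 − 2p, so the gap is 2p.

```python
import numpy as np

def spectral_gap(A):
    """Difference between the moduli of the two largest eigenvalues of A."""
    mods = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    return mods[0] - mods[1]

# Two-state symmetric Markov chain with switching probability p:
# eigenvalues are 1 and 1 - 2p, so the gap is 2p.
p = 0.3
P = np.array([[1 - p, p],
              [p, 1 - p]])
assert abs(spectral_gap(P) - 2 * p) < 1e-12
```

For such chains a larger spectral gap corresponds to faster mixing, one of the "other properties" the gap controls.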
See also
• Cheeger constant (graph theory)
• Cheeger constant (Riemannian geometry)
• Eigengap
• Spectral gap (physics)
• Spectral radius
External links
• "Impossible-Seeming Surfaces Confirmed Decades After Conjecture". Quanta Magazine. 2022-06-02.
Spectral geometry
Spectral geometry is a field in mathematics which concerns relationships between geometric structures of manifolds and spectra of canonically defined differential operators. The case of the Laplace–Beltrami operator on a closed Riemannian manifold has been most intensively studied, although other Laplace operators in differential geometry have also been examined. The field concerns itself with two kinds of questions: direct problems and inverse problems.
Inverse problems seek to identify features of the geometry from information about the eigenvalues of the Laplacian. One of the earliest results of this kind was due to Hermann Weyl, who used David Hilbert's theory of integral equations in 1911 to show that the volume of a bounded domain in Euclidean space can be determined from the asymptotic behavior of the eigenvalues for the Dirichlet boundary value problem of the Laplace operator. This question is usually expressed as "Can one hear the shape of a drum?", the popular phrase due to Mark Kac. A refinement of Weyl's asymptotic formula obtained by Pleijel and Minakshisundaram produces a series of local spectral invariants involving covariant differentiations of the curvature tensor, which can be used to establish spectral rigidity for a special class of manifolds. However, as the example given by John Milnor shows, the information of eigenvalues is not enough to determine the isometry class of a manifold (see isospectral). A general and systematic method due to Toshikazu Sunada gave rise to a veritable cottage industry of such examples, which clarified the phenomenon of isospectral manifolds.
Direct problems attempt to infer the behavior of the eigenvalues of a Riemannian manifold from knowledge of the geometry. The solutions to direct problems are typified by the Cheeger inequality which gives a relation between the first positive eigenvalue and an isoperimetric constant (the Cheeger constant). Many versions of the inequality have been established since Cheeger's work (by R. Brooks and P. Buser for instance).
See also
• Isospectral
• Hearing the shape of a drum
References
• Berger, Marcel; Gauduchon, Paul; Mazet, Edmond (1971), Le spectre d'une variété riemannienne, Lecture Notes in Mathematics (in French), vol. 194, Berlin-New York: Springer-Verlag.
• Sunada, Toshikazu (1985), "Riemannian coverings and isospectral manifolds", Ann. of Math., 121 (1): 169–186, doi:10.2307/1971195, JSTOR 1971195.
Spectral graph theory
In mathematics, spectral graph theory is the study of the properties of a graph in relationship to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix.
The adjacency matrix of a simple undirected graph is a real symmetric matrix and is therefore orthogonally diagonalizable; its eigenvalues are real algebraic integers.
While the adjacency matrix depends on the vertex labeling, its spectrum is a graph invariant, although not a complete one.
Spectral graph theory is also concerned with graph parameters that are defined via multiplicities of eigenvalues of matrices associated to the graph, such as the Colin de Verdière number.
Cospectral graphs
Two graphs are called cospectral or isospectral if the adjacency matrices of the graphs are isospectral, that is, if the adjacency matrices have equal multisets of eigenvalues.
Cospectral graphs need not be isomorphic, but isomorphic graphs are always cospectral.
Graphs determined by their spectrum
A graph $G$ is said to be determined by its spectrum if any other graph with the same spectrum as $G$ is isomorphic to $G$.
Some first examples of families of graphs that are determined by their spectrum include:
• The complete graphs.
• The finite starlike trees.
Cospectral mates
A pair of graphs are said to be cospectral mates if they have the same spectrum, but are non-isomorphic.
The smallest pair of cospectral mates is {K1,4, C4 ∪ K1}, comprising the 5-vertex star and the graph union of the 4-vertex cycle and the single-vertex graph, as reported by Collatz and Sinogowitz[1][2] in 1957.
The smallest pair of polyhedral cospectral mates are enneahedra with eight vertices each.[3]
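The smallest cospectral pair can be checked directly. The sketch below (using NumPy, with an arbitrary vertex labelling) builds the adjacency matrices of the 5-vertex star K1,4 and of C4 ∪ K1 and compares their eigenvalue multisets:

```python
import numpy as np

# Adjacency matrix of the 5-vertex star K_{1,4}: vertex 0 joined to 1..4.
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1

# Adjacency matrix of C4 (vertices 0..3 in a cycle) plus an isolated vertex 4.
c4k1 = np.zeros((5, 5))
for i in range(4):
    c4k1[i, (i + 1) % 4] = c4k1[(i + 1) % 4, i] = 1

# Both spectra are {2, 0, 0, 0, -2}: the graphs are cospectral even though
# one is connected and the other is not (so they are non-isomorphic).
spec_star = np.sort(np.linalg.eigvalsh(star))
spec_c4k1 = np.sort(np.linalg.eigvalsh(c4k1))
print(np.allclose(spec_star, spec_c4k1))  # True
```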
Finding cospectral graphs
Almost all trees are cospectral, i.e., as the number of vertices grows, the fraction of trees for which there exists a cospectral tree goes to 1.[4]
A pair of regular graphs are cospectral if and only if their complements are cospectral.[5]
A pair of distance-regular graphs are cospectral if and only if they have the same intersection array.
Cospectral graphs can also be constructed by means of the Sunada method.[6]
Other important sources of cospectral graphs are the point-collinearity graphs and the line-intersection graphs of point-line geometries. These graphs are always cospectral but are often non-isomorphic.[7]
Cheeger inequality
The famous Cheeger's inequality from Riemannian geometry has a discrete analogue involving the Laplacian matrix; this is perhaps the most important theorem in spectral graph theory and one of the most useful facts in algorithmic applications. It approximates the sparsest cut of a graph through the second eigenvalue of its Laplacian.
Cheeger constant
Main article: Cheeger constant (graph theory)
The Cheeger constant (also Cheeger number or isoperimetric number) of a graph is a numerical measure of whether or not a graph has a "bottleneck". The Cheeger constant as a measure of "bottleneckedness" is of great interest in many areas: for example, constructing well-connected networks of computers, card shuffling, and low-dimensional topology (in particular, the study of hyperbolic 3-manifolds).
More formally, the Cheeger constant h(G) of a graph G on n vertices is defined as
$h(G)=\min _{0<|S|\leq {\frac {n}{2}}}{\frac {|\partial (S)|}{|S|}},$
where the minimum is over all nonempty sets S of at most n/2 vertices and ∂(S) is the edge boundary of S, i.e., the set of edges with exactly one endpoint in S.[8]
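For small graphs the minimum in this definition can be evaluated by brute force over all admissible sets S. The sketch below (the graph and its labelling are chosen for illustration) computes h(G) for two triangles joined by a single "bottleneck" edge, where the optimal cut severs exactly that edge:

```python
import itertools
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the single bottleneck edge 2-3.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6

def cheeger_constant(n, edges):
    """Brute-force h(G): minimize |boundary(S)| / |S| over 0 < |S| <= n/2."""
    best = np.inf
    for size in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), size):
            S = set(S)
            # Boundary edges have exactly one endpoint in S.
            boundary = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

print(cheeger_constant(n, edges))  # 1/3: take S = one triangle, cut one edge
```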
Cheeger inequality
When the graph G is d-regular, there is a relationship between h(G) and the spectral gap d − λ2 of G. An inequality due to Dodziuk[9] and independently Alon and Milman[10] states that[11]
${\frac {1}{2}}(d-\lambda _{2})\leq h(G)\leq {\sqrt {2d(d-\lambda _{2})}}.$
This inequality is closely related to the Cheeger bound for Markov chains and can be seen as a discrete version of Cheeger's inequality in Riemannian geometry.
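The inequality for regular graphs can be verified numerically on a small example. The sketch below (graph choice and labelling are for illustration) uses the 6-cycle, a 2-regular graph with λ2 = 1 and h(G) = 2/3:

```python
import itertools
import numpy as np

# 6-cycle C6: 2-regular; adjacency eigenvalues are 2*cos(2*pi*k/6).
n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

lam2 = np.sort(np.linalg.eigvalsh(A))[-2]   # second-largest eigenvalue = 1

# Brute-force the Cheeger constant over all sets S with 0 < |S| <= n/2.
h = min(
    sum(1 for i in S for j in range(n) if A[i, j] and j not in S) / len(S)
    for size in range(1, n // 2 + 1)
    for S in map(set, itertools.combinations(range(n), size))
)

# Dodziuk / Alon-Milman: (d - lam2)/2 <= h <= sqrt(2*d*(d - lam2)).
print(0.5 * (d - lam2), h, np.sqrt(2 * d * (d - lam2)))  # 0.5 <= 2/3 <= 2
```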
For general connected graphs that are not necessarily regular, an alternative inequality is given by Chung[12]: 35
${\frac {1}{2}}{\lambda }\leq {\mathbf {h} }(G)\leq {\sqrt {2\lambda }},$
where $\lambda $ is the least nontrivial eigenvalue of the normalized Laplacian, and ${\mathbf {h} }(G)$ is the (normalized) Cheeger constant
${\mathbf {h} }(G)=\min _{\emptyset \not =S\subset V(G)}{\frac {|\partial (S)|}{\min({\mathrm {vol} }(S),{\mathrm {vol} }({\bar {S}}))}}$
where ${\mathrm {vol} }(Y)$ is the sum of degrees of vertices in $Y$.
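Chung's version for irregular graphs can likewise be checked on a small example. The sketch below (the path graph and its labelling are chosen for illustration) computes the least nontrivial eigenvalue of the normalized Laplacian and the normalized Cheeger constant by brute force:

```python
import itertools
import numpy as np

# Path graph P4: 0-1-2-3 (not regular: the end vertices have degree 1).
n = 4
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1
deg = A.sum(axis=1)

# Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}; lam = least nontrivial eigenvalue.
Dhalf = np.diag(deg ** -0.5)
L = np.eye(n) - Dhalf @ A @ Dhalf
lam = np.sort(np.linalg.eigvalsh(L))[1]

# Normalized Cheeger constant: |boundary(S)| / min(vol(S), vol(complement)).
vol = deg.sum()
h = min(
    sum(A[i, j] for i in S for j in range(n) if j not in S)
    / min(sum(deg[i] for i in S), vol - sum(deg[i] for i in S))
    for size in range(1, n)
    for S in map(set, itertools.combinations(range(n), size))
)

print(lam / 2, h, np.sqrt(2 * lam))  # 0.25 <= 1/3 <= 1
```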
Hoffman–Delsarte inequality
There is an eigenvalue bound for independent sets in regular graphs, originally due to Alan J. Hoffman and Philippe Delsarte.[13]
Suppose that $G$ is a $k$-regular graph on $n$ vertices with least eigenvalue $\lambda _{\mathrm {min} }$. Then:
$\alpha (G)\leq {\frac {n}{1-{\frac {k}{\lambda _{\mathrm {min} }}}}}$
where $\alpha (G)$ denotes its independence number.
This bound has been applied to establish e.g. algebraic proofs of the Erdős–Ko–Rado theorem and its analogue for intersecting families of subspaces over finite fields.[14]
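The bound is tight for some graphs. The sketch below (graph choice is for illustration) evaluates it for the complete bipartite graph K3,3, which is 3-regular with least eigenvalue −3, and compares it against the independence number found by brute force:

```python
import itertools
import numpy as np

# Complete bipartite graph K_{3,3}: 3-regular on 6 vertices.
n, k = 6, 3
A = np.zeros((n, n))
A[:3, 3:] = A[3:, :3] = 1

lam_min = np.linalg.eigvalsh(A).min()        # -3
hoffman_bound = n / (1 - k / lam_min)        # 6 / (1 + 1) = 3

# Brute-force independence number: largest S with no edge inside S.
alpha = max(
    len(S)
    for r in range(n + 1)
    for S in itertools.combinations(range(n), r)
    if all(A[u, v] == 0 for u, v in itertools.combinations(S, 2))
)

print(alpha, hoffman_bound)  # 3 3.0 -- the bound is attained here
```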
For general graphs which are not necessarily regular, a similar upper bound for the independence number can be derived by using the maximum eigenvalue $\lambda '_{max}$ of the normalized Laplacian[12] of $G$:
$\alpha (G)\leq n(1-{\frac {1}{\lambda '_{\mathrm {max} }}}){\frac {\mathrm {maxdeg} }{\mathrm {mindeg} }}$
where ${\mathrm {maxdeg} }$ and ${\mathrm {mindeg} }$ denote the maximum and minimum degree in $G$, respectively. This is a consequence of a more general inequality (p. 109 in [12]):
${\mathrm {vol} }(X)\leq (1-{\frac {1}{\lambda '_{\mathrm {max} }}}){\mathrm {vol} }(V(G))$
where $X$ is an independent set of vertices and ${\mathrm {vol} }(Y)$ denotes the sum of degrees of vertices in $Y$ .
Historical outline
Spectral graph theory emerged in the 1950s and 1960s. Besides graph theoretic research on the relationship between structural and spectral properties of graphs, another major source was research in quantum chemistry, but the connections between these two lines of work were not discovered until much later.[15] The 1980 monograph Spectra of Graphs[16] by Cvetković, Doob, and Sachs summarised nearly all research to date in the area. In 1988 it was updated by the survey Recent Results in the Theory of Graph Spectra.[17] The 3rd edition of Spectra of Graphs (1995) contains a summary of the further recent contributions to the subject.[15] Discrete geometric analysis created and developed by Toshikazu Sunada in the 2000s deals with spectral graph theory in terms of discrete Laplacians associated with weighted graphs,[18] and finds application in various fields, including shape analysis. In most recent years, the spectral graph theory has expanded to vertex-varying graphs often encountered in many real-life applications.[19][20][21][22]
See also
• Strongly regular graph
• Algebraic connectivity
• Algebraic graph theory
• Spectral clustering
• Spectral shape analysis
• Estrada index
• Lovász theta
• Expander graph
References
1. Collatz, L. and Sinogowitz, U. "Spektren endlicher Grafen." Abh. Math. Sem. Univ. Hamburg 21, 63–77, 1957.
2. Weisstein, Eric W. "Cospectral Graphs". MathWorld.
3. Hosoya, Haruo; Nagashima, Umpei; Hyugaji, Sachiko (1994), "Topological twin graphs. Smallest pair of isospectral polyhedral graphs with eight vertices", Journal of Chemical Information and Modeling, 34 (2): 428–431, doi:10.1021/ci00018a033.
4. Schwenk (1973), pp. 275–307.
5. Godsil, Chris (November 7, 2007). "Are Almost All Graphs Cospectral?" (PDF).
6. Sunada, Toshikazu (1985), "Riemannian coverings and isospectral manifolds", Ann. of Math., 121 (1): 169–186, doi:10.2307/1971195, JSTOR 1971195.
7. Brouwer & Haemers 2011
8. Definition 2.1 in Hoory, Linial & Wigderson (2006)
9. J.Dodziuk, Difference Equations, Isoperimetric inequality and Transience of Certain Random Walks, Trans. Amer. Math. Soc. 284 (1984), no. 2, 787-794.
10. Alon & Spencer 2011.
11. Theorem 2.4 in Hoory, Linial & Wigderson (2006)
12. Chung, Fan (1997). American Mathematical Society (ed.). Spectral Graph Theory. Providence, R. I. ISBN 0821803158. MR 1421568[first 4 chapters are available in the website]{{cite book}}: CS1 maint: postscript (link)
13. Godsil, Chris (May 2009). "Erdős-Ko-Rado Theorems" (PDF).
14. Godsil, C. D.; Meagher, Karen (2016). Erdős-Ko-Rado theorems : algebraic approaches. Cambridge, United Kingdom. ISBN 9781107128446. OCLC 935456305.{{cite book}}: CS1 maint: location missing publisher (link)
15. Eigenspaces of Graphs, by Dragoš Cvetković, Peter Rowlinson, Slobodan Simić (1997) ISBN 0-521-57352-1
16. Dragoš M. Cvetković, Michael Doob, Horst Sachs, Spectra of Graphs (1980)
17. Cvetković, Dragoš M.; Doob, Michael; Gutman, Ivan; Torgasev, A. (1988). Recent Results in the Theory of Graph Spectra. Annals of Discrete mathematics. ISBN 0-444-70361-6.
18. Sunada, Toshikazu (2008), "Discrete geometric analysis", Proceedings of Symposia in Pure Mathematics, 77: 51–86, doi:10.1090/pspum/077/2459864, ISBN 9780821844717.
19. Shuman, David I; Ricaud, Benjamin; Vandergheynst, Pierre (March 2016). "Vertex-frequency analysis on graphs". Applied and Computational Harmonic Analysis. 40 (2): 260–291. arXiv:1307.5708. doi:10.1016/j.acha.2015.02.005. ISSN 1063-5203. S2CID 16487065.
20. Stankovic, Ljubisa; Dakovic, Milos; Sejdic, Ervin (July 2017). "Vertex-Frequency Analysis: A Way to Localize Graph Spectral Components [Lecture Notes]". IEEE Signal Processing Magazine. 34 (4): 176–182. Bibcode:2017ISPM...34..176S. doi:10.1109/msp.2017.2696572. ISSN 1053-5888. S2CID 19969572.
21. Sakiyama, Akie; Watanabe, Kana; Tanaka, Yuichi (September 2016). "Spectral Graph Wavelets and Filter Banks With Low Approximation Error". IEEE Transactions on Signal and Information Processing over Networks. 2 (3): 230–245. doi:10.1109/tsipn.2016.2581303. ISSN 2373-776X. S2CID 2052898.
22. Behjat, Hamid; Richter, Ulrike; Van De Ville, Dimitri; Sornmo, Leif (2016-11-15). "Signal-Adapted Tight Frames on Graphs". IEEE Transactions on Signal Processing. 64 (22): 6017–6029. Bibcode:2016ITSP...64.6017B. doi:10.1109/tsp.2016.2591513. ISSN 1053-587X. S2CID 12844791.
• Alon; Spencer (2011), The probabilistic method, Wiley.
• Brouwer, Andries; Haemers, Willem H. (2011), Spectra of Graphs (PDF), Springer
• Hoory; Linial; Wigderson (2006), Expander graphs and their applications (PDF)
• Chung, Fan (1997). American Mathematical Society (ed.). Spectral Graph Theory. Providence, R. I. ISBN 0821803158. MR 1421568[first 4 chapters are available in the website]{{cite book}}: CS1 maint: postscript (link)
• Schwenk, A. J. (1973). "Almost All Trees are Cospectral". In Harary, Frank (ed.). New Directions in the Theory of Graphs. New York: Academic Press. ISBN 012324255X. OCLC 890297242.
External links
• Spielman, Daniel (2011). "Spectral Graph Theory" (PDF). [chapter from Combinatorial Scientific Computing]
• Spielman, Daniel (2007). "Spectral Graph Theory and its Applications". [presented at FOCS 2007 Conference]
• Spielman, Daniel (2004). "Spectral Graph Theory and its Applications". [course page and lecture notes]
Spectral invariants
In symplectic geometry, the spectral invariants are invariants defined for the group of Hamiltonian diffeomorphisms of a symplectic manifold, which are closely related to Floer theory and Hofer geometry.
Arnold conjecture and Hamiltonian Floer homology
If (M, ω) is a symplectic manifold, then a smooth vector field Y on M is a Hamiltonian vector field if the contraction ω(Y, ·) is an exact 1-form (i.e., the differential of a Hamiltonian function H). A Hamiltonian diffeomorphism of a symplectic manifold (M, ω) is a diffeomorphism Φ of M which is the integral of a smooth path of Hamiltonian vector fields Yt. Vladimir Arnold conjectured that the number of fixed points of a generic Hamiltonian diffeomorphism of a compact symplectic manifold (M, ω) should be bounded from below by some topological constant of M, which is analogous to the Morse inequality. This so-called Arnold conjecture triggered the invention of Hamiltonian Floer homology by Andreas Floer in the 1980s.
Floer's definition adopted Witten's point of view on Morse theory. He considered spaces of contractible loops of M and defined an action functional AH associated to the family of Hamiltonian functions, so that the fixed points of the Hamiltonian diffeomorphism correspond to the critical points of the action functional. Constructing a chain complex similar to the Morse–Smale–Witten complex in Morse theory, Floer managed to define a homology group, which he also showed to be isomorphic to the ordinary homology groups of the manifold M.
The isomorphism between the Floer homology group HF(M) and the ordinary homology groups H(M) is canonical. Therefore, for any "good" Hamiltonian path Ht, a homology class α of M can be represented by a cycle in the Floer chain complex, formally a linear combination
$\alpha _{H}=a_{1}x_{1}+a_{2}x_{2}+\cdots $
where ai are coefficients in some ring and xi are fixed points of the corresponding Hamiltonian diffeomorphism. Formally, the spectral invariants can be defined by the min-max value
$c_{H}(\alpha )=\min \max\{A_{H}(x_{i})\ :\ a_{i}\neq 0\}.$
Here the maximum is taken over all the values of the action functional AH on the fixed points that appear in the linear combination of αH, and the minimum is taken over all Floer cycles that represent the class α.
Spectral layout
Spectral layout is a class of algorithms for drawing graphs. The layout uses the eigenvectors of a matrix, such as the Laplace matrix of the graph, as Cartesian coordinates of the graph's vertices.
The idea of the layout is to compute the two largest (or smallest) eigenvalues and corresponding eigenvectors of the Laplacian matrix of the graph and then use those for actually placing the nodes. Usually nodes are placed in the 2-dimensional plane. An embedding into more dimensions can be found by using more eigenvectors. In the 2-dimensional case, for a given node which corresponds to the row/column $i$ in the (symmetric) Laplacian matrix $L$ of the graph, the $x$- and $y$-coordinates are the $i$-th entries of the first and second eigenvectors of $L$, respectively.
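A minimal sketch of the procedure follows, assuming the common variant that skips the trivial eigenvector (the Laplacian's eigenvalue-0 eigenvector is constant and carries no positional information) and uses the eigenvectors of the two smallest nontrivial eigenvalues; the 6-cycle is chosen as the example graph:

```python
import numpy as np

def spectral_layout(A):
    """2-D spectral layout: coordinates from the eigenvectors of the two
    smallest nontrivial eigenvalues of the Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)        # eigh returns ascending eigenvalues
    return vecs[:, 1], vecs[:, 2]      # x- and y-coordinate per vertex

# 6-cycle: the layout places the vertices on a circle (a regular hexagon,
# up to rotation and reflection of the degenerate eigenspace basis).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

x, y = spectral_layout(A)
radii = np.hypot(x, y)
print(np.allclose(radii, radii[0]))  # True: all vertices equidistant from 0
```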
References
• Beckman, Brian (1994), Theory of Spectral Graph Layout, Tech. Report MSR-TR-94-04, Microsoft Research.
• Koren, Yehuda (2005), "Drawing graphs by eigenvectors: theory and practice", Computers & Mathematics with Applications, 49 (11–12): 1867–1888, doi:10.1016/j.camwa.2004.08.015, MR 2154691.
Spectral leakage
The Fourier transform of a function of time, s(t), is a complex-valued function of frequency, S(f), often referred to as a frequency spectrum. Any linear time-invariant operation on s(t) produces a new spectrum of the form H(f)•S(f), which changes the relative magnitudes and/or angles (phase) of the non-zero values of S(f). Any other type of operation creates new frequency components that may be referred to as spectral leakage in the broadest sense. Sampling, for instance, produces leakage, which we call aliases of the original spectral component. For Fourier transform purposes, sampling is modeled as a product between s(t) and a Dirac comb function. The spectrum of a product is the convolution between S(f) and another function, which inevitably creates the new frequency components. But the term 'leakage' usually refers to the effect of windowing, which is the product of s(t) with a different kind of function, the window function. Window functions happen to have finite duration, but that is not necessary to create leakage. Multiplication by a time-variant function is sufficient.
Spectral analysis
The Fourier transform of the function cos(ωt) is zero, except at frequency ±ω. However, many other functions and waveforms do not have convenient closed-form transforms. Alternatively, one might be interested in their spectral content only during a certain time period. In either case, the Fourier transform (or a similar transform) can be applied on one or more finite intervals of the waveform. In general, the transform is applied to the product of the waveform and a window function. Any window (including rectangular) affects the spectral estimate computed by this method.
The effects are most easily characterized by their effect on a sinusoidal s(t) function, whose unwindowed Fourier transform is zero for all but one frequency. The customary frequency of choice is 0 Hz, because the windowed Fourier transform is simply the Fourier transform of the window function itself (see § A list of window functions):
${\mathcal {F}}\{w(t)\cdot \underbrace {\cos(2\pi 0t)} _{1}\}={\mathcal {F}}\{w(t)\}.$
When both sampling and windowing are applied to s(t), in either order, the leakage caused by windowing is a relatively localized spreading of frequency components, with often a blurring effect, whereas the aliasing caused by sampling is a periodic repetition of the entire blurred spectrum.
Choice of window function
Windowing of a simple waveform like cos(ωt) causes its Fourier transform to develop non-zero values (commonly called spectral leakage) at frequencies other than ω. The leakage tends to be worst (highest) near ω and least at frequencies farthest from ω.
If the waveform under analysis comprises two sinusoids of different frequencies, leakage can interfere with our ability to distinguish them spectrally. Possible types of interference are often broken down into two opposing classes as follows: If the component frequencies are dissimilar and one component is weaker, then leakage from the stronger component can obscure the weaker one's presence. But if the frequencies are too similar, leakage can render them unresolvable even when the sinusoids are of equal strength. Windows that are effective against the first type of interference, namely where components have dissimilar frequencies and amplitudes, are called high dynamic range. Conversely, windows that can distinguish components with similar frequencies and amplitudes are called high resolution.
The rectangular window is an example of a window that is high resolution but low dynamic range, meaning it is good for distinguishing components of similar amplitude even when the frequencies are also close, but poor at distinguishing components of different amplitude even when the frequencies are far away. High-resolution, low-dynamic-range windows such as the rectangular window also have the property of high sensitivity, which is the ability to reveal relatively weak sinusoids in the presence of additive random noise. That is because the noise produces a stronger response with high-dynamic-range windows than with high-resolution windows.
At the other extreme of the range of window types are windows with high dynamic range but low resolution and sensitivity. High-dynamic-range windows are most often justified in wideband applications, where the spectrum being analyzed is expected to contain many different components of various amplitudes.
In between the extremes are moderate windows, such as Hann and Hamming. They are commonly used in narrowband applications, such as the spectrum of a telephone channel.
In summary, spectral analysis involves a trade-off between resolving comparable strength components with similar frequencies (high resolution / sensitivity) and resolving disparate strength components with dissimilar frequencies (high dynamic range). That trade-off occurs when the window function is chosen.[1]: p.90
Discrete-time signals
When the input waveform is time-sampled, instead of continuous, the analysis is usually done by applying a window function and then a discrete Fourier transform (DFT). But the DFT provides only a sparse sampling of the actual discrete-time Fourier transform (DTFT) spectrum. Figure 2, row 3 shows a DTFT for a rectangularly-windowed sinusoid. The actual frequency of the sinusoid is indicated as "13" on the horizontal axis. Everything else is leakage, exaggerated by the use of a logarithmic presentation. The unit of frequency is "DFT bins"; that is, the integer values on the frequency axis correspond to the frequencies sampled by the DFT.[2]: p.56 eq.(16) So the figure depicts a case where the actual frequency of the sinusoid coincides with a DFT sample, and the maximum value of the spectrum is accurately measured by that sample. In row 4, it misses the maximum value by ½ bin, and the resultant measurement error is referred to as scalloping loss (inspired by the shape of the peak). For a known frequency, such as a musical note or a sinusoidal test signal, matching the frequency to a DFT bin can be prearranged by choices of a sampling rate and a window length that results in an integer number of cycles within the window.
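The on-bin versus off-bin behaviour can be reproduced with a short NumPy experiment (the length N = 64 and bin 13 are arbitrary choices, echoing the figure; the window is rectangular):

```python
import numpy as np

N = 64
t = np.arange(N)

# Sinusoid exactly on DFT bin 13: the peak DFT sample measures the true
# spectral maximum, |X[13]| = N/2.
on_bin = np.abs(np.fft.fft(np.cos(2 * np.pi * 13 * t / N)))

# Frequency midway between bins 13 and 14: every DFT sample misses the
# DTFT peak by half a bin, so the measured maximum drops by roughly 2/pi
# (about 3.9 dB) -- the maximum scalloping loss of the rectangular window.
off_bin = np.abs(np.fft.fft(np.cos(2 * np.pi * 13.5 * t / N)))

print(on_bin.max())                    # 32.0
print(off_bin.max() / on_bin.max())    # ~0.63, close to 2/pi
```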
Noise bandwidth
The concepts of resolution and dynamic range tend to be somewhat subjective, depending on what the user is actually trying to do. But they also tend to be highly correlated with the total leakage, which is quantifiable. It is usually expressed as an equivalent bandwidth, B. It can be thought of as redistributing the DTFT into a rectangular shape with height equal to the spectral maximum and width B.[upper-alpha 1][3] The more the leakage, the greater the bandwidth. It is sometimes called noise equivalent bandwidth or equivalent noise bandwidth, because it is proportional to the average power that will be registered by each DFT bin when the input signal contains a random noise component (or is just random noise). A graph of the power spectrum, averaged over time, typically reveals a flat noise floor, caused by this effect. The height of the noise floor is proportional to B. So two different window functions can produce different noise floors, as seen in figures 1 and 3.
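For a discrete window w of length N, the equivalent noise bandwidth in DFT bins is B = N·Σw² / (Σw)². A sketch (the length is arbitrary; the Hann window is written out explicitly in its periodic form):

```python
import numpy as np

def enbw_bins(w):
    """Equivalent noise bandwidth of a window, in DFT bins:
    B = N * sum(w^2) / (sum(w))^2."""
    N = len(w)
    return N * np.sum(w**2) / np.sum(w)**2

N = 1024
rect = np.ones(N)
hann = 0.5 * (1 - np.cos(2 * np.pi * np.arange(N) / N))   # periodic Hann

print(enbw_bins(rect))   # 1.0 -- the rectangular window has the smallest B
print(enbw_bins(hann))   # 1.5 -- noise floor higher by 10*log10(1.5) ~ 1.76 dB
```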
Processing gain and losses
In signal processing, operations are chosen to improve some aspect of quality of a signal by exploiting the differences between the signal and the corrupting influences. When the signal is a sinusoid corrupted by additive random noise, spectral analysis distributes the signal and noise components differently, often making it easier to detect the signal's presence or measure certain characteristics, such as amplitude and frequency. Effectively, the signal-to-noise ratio (SNR) is improved by distributing the noise uniformly, while concentrating most of the sinusoid's energy around one frequency. Processing gain is a term often used to describe an SNR improvement. The processing gain of spectral analysis depends on the window function, both its noise bandwidth (B) and its potential scalloping loss. These effects partially offset, because windows with the least scalloping naturally have the most leakage.
Figure 3 depicts the effects of three different window functions on the same data set, comprising two equal strength sinusoids in additive noise. The frequencies of the sinusoids are chosen such that one encounters no scalloping and the other encounters maximum scalloping. Both sinusoids suffer less SNR loss under the Hann window than under the Blackman-Harris window. In general (as mentioned earlier), this is a deterrent to using high-dynamic-range windows in low-dynamic-range applications.
Symmetry
The formulas provided at § A list of window functions produce discrete sequences, as if a continuous window function has been "sampled". (See an example at Kaiser window.) Window sequences for spectral analysis are either symmetric or 1-sample short of symmetric (called periodic,[4][5] DFT-even, or DFT-symmetric[2]: p.52 ). For instance, a true symmetric sequence, with its maximum at a single center-point, is generated by the MATLAB function hann(9,'symmetric'). Deleting the last sample produces a sequence identical to hann(8,'periodic'). Similarly, the sequence hann(8,'symmetric') has two equal center-points.[6]
Some functions have one or two zero-valued end-points, which are unnecessary in most applications. Deleting a zero-valued end-point has no effect on its DTFT (spectral leakage). But the function designed for N + 1 or N + 2 samples, in anticipation of deleting one or both end points, typically has a slightly narrower main lobe, slightly higher sidelobes, and a slightly smaller noise-bandwidth.[7]
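The symmetric/periodic relationship described above can be reproduced without MATLAB by writing the Hann formulas out directly (a sketch; the two helper functions mirror the 'symmetric' and 'periodic' conventions):

```python
import numpy as np

def hann_symmetric(N):
    # Spans one full cosine period over N-1 intervals; endpoints are zero.
    return 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / (N - 1))

def hann_periodic(N):
    # Same formula over N intervals: the symmetric (N+1)-point window
    # with its last sample deleted.
    return 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)

# Deleting the last sample of the 9-point symmetric window gives the
# 8-point periodic one, as with hann(9,'symmetric') vs hann(8,'periodic').
print(np.allclose(hann_symmetric(9)[:-1], hann_periodic(8)))   # True

# The 8-point symmetric window has two equal center-points (no single maximum).
w = hann_symmetric(8)
print(np.isclose(w[3], w[4]))   # True
```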
DFT-symmetry
The predecessor of the DFT is the finite Fourier transform, and window functions were "always an odd number of points and exhibit even symmetry about the origin".[2]: p.52 In that case, the DTFT is entirely real-valued. When the same sequence is shifted into a DFT data window, $[0\leq n\leq N],$ the DTFT becomes complex-valued except at frequencies spaced at regular intervals of $1/N.$[lower-alpha 1] Thus, when sampled by an $N$-length DFT, the samples (called DFT coefficients) are still real-valued. An approximation is to truncate the N+1-length sequence (effectively $w[N]=0$), and compute an $N$-length DFT. The DTFT (spectral leakage) is slightly affected, but the samples remain real-valued.[8][upper-alpha 2] The terms DFT-even and periodic refer to the idea that if the truncated sequence were repeated periodically, it would be even-symmetric about $n=0,$ and its DTFT would be entirely real-valued. But the actual DTFT is generally complex-valued, except for the $N$ DFT coefficients. Spectral plots like those at § A list of window functions, are produced by sampling the DTFT at much smaller intervals than $1/N$ and displaying only the magnitude component of the complex numbers.
Periodic summation
An exact method to sample the DTFT of an N+1-length sequence at intervals of $1/N$ is described at DTFT § L=N+1. Essentially, $w[N]$ is combined with $w[0]$ (by addition), and an $N$-point DFT is done on the truncated sequence. Similarly, spectral analysis would be done by combining the $n=0$ and $n=N$ data samples before applying the truncated symmetric window. That is not a common practice, even though truncated windows are very popular.[2][9][10][11][12][13][lower-alpha 2]
Convolution
The appeal of DFT-symmetric windows is explained by the popularity of the fast Fourier transform (FFT) algorithm for implementation of the DFT, because truncation of an odd-length sequence results in an even-length sequence. Their real-valued DFT coefficients are also an advantage in certain esoteric applications[upper-alpha 3] where windowing is achieved by means of convolution between the DFT coefficients and an unwindowed DFT of the data.[14][2]: p.62 [1]: p.85 In those applications, DFT-symmetric windows (even or odd length) from the Cosine-sum family are preferred, because most of their DFT coefficients are zero-valued, making the convolution very efficient.[upper-alpha 4][1]: p.85
Some window metrics
When selecting an appropriate window function for an application, this comparison graph may be useful. The frequency axis has units of FFT "bins" when the window of length N is applied to data and a transform of length N is computed. For instance, the value at frequency ½ "bin" is the response that would be measured in bins k and k + 1 to a sinusoidal signal at frequency k + ½. It is relative to the maximum possible response, which occurs when the signal frequency is an integer number of bins. The value at frequency ½ is referred to as the maximum scalloping loss of the window, which is one metric used to compare windows. The rectangular window is noticeably worse than the others in terms of that metric.
Other metrics that can be seen are the width of the main lobe and the peak level of the sidelobes, which respectively determine the ability to resolve comparable strength signals and disparate strength signals. The rectangular window (for instance) is the best choice for the former and the worst choice for the latter. What cannot be seen from the graphs is that the rectangular window has the best noise bandwidth, which makes it a good candidate for detecting low-level sinusoids in an otherwise white noise environment. Interpolation techniques, such as zero-padding and frequency-shifting, are available to mitigate its potential scalloping loss.
See also
• § Sampling the DTFT
• Knife-edge effect, spatial analog of truncation
• Gibbs phenomenon
Notes
1. Mathematically, the noise equivalent bandwidth of transfer function H is the bandwidth of an ideal rectangular filter with the same peak gain as H that would pass the same power with white noise input. In the units of frequency f (e.g. hertz), it is given by:
$B_{\text{noise}}={\frac {1}{|H(f)|_{\max }^{2}}}\int _{0}^{\infty }|H(f)|^{2}df.$
2. An example of the effect of truncation on spectral leakage is figure Gaussian windows. The graph labeled DTFT periodic8 is the DTFT of the truncated window labeled periodic DFT-even (both blue). The green graph labeled DTFT symmetric9 corresponds to the same window with its symmetry restored. The DTFT samples, labeled DFT8 periodic summation, are an example of using periodic summation to sample it at the same frequencies as the blue graph.
3. Sometimes both a windowed and an unwindowed (rectangularly windowed) DFT are needed.
4. For example, see figures DFT-even Hann window and Odd-length, DFT-even Hann window, which show that the $N$-length DFT of the sequence generated by hann($N$,'periodic') has only three non-zero values. All the other samples coincide with zero-crossings of the DTFT.
Page citations
1. Harris 1978, p.52, where $\Delta \omega \triangleq 2\pi \Delta f.$
2. Nuttall 1981, p.85 (15a).
References
1. Nuttall, Albert H. (Feb 1981). "Some Windows with Very Good Sidelobe Behavior". IEEE Transactions on Acoustics, Speech, and Signal Processing. 29 (1): 84–91. doi:10.1109/TASSP.1981.1163506. Extends Harris' paper, covering all the window functions known at the time, along with key metric comparisons.
2. Harris, Fredric J. (Jan 1978). "On the use of Windows for Harmonic Analysis with the Discrete Fourier Transform" (PDF). Proceedings of the IEEE. 66 (1): 51–83. Bibcode:1978IEEEP..66...51H. CiteSeerX 10.1.1.649.9880. doi:10.1109/PROC.1978.10837. S2CID 426548. The fundamental 1978 paper on FFT windows by Harris, which specified many windows and introduced key metrics used to compare them.
3. Carlson, A. Bruce (1986). Communication Systems: An Introduction to Signals and Noise in Electrical Communication. McGraw-Hill. ISBN 978-0-07-009960-9.
4. "Hann (Hanning) window - MATLAB hann". www.mathworks.com. Retrieved 2020-02-12.
5. "Window Function". www.mathworks.com. Retrieved 2019-04-14.
6. Robertson, Neil (18 December 2018). "Evaluate Window Functions for the Discrete Fourier Transform". DSPRelated.com. The Related Media Group. Retrieved 9 August 2020. Revised 22 February 2020.
7. "Matlab for the Hann Window". ccrma.stanford.edu. Retrieved 2020-09-01.
8. Rohling, H.; Schuermann, J. (March 1983). "Discrete time window functions with arbitrarily low sidelobe level". Signal Processing. Forschungsinstitut Ulm, Sedanstr, Germany: AEG-Telefunken. 5 (2): 127–138. doi:10.1016/0165-1684(83)90019-1. Retrieved 8 August 2020. It can be shown, that the DFT-even sampling technique as proposed by Harris is not the most suitable one.
9. Heinzel, G.; Rüdiger, A.; Schilling, R. (2002). Spectrum and spectral density estimation by the Discrete Fourier transform (DFT), including a comprehensive list of window functions and some new flat-top windows (Technical report). Max Planck Institute (MPI) für Gravitationsphysik / Laser Interferometry & Gravitational Wave Astronomy. 395068.0. Retrieved 2013-02-10. Also available at https://pure.mpg.de/rest/items/item_152164_1/component/file_152163/content
10. Lyons, Richard (1 June 1998). "Windowing Functions Improve FFT Results". EDN. Sunnyvale, CA: TRW. Retrieved 8 August 2020.
11. Fulton, Trevor (4 March 2008). "DP Numeric Transform Toolbox". herschel.esac.esa.int. Herschel Data Processing. Retrieved 8 August 2020.
12. Poularikas, A.D. (1999). "7.3.1". In Poularikas, Alexander D. (ed.). The Handbook of Formulas and Tables for Signal Processing (PDF). Boca Raton: CRC Press LLC. ISBN 0849385792. Retrieved 8 August 2020. Windows are even (about the origin) sequences with an odd number of points. The right-most point of the window will be discarded.
13. Puckette, Miller (30 December 2006). "Fourier analysis of non-periodic signals". msp.ucsd.edu. UC San Diego. Retrieved 9 August 2020.
14. US patent 6898235, Carlin,Joe; Collins,Terry & Hays,Peter et al., "Wideband communication intercept and direction finding device using hyperchannelization", published 1999-12-10, issued 2005-05-24, also available at https://patentimages.storage.googleapis.com/4d/39/2a/cec2ae6f33c1e7/US6898235.pdf
Modal matrix
In linear algebra, the modal matrix is used in the diagonalization process involving eigenvalues and eigenvectors.[1]
Specifically the modal matrix $M$ for the matrix $A$ is the n × n matrix formed with the eigenvectors of $A$ as columns in $M$. It is utilized in the similarity transformation
$D=M^{-1}AM,$
where $D$ is an n × n diagonal matrix with the eigenvalues of $A$ on the main diagonal of $D$ and zeros elsewhere. The matrix $D$ is called the spectral matrix for $A$. The eigenvalues must appear left to right, top to bottom in the same order as their corresponding eigenvectors are arranged left to right in $M$.[2]
Example
The matrix
$A={\begin{pmatrix}3&2&0\\2&0&0\\1&0&2\end{pmatrix}}$
has eigenvalues and corresponding eigenvectors
$\lambda _{1}=-1,\quad \,\mathbf {b} _{1}=\left(-3,6,1\right),$
$\lambda _{2}=2,\qquad \mathbf {b} _{2}=\left(0,0,1\right),$
$\lambda _{3}=4,\qquad \mathbf {b} _{3}=\left(2,1,1\right).$
A diagonal matrix $D$, similar to $A$ is
$D={\begin{pmatrix}-1&0&0\\0&2&0\\0&0&4\end{pmatrix}}.$
One possible choice for an invertible matrix $M$ such that $D=M^{-1}AM,$ is
$M={\begin{pmatrix}-3&0&2\\6&0&1\\1&1&1\end{pmatrix}}.$[3]
Note that since eigenvectors themselves are not unique, and since the columns of $M$ (together with the corresponding diagonal entries of $D$) may be reordered, neither $M$ nor $D$ is unique.[4]
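The example above can be checked numerically. The following sketch (NumPy is assumed; any linear-algebra library would serve) verifies that the similarity transformation with the stated modal matrix recovers the spectral matrix:

```python
import numpy as np

# The matrix A and its modal matrix M (eigenvectors as columns)
A = np.array([[3, 2, 0],
              [2, 0, 0],
              [1, 0, 2]], dtype=float)
M = np.array([[-3, 0, 2],
              [ 6, 0, 1],
              [ 1, 1, 1]], dtype=float)

# The similarity transformation D = M^{-1} A M
D = np.linalg.inv(M) @ A @ M

# D should be diagonal, with the eigenvalues -1, 2, 4 on the diagonal
# in the same order as their eigenvectors appear in M
print(np.round(D, 10))
```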
Generalized modal matrix
Let $A$ be an n × n matrix. A generalized modal matrix $M$ for $A$ is an n × n matrix whose columns, considered as vectors, form a canonical basis for $A$ and appear in $M$ according to the following rules:
• All Jordan chains consisting of one vector (that is, chains of length one) appear in the first columns of $M$.
• All vectors of one chain appear together in adjacent columns of $M$.
• Each chain appears in $M$ in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, which appears before the generalized eigenvector of rank 3 of the same chain, etc.).[5]
One can show that
$AM=MJ,$
(1)
where $J$ is a matrix in Jordan normal form. By premultiplying by $M^{-1}$, we obtain
$J=M^{-1}AM.$
(2)
Note that when computing these matrices, equation (1) is the easiest of the two equations to verify, since it does not require inverting a matrix.[6]
Example
This example illustrates a generalized modal matrix with four Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order.[7] The matrix
$A={\begin{pmatrix}-1&0&-1&1&1&3&0\\0&1&0&0&0&0&0\\2&1&2&-1&-1&-6&0\\-2&0&-1&2&1&3&0\\0&0&0&0&1&0&0\\0&0&0&0&0&1&0\\-1&-1&0&1&2&4&1\end{pmatrix}}$
has a single eigenvalue $\lambda _{1}=1$ with algebraic multiplicity $\mu _{1}=7$. A canonical basis for $A$ will consist of one linearly independent generalized eigenvector of rank 3 (generalized eigenvector rank; see generalized eigenvector), two of rank 2 and four of rank 1; or equivalently, one chain of three vectors $\left\{\mathbf {x} _{3},\mathbf {x} _{2},\mathbf {x} _{1}\right\}$, one chain of two vectors $\left\{\mathbf {y} _{2},\mathbf {y} _{1}\right\}$, and two chains of one vector $\left\{\mathbf {z} _{1}\right\}$, $\left\{\mathbf {w} _{1}\right\}$.
An "almost diagonal" matrix $J$ in Jordan normal form, similar to $A$, is obtained as follows:
$M={\begin{pmatrix}\mathbf {z} _{1}&\mathbf {w} _{1}&\mathbf {x} _{1}&\mathbf {x} _{2}&\mathbf {x} _{3}&\mathbf {y} _{1}&\mathbf {y} _{2}\end{pmatrix}}={\begin{pmatrix}0&1&-1&0&0&-2&1\\0&3&0&0&1&0&0\\-1&1&1&1&0&2&0\\-2&0&-1&0&0&-2&0\\1&0&0&0&0&0&0\\0&1&0&0&0&0&0\\0&0&0&-1&0&-1&0\end{pmatrix}},$
$J={\begin{pmatrix}1&0&0&0&0&0&0\\0&1&0&0&0&0&0\\0&0&1&1&0&0&0\\0&0&0&1&1&0&0\\0&0&0&0&1&0&0\\0&0&0&0&0&1&1\\0&0&0&0&0&0&1\end{pmatrix}},$
where $M$ is a generalized modal matrix for $A$, the columns of $M$ are a canonical basis for $A$, and $AM=MJ$.[8] Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both $M$ and $J$ may be interchanged, it follows that both $M$ and $J$ are not unique.[9]
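Because equation (1) involves no matrix inverse, the generalized modal matrix above is easy to check numerically. The following sketch (NumPy assumed) confirms $AM=MJ$ for the 7 × 7 example:

```python
import numpy as np

A = np.array([[-1,  0, -1,  1,  1,  3,  0],
              [ 0,  1,  0,  0,  0,  0,  0],
              [ 2,  1,  2, -1, -1, -6,  0],
              [-2,  0, -1,  2,  1,  3,  0],
              [ 0,  0,  0,  0,  1,  0,  0],
              [ 0,  0,  0,  0,  0,  1,  0],
              [-1, -1,  0,  1,  2,  4,  1]], dtype=float)

# Columns of M: z1, w1, x1, x2, x3, y1, y2 (each chain in order of increasing rank)
M = np.array([[ 0,  1, -1,  0,  0, -2,  1],
              [ 0,  3,  0,  0,  1,  0,  0],
              [-1,  1,  1,  1,  0,  2,  0],
              [-2,  0, -1,  0,  0, -2,  0],
              [ 1,  0,  0,  0,  0,  0,  0],
              [ 0,  1,  0,  0,  0,  0,  0],
              [ 0,  0,  0, -1,  0, -1,  0]], dtype=float)

# Jordan form for eigenvalue 1: two 1x1 blocks, one 3x3 block, one 2x2 block
J = np.eye(7)
J[2, 3] = J[3, 4] = J[5, 6] = 1.0

# Equation (1): A M = M J, verified without inverting M
print(np.allclose(A @ M, M @ J))
```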
Notes
1. Bronson (1970, pp. 179–183)
2. Bronson (1970, p. 181)
3. Beauregard & Fraleigh (1973, pp. 271, 272)
4. Bronson (1970, p. 181)
5. Bronson (1970, p. 205)
6. Bronson (1970, pp. 206–207)
7. Nering (1970, pp. 122, 123)
8. Bronson (1970, pp. 208, 209)
9. Bronson (1970, p. 206)
References
• Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X
• Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490
• Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646
Spectral method
Spectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain differential equations. The idea is to write the solution of the differential equation as a sum of certain "basis functions" (for example, as a Fourier series which is a sum of sinusoids) and then to choose the coefficients in the sum in order to satisfy the differential equation as well as possible.
Spectral methods and finite element methods are closely related and built on the same ideas; the main difference between them is that spectral methods use basis functions that are generally nonzero over the whole domain, while finite element methods use basis functions that are nonzero only on small subdomains (compact support). Consequently, spectral methods connect variables globally while finite elements do so locally. Partially for this reason, spectral methods have excellent error properties, with the so-called "exponential convergence" being the fastest possible, when the solution is smooth. However, there are no known three-dimensional single domain spectral shock capturing results (shock waves are not smooth).[1] In the finite element community, a method where the degree of the elements is very high or increases as the grid parameter h decreases is sometimes called a spectral element method.
Spectral methods can be used to solve differential equations (PDEs, ODEs, eigenvalue problems, etc.) and optimization problems. When applying spectral methods to time-dependent PDEs, the solution is typically written as a sum of basis functions with time-dependent coefficients; substituting this in the PDE yields a system of ODEs in the coefficients, which can be solved using any numerical method for ODEs. Eigenvalue problems for ODEs are similarly converted to matrix eigenvalue problems.
Spectral methods were developed in a long series of papers by Steven Orszag starting in 1969, including, but not limited to, Fourier series methods for periodic geometry problems, polynomial spectral methods for finite and unbounded geometry problems, pseudospectral methods for highly nonlinear problems, and spectral iteration methods for fast solution of steady-state problems. The implementation of the spectral method is normally accomplished either with collocation or with a Galerkin or a tau approach. For very small problems, the spectral method is unique in that solutions may be written out symbolically, yielding a practical alternative to series solutions for differential equations.
Spectral methods can be computationally less expensive and easier to implement than finite element methods; they shine best when high accuracy is sought in simple domains with smooth solutions. However, because of their global nature, the matrices associated with step computation are dense and computational efficiency will quickly suffer when there are many degrees of freedom (with some exceptions, for example if matrix applications can be written as Fourier transforms). For larger problems and nonsmooth solutions, finite elements will generally work better due to sparse matrices and better modelling of discontinuities and sharp bends.
Examples of spectral methods
A concrete, linear example
Here we presume an understanding of basic multivariate calculus and Fourier series. If $g(x,y)$ is a known, complex-valued function of two real variables, and g is periodic in x and y (that is, $g(x,y)=g(x+2\pi ,y)=g(x,y+2\pi )$) then we are interested in finding a function f(x,y) so that
$\left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}\right)f(x,y)=g(x,y)\quad {\text{for all }}x,y$
where the expression on the left denotes the second partial derivatives of f in x and y, respectively. This is the Poisson equation, and can be physically interpreted as some sort of heat conduction problem, or a problem in potential theory, among other possibilities.
If we write f and g in Fourier series:
$f=:\sum a_{j,k}e^{i(jx+ky)}$
$g=:\sum b_{j,k}e^{i(jx+ky)}$
and substitute into the differential equation, we obtain this equation:
$\sum -a_{j,k}(j^{2}+k^{2})e^{i(jx+ky)}=\sum b_{j,k}e^{i(jx+ky)}$
We have exchanged partial differentiation with an infinite sum, which is legitimate if we assume for instance that f has a continuous second derivative. By the uniqueness theorem for Fourier expansions, we must then equate the Fourier coefficients term by term, giving
$a_{j,k}=-{\frac {b_{j,k}}{j^{2}+k^{2}}}$
(*)
which is an explicit formula for the Fourier coefficients aj,k.
With periodic boundary conditions, the Poisson equation possesses a solution only if b0,0 = 0. Therefore, we can freely choose a0,0, which will equal the mean of the solution. This corresponds to choosing the integration constant.
To turn this into an algorithm, only finitely many frequencies are solved for. This introduces an error which can be shown to be proportional to $h^{n}$, where $h:=1/n$ and $n$ is the highest frequency treated.
Algorithm
1. Compute the Fourier transform (bj,k) of g.
2. Compute the Fourier transform (aj,k) of f via the formula (*).
3. Compute f by taking an inverse Fourier transform of (aj,k).
Since we're only interested in a finite window of frequencies (of size n, say) this can be done using a fast Fourier transform algorithm. Therefore, globally the algorithm runs in time O(n log n).
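The three-step algorithm can be sketched in a few lines of code. The following (NumPy assumed; the grid resolution and manufactured test problem are chosen only for illustration) solves the periodic Poisson problem via the formula (*), setting the free $a_{0,0}$ coefficient to zero:

```python
import numpy as np

def solve_poisson_periodic(g):
    """Solve f_xx + f_yy = g on [0, 2*pi)^2 with periodic boundary conditions.

    g is sampled on an n-by-n uniform grid and must have zero mean; the
    returned f has its a_{0,0} coefficient (the mean) set to zero.
    """
    n = g.shape[0]
    b = np.fft.fft2(g)                       # Fourier coefficients b_{j,k}
    j = np.fft.fftfreq(n, d=1.0 / n)         # integer wavenumbers
    jj, kk = np.meshgrid(j, j, indexing="ij")
    denom = jj**2 + kk**2
    denom[0, 0] = 1.0                        # placeholder; a_{0,0} is set below
    a = -b / denom                           # formula (*)
    a[0, 0] = 0.0                            # free integration constant
    return np.fft.ifft2(a).real

# Manufactured solution: f = sin(x) cos(2y), so g = f_xx + f_yy = -5 f
n = 64
x = 2 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
f_true = np.sin(X) * np.cos(2 * Y)
f = solve_poisson_periodic(-5.0 * f_true)
```

Because the manufactured solution consists of exactly representable Fourier modes, the recovery is accurate to rounding error; for general smooth g the error decays faster than any fixed power of the grid spacing.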
Nonlinear example
We wish to solve the forced, transient, nonlinear Burgers' equation using a spectral approach.
Given $u(x,0)$ on the periodic domain $x\in \left[0,2\pi \right)$, find $u\in {\mathcal {U}}$ such that
$\partial _{t}u+u\partial _{x}u=\rho \partial _{xx}u+f\quad \forall x\in \left[0,2\pi \right),\forall t>0$
where ρ is the viscosity coefficient. In weak conservative form this becomes
$\left\langle \partial _{t}u,v\right\rangle =\left\langle \partial _{x}\left(-{\frac {1}{2}}u^{2}+\rho \partial _{x}u\right),v\right\rangle +\left\langle f,v\right\rangle \quad \forall v\in {\mathcal {V}},\forall t>0$
where $\langle f,g\rangle :=\int _{0}^{2\pi }f(x){\overline {g(x)}}\,dx$ denotes the inner product on the periodic domain. Integrating by parts and using periodicity gives
$\langle \partial _{t}u,v\rangle =\left\langle {\frac {1}{2}}u^{2}-\rho \partial _{x}u,\partial _{x}v\right\rangle +\left\langle f,v\right\rangle \quad \forall v\in {\mathcal {V}},\forall t>0.$
To apply the Fourier–Galerkin method, choose both
${\mathcal {U}}^{N}:=\left\{u:u(x,t)=\sum _{k=-N/2}^{N/2-1}{\hat {u}}_{k}(t)e^{ikx}\right\}$
and
${\mathcal {V}}^{N}:=\operatorname {span} \left\{e^{ikx}:k\in -N/2,\dots ,N/2-1\right\}$
where ${\hat {u}}_{k}(t):={\frac {1}{2\pi }}\langle u(x,t),e^{ikx}\rangle $. This reduces the problem to finding $u\in {\mathcal {U}}^{N}$ such that
$\langle \partial _{t}u,e^{ikx}\rangle =\left\langle {\frac {1}{2}}u^{2}-\rho \partial _{x}u,\partial _{x}e^{ikx}\right\rangle +\left\langle f,e^{ikx}\right\rangle \quad \forall k\in \left\{-N/2,\dots ,N/2-1\right\},\forall t>0.$
Using the orthogonality relation $\langle e^{ilx},e^{ikx}\rangle =2\pi \delta _{lk}$ where $\delta _{lk}$ is the Kronecker delta, we simplify the above three terms for each $k$ to see
${\begin{aligned}\left\langle \partial _{t}u,e^{ikx}\right\rangle &=\left\langle \partial _{t}\sum _{l}{\hat {u}}_{l}e^{ilx},e^{ikx}\right\rangle =\left\langle \sum _{l}\partial _{t}{\hat {u}}_{l}e^{ilx},e^{ikx}\right\rangle =2\pi \partial _{t}{\hat {u}}_{k},\\\left\langle f,e^{ikx}\right\rangle &=\left\langle \sum _{l}{\hat {f}}_{l}e^{ilx},e^{ikx}\right\rangle =2\pi {\hat {f}}_{k},{\text{ and}}\\\left\langle {\frac {1}{2}}u^{2}-\rho \partial _{x}u,\partial _{x}e^{ikx}\right\rangle &=\left\langle {\frac {1}{2}}\left(\sum _{p}{\hat {u}}_{p}e^{ipx}\right)\left(\sum _{q}{\hat {u}}_{q}e^{iqx}\right)-\rho \partial _{x}\sum _{l}{\hat {u}}_{l}e^{ilx},\partial _{x}e^{ikx}\right\rangle \\&=\left\langle {\frac {1}{2}}\sum _{p}\sum _{q}{\hat {u}}_{p}{\hat {u}}_{q}e^{i\left(p+q\right)x},ike^{ikx}\right\rangle -\left\langle \rho i\sum _{l}l{\hat {u}}_{l}e^{ilx},ike^{ikx}\right\rangle \\&=-{\frac {ik}{2}}\left\langle \sum _{p}\sum _{q}{\hat {u}}_{p}{\hat {u}}_{q}e^{i\left(p+q\right)x},e^{ikx}\right\rangle -\rho k\left\langle \sum _{l}l{\hat {u}}_{l}e^{ilx},e^{ikx}\right\rangle \\&=-i\pi k\sum _{p+q=k}{\hat {u}}_{p}{\hat {u}}_{q}-2\pi \rho {}k^{2}{\hat {u}}_{k}.\end{aligned}}$
Assemble the three terms for each $k$ to obtain
$2\pi \partial _{t}{\hat {u}}_{k}=-i\pi k\sum _{p+q=k}{\hat {u}}_{p}{\hat {u}}_{q}-2\pi \rho {}k^{2}{\hat {u}}_{k}+2\pi {\hat {f}}_{k}\quad k\in \left\{-N/2,\dots ,N/2-1\right\},\forall t>0.$
Dividing through by $2\pi $, we finally arrive at
$\partial _{t}{\hat {u}}_{k}=-{\frac {ik}{2}}\sum _{p+q=k}{\hat {u}}_{p}{\hat {u}}_{q}-\rho {}k^{2}{\hat {u}}_{k}+{\hat {f}}_{k}\quad k\in \left\{-N/2,\dots ,N/2-1\right\},\forall t>0.$
With Fourier transformed initial conditions ${\hat {u}}_{k}(0)$ and forcing ${\hat {f}}_{k}(t)$, this coupled system of ordinary differential equations may be integrated in time (using, e.g., a Runge–Kutta technique) to find a solution. The nonlinear term is a convolution, and there are several transform-based techniques for evaluating it efficiently. See the references by Boyd and Canuto et al. for more details.
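A minimal time integrator for this system might look as follows. This is a sketch only: NumPy is assumed, the convolution sum over $p+q=k$ is evaluated pseudospectrally (transform to physical space, square, transform back) without de-aliasing, and the unforced case $f=0$ with an arbitrary grid size, viscosity, and time step is used for brevity:

```python
import numpy as np

def burgers_rhs(u_hat, k, rho, f_hat):
    """Right-hand side of the Fourier-Galerkin ODE system for Burgers' equation.

    The convolution term is evaluated pseudospectrally: transform to
    physical space, square, transform back (no de-aliasing, for brevity).
    """
    u = np.fft.ifft(u_hat)
    u2_hat = np.fft.fft(u * u)
    return -0.5j * k * u2_hat - rho * k**2 * u_hat + f_hat

def rk4_step(u_hat, k, rho, f_hat, dt):
    """One classical fourth-order Runge-Kutta step."""
    s1 = burgers_rhs(u_hat, k, rho, f_hat)
    s2 = burgers_rhs(u_hat + 0.5 * dt * s1, k, rho, f_hat)
    s3 = burgers_rhs(u_hat + 0.5 * dt * s2, k, rho, f_hat)
    s4 = burgers_rhs(u_hat + dt * s3, k, rho, f_hat)
    return u_hat + dt / 6.0 * (s1 + 2 * s2 + 2 * s3 + s4)

# Unforced viscous Burgers with u(x, 0) = sin(x): energy should decay
N, rho, dt = 64, 0.1, 1e-3
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
u_hat = np.fft.fft(np.sin(x))
f_hat = np.zeros(N, dtype=complex)
e0 = np.sum(np.abs(np.fft.ifft(u_hat).real)**2)
for _ in range(200):
    u_hat = rk4_step(u_hat, k, rho, f_hat, dt)
u_final = np.fft.ifft(u_hat).real
```

With viscosity and no forcing, the discrete energy of the solution decreases in time, which gives a quick sanity check on the implementation.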
A relationship with the spectral element method
One can show that if $g$ is infinitely differentiable, then the numerical algorithm using fast Fourier transforms will converge faster than any polynomial in the grid size $h$. That is, for any $n>0$, there is a $C_{n}<\infty $ such that the error is less than $C_{n}h^{n}$ for all sufficiently small values of $h$. We say that the spectral method is of order $n$, for every $n>0$.
Because a spectral element method is a finite element method of very high order, there is a similarity in the convergence properties. However, whereas the spectral method is based on the eigendecomposition of the particular boundary value problem, the finite element method does not use that information and works for arbitrary elliptic boundary value problems.
See also
• Finite element method
• Gaussian grid
• Pseudo-spectral method
• Spectral element method
• Galerkin method
• Collocation method
References
1. pp 235, Spectral Methods: evolution to complex geometries and applications to fluid dynamics, By Canuto, Hussaini, Quarteroni and Zang, Springer, 2007.
• Bengt Fornberg (1996) A Practical Guide to Pseudospectral Methods. Cambridge University Press, Cambridge, UK
• Chebyshev and Fourier Spectral Methods by John P. Boyd.
• Canuto C., Hussaini M. Y., Quarteroni A., and Zang T.A. (2006) Spectral Methods. Fundamentals in Single Domains. Springer-Verlag, Berlin Heidelberg
• Javier de Frutos, Julia Novo (2000): A Spectral Element Method for the Navier–Stokes Equations with Improved Accuracy
• Polynomial Approximation of Differential Equations, by Daniele Funaro, Lecture Notes in Physics, Volume 8, Springer-Verlag, Heidelberg 1992
• D. Gottlieb and S. Orszag (1977) "Numerical Analysis of Spectral Methods: Theory and Applications", SIAM, Philadelphia, PA
• J. Hesthaven, S. Gottlieb and D. Gottlieb (2007) "Spectral methods for time-dependent problems", Cambridge UP, Cambridge, UK
• Steven A. Orszag (1969) Numerical Methods for the Simulation of Turbulence, Phys. Fluids Supp. II, 12, 250–257
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 20.7. Spectral Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
• Jie Shen, Tao Tang and Li-Lian Wang (2011) "Spectral Methods: Algorithms, Analysis and Applications" (Springer Series in Computational Mathematics, V. 41, Springer), ISBN 354071040X
• Lloyd N. Trefethen (2000) Spectral Methods in MATLAB. SIAM, Philadelphia, PA
Spectral network
In mathematics and supersymmetric gauge theory, spectral networks are "networks of trajectories on Riemann surfaces obeying certain local rules. Spectral networks arise naturally in four-dimensional N = 2 theories coupled to surface defects, particularly the theories of class S."[1][2]
References
1. Gaiotto, Davide; Moore, Gregory W.; Neitzke, Andrew (2012). "Spectral networks". Annales Henri Poincaré. 14 (7): 1643. arXiv:1204.4824. Bibcode:2013AnHP...14.1643G. doi:10.1007/s00023-013-0239-7. S2CID 118723363.
2. http://hep.caltech.edu/ym35/presentations/Moore.pdf
Spectral density
The power spectrum $S_{xx}(f)$ of a time series $x(t)$ describes the distribution of power into frequency components composing that signal.[1] According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of a certain signal or sort of signal (including noise) as analyzed in terms of its frequency content, is called its spectrum.
When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The power spectral density (PSD) then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating $x^{2}(t)$ over the time domain, as dictated by Parseval's theorem.[1]
The spectrum of a physical process $x(t)$ often contains essential information about the nature of $x$. For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field $E(t)$ as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived through its effect on the auditory receptors of the inner ear, each of which is sensitive to a particular frequency.
However this article concentrates on situations in which the time series is known (at least in a statistical sense) or directly measured (such as by a microphone sampled by a computer). The power spectrum is important in statistical signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics and engineering. Typically the process is a function of time, but one can similarly discuss data in the spatial domain being decomposed in terms of spatial frequency.[1]
Units
See also: Fourier transform § Units
In physics, the signal might be a wave, such as an electromagnetic wave, an acoustic wave, or the vibration of a mechanism. The power spectral density (PSD) of the signal describes the power present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in watts per hertz (W/Hz).[2]
When a signal is defined in terms only of a voltage, for instance, there is no unique power associated with the stated amplitude. In this case "power" is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance. So one might use units of V2 Hz−1 for the PSD. Energy spectral density (ESD) would have units of V2 s Hz−1, since energy has units of power multiplied by time (e.g., watt-hour).[3]
In the general case, the units of PSD will be the ratio of units of variance per unit of frequency; so, for example, a series of displacement values (in meters) over time (in seconds) will have PSD in units of meters squared per hertz, m2/Hz. In the analysis of random vibrations, units of g2 Hz−1 are frequently used for the PSD of acceleration, where g denotes the g-force.[4]
Mathematically, it is not necessary to assign physical dimensions to the signal or to the independent variable. In the following discussion the meaning of x(t) will remain unspecified, but the independent variable will be assumed to be that of time.
Definition
Energy spectral density
Energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term energy is used in the generalized sense of signal processing;[5] that is, the energy $E$ of a signal $x(t)$ is:
$E\triangleq \int _{-\infty }^{\infty }\left|x(t)\right|^{2}\ dt.$
The energy spectral density is most suitable for transients—that is, pulse-like signals—having a finite total energy. Finite or not, Parseval's theorem[6] (or Plancherel's theorem) gives us an alternate expression for the energy of the signal:
$\int _{-\infty }^{\infty }|x(t)|^{2}\,dt=\int _{-\infty }^{\infty }\left|{\hat {x}}(f)\right|^{2}\ df,$
where:
${\hat {x}}(f)\triangleq \int _{-\infty }^{\infty }e^{-i2\pi ft}x(t)\ dt$
is the value of the Fourier transform of $x(t)$ at frequency $f$ (in Hz). The theorem also holds true in the discrete-time case. Since the integral on the left-hand side is the energy of the signal, the value of $\left|{\hat {x}}(f)\right|^{2}df$ can be interpreted as a density function multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal in the frequency interval $[f,f+df]$.
Therefore, the energy spectral density of $x(t)$ is defined as:[6]
${\bar {S}}_{xx}(f)\triangleq \left|{\hat {x}}(f)\right|^{2}$
(Eq.1)
The function ${\bar {S}}_{xx}(f)$ and the autocorrelation of $x(t)$ form a Fourier transform pair, a result also known as the Wiener–Khinchin theorem (see also Periodogram).
As a physical example of how one might measure the energy spectral density of a signal, suppose $V(t)$ represents the potential (in volts) of an electrical pulse propagating along a transmission line of impedance $Z$, and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By Ohm's law, the power delivered to the resistor at time $t$ is equal to $V(t)^{2}/Z$, so the total energy is found by integrating $V(t)^{2}/Z$ with respect to time over the duration of the pulse. To find the value of the energy spectral density ${\bar {S}}_{xx}(f)$ at frequency $f$, one could insert between the transmission line and the resistor a bandpass filter which passes only a narrow range of frequencies ($\Delta f$, say) near the frequency of interest and then measure the total energy $E(f)$ dissipated across the resistor. The value of the energy spectral density at $f$ is then estimated to be $E(f)/\Delta f$. In this example, since the power $V(t)^{2}/Z$ has units of V2 Ω−1, the energy $E(f)$ has units of V2 s Ω−1 = J, and hence the estimate $E(f)/\Delta f$ of the energy spectral density has units of J Hz−1, as required. In many situations, it is common to forget the step of dividing by $Z$ so that the energy spectral density instead has units of V2 Hz−2.
This definition generalizes in a straightforward manner to a discrete signal with a countably infinite number of values $x_{n}$ such as a signal sampled at discrete times $t_{n}=t_{0}+(n\,\Delta t)$:
${\bar {S}}_{xx}(f)=\lim _{N\to \infty }(\Delta t)^{2}\underbrace {\left|\sum _{n=-N}^{N}x_{n}e^{-i2\pi fn\,\Delta t}\right|^{2}} _{\left|{\hat {x}}_{d}(f)\right|^{2}},$
where ${\hat {x}}_{d}(f)$ is the discrete-time Fourier transform of $x_{n}.$ The sampling interval $\Delta t$ is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit $\Delta t\to 0.$ But in the mathematical sciences the interval is often set to 1, which simplifies the results at the expense of generality. (also see normalized frequency)
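The discrete definition can be checked against Parseval's theorem in a short sketch (NumPy assumed; the Gaussian pulse and sampling interval are chosen only for illustration). Scaling the DFT by $\Delta t$ approximates the continuous-time Fourier transform, and integrating the resulting energy spectral density over frequency recovers the time-domain energy:

```python
import numpy as np

# A sampled finite-energy (pulse-like) signal
dt = 0.01                                  # sampling interval, seconds
t = np.arange(-5.0, 5.0, dt)
x = np.exp(-t**2)

# Time-domain energy: integral of |x(t)|^2 dt
energy_time = np.sum(np.abs(x)**2) * dt

# DFT scaled by dt approximates the continuous Fourier transform x_hat(f)
x_hat = np.fft.fft(x) * dt
f = np.fft.fftfreq(len(x), d=dt)           # frequencies in Hz
esd = np.abs(x_hat)**2                     # energy spectral density

# Frequency-domain energy: integral of the ESD over frequency
energy_freq = np.sum(esd) * (f[1] - f[0])
```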
Power spectral density
The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the power spectral density (PSD) which exists for stationary processes; this describes how the power of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power, or more often, for convenience with abstract signals, is simply identified with the squared value of the signal. For example, statisticians study the variance of a function over time $x(t)$ (or over another independent variable), and using an analogy with electrical signals (among other physical processes), it is customary to refer to it as the power spectrum even when there is no physical power involved. If one were to create a physical voltage source which followed $x(t)$ and applied it to the terminals of a one ohm resistor, then indeed the instantaneous power dissipated in that resistor would be given by $x(t)^{2}$ watts.
The average power $P$ of a signal $x(t)$ over all time is therefore given by the following time average, where the period $T$ is centered about some arbitrary time $t=t_{0}$:
$P=\lim _{T\to \infty }{\frac {1}{T}}\int _{t_{0}-T/2}^{t_{0}+T/2}|x(t)|^{2}\,dt$
However, for the sake of dealing with the math that follows, it is more convenient to deal with time limits in the signal itself rather than time limits in the bounds of the integral. As such, we have an alternative representation of the average power, where $x_{T}(t)=x(t)w_{T}(t)$ and $w_{T}(t)$ is unity within the arbitrary period and zero elsewhere.
$P=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }|x_{T}(t)|^{2}\,dt$
Clearly in cases where the above expression for P is non-zero (even as T grows without bound) the integral itself must also grow without bound. That is the reason that we cannot use the energy spectral density itself, which is that diverging integral, in such cases.
In analyzing the frequency content of the signal $x(t)$, one might like to compute the ordinary Fourier transform ${\hat {x}}(f)$; however, for many signals of interest the Fourier transform does not formally exist.[N 1] Regardless, Parseval's theorem tells us that we can re-write the average power as follows.
$P=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }|{\hat {x}}_{T}(f)|^{2}\,df$
Then the power spectral density is simply defined as the integrand above.[8][9]
$S_{xx}(f)=\lim _{T\to \infty }{\frac {1}{T}}|{\hat {x}}_{T}(f)|^{2}\,$
(Eq.2)
From here, we can also view $|{\hat {x}}_{T}(f)|^{2}$ as the Fourier transform of the time convolution of $x_{T}^{*}(-t)$ and $x_{T}(t)$
$\left|{\hat {x}}_{T}(f)\right|^{2}={\mathcal {F}}\left\{x_{T}^{*}(-t)\mathbin {\mathbf {*} } x_{T}(t)\right\}=\int _{-\infty }^{\infty }\left[\int _{-\infty }^{\infty }x_{T}^{*}(t-\tau )x_{T}(t)dt\right]e^{-i2\pi f\tau }\ d\tau $
Now, if we divide the time convolution above by the period $T$ and take the limit as $T\rightarrow \infty $, it becomes the autocorrelation function of the non-windowed signal $x(t)$, which is denoted as $R_{xx}(\tau )$, provided that $x(t)$ is ergodic, which is true in most, but not all, practical cases.[10]
$\lim _{T\to \infty }{\frac {1}{T}}\left|{\hat {x}}_{T}(f)\right|^{2}=\int _{-\infty }^{\infty }\left[\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }x_{T}^{*}(t-\tau )x_{T}(t)dt\right]e^{-i2\pi f\tau }\ d\tau =\int _{-\infty }^{\infty }R_{xx}(\tau )e^{-i2\pi f\tau }d\tau $
From here we see, again assuming the ergodicity of $x(t)$, that the power spectral density can be found as the Fourier transform of the autocorrelation function (Wiener–Khinchin theorem).
$S_{xx}(f)=\int _{-\infty }^{\infty }R_{xx}(\tau )e^{-i2\pi f\tau }\,d\tau ={\hat {R}}_{xx}(f)$
(Eq.3)
Many authors use this equality to actually define the power spectral density.[11]
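For finitely many samples, the discrete analogue of this relation is exact in its circular form: the squared magnitude of the DFT equals the DFT of the circular autocorrelation. A minimal NumPy sketch (the signal is arbitrary noise; this is a simplification of the continuous-time limit above):

```python
import numpy as np

# Discrete, circular form of the Wiener–Khinchin relation: the periodogram
# |x_hat|^2 / N equals the DFT of the circular autocorrelation R[tau].
rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# Periodogram: |x_hat(f)|^2 / N  (discrete analogue of |x_hat_T(f)|^2 / T)
periodogram = np.abs(np.fft.fft(x)) ** 2 / len(x)

# Circular autocorrelation R[tau] = (1/N) * sum_s x[s] x[s + tau mod N]
R = np.array([np.dot(x, np.roll(x, -tau)) for tau in range(len(x))]) / len(x)

# Fourier transform of the autocorrelation (real since R is even)
S_from_R = np.fft.fft(R).real

assert np.allclose(periodogram, S_from_R)
```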
The power of the signal in a given frequency band $[f_{1},f_{2}]$, where $0<f_{1}<f_{2}$, can be calculated by integrating over frequency. Since $S_{xx}(-f)=S_{xx}(f)$, an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used):
$P_{\textsf {bandlimited}}=2\int _{f_{1}}^{f_{2}}S_{xx}(f)\,df$
More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval $T$ is finite rather than approaching infinity. This results in decreased spectral coverage and resolution since frequencies of less than $1/T$ are not sampled, and results at frequencies which are not an integer multiple of $1/T$ are not independent. Just using a single such time series, the estimated power spectrum will be very "noisy"; however this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to statistical ensembles of realizations of $x(t)$ evaluated over the specified time window.
Just as with the energy spectral density, the definition of the power spectral density can be generalized to discrete time variables $x_{n}$. As before, we can consider a window of $-N\leq n\leq N$ with the signal sampled at discrete times $t_{n}=t_{0}+n\,\Delta t$ for a total measurement period $T=(2N+1)\,\Delta t$.
$S_{xx}(f)=\lim _{N\to \infty }{\frac {(\Delta t)^{2}}{T}}\left|\sum _{n=-N}^{N}x_{n}e^{-i2\pi fn\,\Delta t}\right|^{2}$
Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when $N$ (and thus $T$) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a periodogram. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval $T$ approach infinity (Brown & Hwang).[12]
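The noise-reduction effect of averaging can be illustrated with white noise, whose true PSD is flat. In the sketch below (record length and trial count are arbitrary), averaging many independent periodograms visibly reduces the scatter of the estimate:

```python
import numpy as np

# Sketch of periodogram averaging: a single periodogram of white noise is
# very "noisy", while the mean over many independent records is smooth.
rng = np.random.default_rng(1)
N = 128

def periodogram(x):
    # |x_hat(f)|^2 / N, the discrete analogue of |x_hat_T(f)|^2 / T
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

single = periodogram(rng.standard_normal(N))
averaged = np.mean(
    [periodogram(rng.standard_normal(N)) for _ in range(500)], axis=0)

assert averaged.std() < single.std()   # averaging smooths the estimate
```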
If two signals both possess power spectral densities, then the cross-spectral density can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the cross-correlation.
Properties of the power spectral density
Some properties of the PSD include:[13]
• The power spectrum is always real and non-negative, and the spectrum of a real valued process is also an even function of frequency: $S_{xx}(-f)=S_{xx}(f)$.
• For a continuous stochastic process x(t), the autocorrelation function Rxx(t) can be reconstructed from its power spectrum Sxx(f) by using the inverse Fourier transform
• Using Parseval's theorem, one can compute the variance (average power) of a process by integrating the power spectrum over all frequency:
$P=\operatorname {Var} (x)=\int _{-\infty }^{\infty }\!S_{xx}(f)\,df$
• For a real process x(t) with power spectral density $S_{xx}(f)$, one can compute the integrated spectrum or power spectral distribution $F(f)$, which specifies the average bandlimited power contained in frequencies from DC to f using:[14]
$F(f)=2\int _{0}^{f}S_{xx}(f')\,df'.$
Note that the previous expression for total power (signal variance) is a special case where $f\to \infty $.
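For a finite sample, the variance identity in the list above reduces to the discrete Parseval theorem, which can be checked directly (the signal here is arbitrary noise):

```python
import numpy as np

# Discrete Parseval check: the periodogram summed over frequency bins
# equals the average power computed in the time domain.
rng = np.random.default_rng(2)
x = rng.standard_normal(1000)

P = np.abs(np.fft.fft(x)) ** 2 / len(x)   # periodogram
power_freq = P.sum() / len(x)             # "integral" over frequency
power_time = np.mean(x ** 2)              # average power in time

assert np.allclose(power_freq, power_time)
```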
Cross power spectral density
Given two signals $x(t)$ and $y(t)$, each of which possess power spectral densities $S_{xx}(f)$ and $S_{yy}(f)$, it is possible to define a cross power spectral density (CPSD) or cross spectral density (CSD). To begin, let us consider the average power of such a combined signal.
${\begin{aligned}P&=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }\left[x_{T}(t)+y_{T}(t)\right]^{*}\left[x_{T}(t)+y_{T}(t)\right]dt\\&=\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }|x_{T}(t)|^{2}+x_{T}^{*}(t)y_{T}(t)+y_{T}^{*}(t)x_{T}(t)+|y_{T}(t)|^{2}dt\\\end{aligned}}$
Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain
${\begin{aligned}S_{xy}(f)&=\lim _{T\to \infty }{\frac {1}{T}}\left[{\hat {x}}_{T}^{*}(f){\hat {y}}_{T}(f)\right]&S_{yx}(f)&=\lim _{T\to \infty }{\frac {1}{T}}\left[{\hat {y}}_{T}^{*}(f){\hat {x}}_{T}(f)\right]\end{aligned}}$
where, again, the contributions of $S_{xx}(f)$ and $S_{yy}(f)$ are already understood. Note that $S_{xy}^{*}(f)=S_{yx}(f)$, so the full contribution to the cross power is, generally, from twice the real part of either individual CPSD. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit $T\to \infty $ becomes the Fourier transform of a cross-correlation function.[15]
${\begin{aligned}S_{xy}(f)&=\int _{-\infty }^{\infty }\left[\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }x_{T}^{*}(t-\tau )y_{T}(t)dt\right]e^{-i2\pi f\tau }d\tau =\int _{-\infty }^{\infty }R_{xy}(\tau )e^{-i2\pi f\tau }d\tau \\S_{yx}(f)&=\int _{-\infty }^{\infty }\left[\lim _{T\to \infty }{\frac {1}{T}}\int _{-\infty }^{\infty }y_{T}^{*}(t-\tau )x_{T}(t)dt\right]e^{-i2\pi f\tau }d\tau =\int _{-\infty }^{\infty }R_{yx}(\tau )e^{-i2\pi f\tau }d\tau \end{aligned}}$
where $R_{xy}(\tau )$ is the cross-correlation of $x(t)$ with $y(t)$ and $R_{yx}(\tau )$ is the cross-correlation of $y(t)$ with $x(t)$. In light of this, the PSD is seen to be a special case of the CSD for $x(t)=y(t)$. If $x(t)$ and $y(t)$ are real signals (e.g. voltage or current), their Fourier transforms ${\hat {x}}(f)$ and ${\hat {y}}(f)$ are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full CPSD is just one of the CPSDs scaled by a factor of two.
$\operatorname {CPSD} _{\text{Full}}=2S_{xy}(f)=2S_{yx}(f)$
For discrete signals xn and yn, the relationship between the cross-spectral density and the cross-correlation is
$S_{xy}(f)=\sum _{n=-\infty }^{\infty }R_{xy}(\tau _{n})e^{-i2\pi f\tau _{n}}\,\Delta \tau $
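As with the PSD, the discrete relationship is exact in its circular form: the cross-spectrum $\operatorname{conj}({\hat {x}})\,{\hat {y}}$ is the DFT of the circular cross-correlation. A NumPy sketch (signals and length are arbitrary):

```python
import numpy as np

# Circular analogue of the CSD/cross-correlation pair:
# conj(x_hat) * y_hat / N  equals the DFT of R_xy[tau].
rng = np.random.default_rng(3)
N = 64
x, y = rng.standard_normal(N), rng.standard_normal(N)

S_xy = np.conj(np.fft.fft(x)) * np.fft.fft(y) / N
R_xy = np.array([np.dot(x, np.roll(y, -tau)) for tau in range(N)]) / N

assert np.allclose(S_xy, np.fft.fft(R_xy))
```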
Estimation
The goal of spectral density estimation is to estimate the spectral density of a random signal from a sequence of time samples. Depending on what is known about the signal, estimation techniques can involve parametric or non-parametric approaches, and may be based on time-domain or frequency-domain analysis. For example, a common parametric technique involves fitting the observations to an autoregressive model. A common non-parametric technique is the periodogram.
The spectral density is usually estimated using Fourier transform methods (such as the Welch method), but other techniques such as the maximum entropy method can also be used.
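Welch's method can be sketched in a few lines: split the record into overlapping windowed segments, compute a periodogram for each, and average. The implementation below is a minimal illustration only; the function name, the 50% overlap, and the Hann window are choices made here, not part of any standard API:

```python
import numpy as np

# Minimal Welch-style PSD estimate in plain NumPy.
def welch_psd(x, nperseg=256, fs=1.0):
    step = nperseg // 2                        # 50 % overlap
    w = np.hanning(nperseg)                    # Hann window
    scale = fs * (w ** 2).sum()                # density normalization
    segs = [x[i:i + nperseg]
            for i in range(0, len(x) - nperseg + 1, step)]
    pxx = np.mean([np.abs(np.fft.rfft(w * s)) ** 2 / scale for s in segs],
                  axis=0)
    return np.fft.rfftfreq(nperseg, d=1 / fs), pxx

# A 100 Hz tone in white noise, sampled at 1 kHz (illustrative values).
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)
x = np.sin(2 * np.pi * 100 * t) + 0.5 * rng.standard_normal(t.size)

f, Pxx = welch_psd(x, nperseg=1024, fs=fs)
assert abs(f[np.argmax(Pxx)] - 100.0) < 2.0   # spectral peak at the tone
```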
Related concepts
• The spectral centroid of a signal is the weighted mean of the frequencies present in its spectral density function, i.e. the center of mass of the distribution.
• The spectral edge frequency (SEF), usually expressed as "SEF x", represents the frequency below which x percent of the total power of a given signal are located; typically, x is in the range 75 to 95. It is more particularly a popular measure used in EEG monitoring, in which case SEF has variously been used to estimate the depth of anesthesia and stages of sleep.[16][17][18]
• A spectral envelope is the envelope curve of the spectrum density. It describes one point in time (one window, to be precise). For example, in remote sensing using a spectrometer, the spectral envelope of a feature is the boundary of its spectral properties, as defined by the range of brightness levels in each of the spectral bands of interest.[19]
• The spectral density is a function of frequency, not a function of time. However, the spectral density of a small window of a longer signal may be calculated, and plotted versus time associated with the window. Such a graph is called a spectrogram. This is the basis of a number of spectral analysis techniques such as the short-time Fourier transform and wavelets.
• A "spectrum" generally means the power spectral density, as discussed above, which depicts the distribution of signal content over frequency. For transfer functions (e.g., Bode plot, chirp) the complete frequency response may be graphed in two parts: power versus frequency and phase versus frequency—the phase spectral density, phase spectrum, or spectral phase. Less commonly, the two parts may be the real and imaginary parts of the transfer function. This is not to be confused with the frequency response of a transfer function, which also includes a phase (or equivalently, a real and imaginary part) as a function of frequency. The time-domain impulse response $h(t)$ cannot generally be uniquely recovered from the power spectral density alone without the phase part. Although these are also Fourier transform pairs, there is no symmetry (as there is for the autocorrelation) forcing the Fourier transform to be real-valued. See Ultrashort pulse#Spectral phase, phase noise, group delay.
• Sometimes one encounters an amplitude spectral density (ASD), which is the square root of the PSD; the ASD of a voltage signal has units of V Hz−1/2.[20] This is useful when the shape of the spectrum is rather constant, since variations in the ASD will then be proportional to variations in the signal's voltage level itself. But it is mathematically preferred to use the PSD, since only in that case is the area under the curve meaningful in terms of actual power over all frequency or over a specified bandwidth.
Applications
Any signal that can be represented as a variable that varies in time has a corresponding frequency spectrum. This includes familiar entities such as visible light (perceived as color), musical notes (perceived as pitch), radio/TV (specified by their frequency, or sometimes wavelength) and even the regular rotation of the earth. When these signals are viewed in the form of a frequency spectrum, certain aspects of the received signals or the underlying processes producing them are revealed. In some cases the frequency spectrum may include a distinct peak corresponding to a sine wave component. And additionally there may be peaks corresponding to harmonics of a fundamental peak, indicating a periodic signal which is not simply sinusoidal. Or a continuous spectrum may show narrow frequency intervals which are strongly enhanced corresponding to resonances, or frequency intervals containing almost zero power as would be produced by a notch filter.
Electrical engineering
The concept and use of the power spectrum of a signal is fundamental in electrical engineering, especially in electronic communication systems, including radio communications, radars, and related systems, plus passive remote sensing technology. Electronic instruments called spectrum analyzers are used to observe and measure the power spectra of signals.
The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density.
Cosmology
Primordial fluctuations, density variations in the early universe, are quantified by a power spectrum which gives the power of the variations as a function of spatial scale.
Climate science
Power spectral analysis has been used to examine spatial structures in climate research.[21] These results suggest that atmospheric turbulence links climate change to increased local and regional volatility in weather conditions.[22]
See also
• Bispectrum
• Brightness temperature
• Colors of noise
• Least-squares spectral analysis
• Noise spectral density
• Spectral density estimation
• Spectral efficiency
• Spectral leakage
• Spectral power distribution
• Whittle likelihood
• Window function
Notes
1. Some authors (e.g. Risken[7]) still use the non-normalized Fourier transform in a formal way to formulate a definition of the power spectral density
$\langle {\hat {x}}(\omega ){\hat {x}}^{\ast }(\omega ')\rangle =2\pi f(\omega )\delta (\omega -\omega '),$
where $\delta (\omega -\omega ')$ is the Dirac delta function. Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.
References
1. P Stoica & R Moses (2005). "Spectral Analysis of Signals" (PDF).
2. Gérard Maral (2003). VSAT Networks. John Wiley and Sons. ISBN 978-0-470-86684-9.
3. Michael Peter Norton & Denis G. Karczub (2003). Fundamentals of Noise and Vibration Analysis for Engineers. Cambridge University Press. ISBN 978-0-521-49913-2.
4. Alessandro Birolini (2007). Reliability Engineering. Springer. p. 83. ISBN 978-3-540-49388-4.
5. Oppenheim; Verghese. Signals, Systems, and Inference. pp. 32–4.
6. Stein, Jonathan Y. (2000). Digital Signal Processing: A Computer Science Perspective. Wiley. p. 115.
7. Hannes Risken (1996). The Fokker–Planck Equation: Methods of Solution and Applications (2nd ed.). Springer. p. 30. ISBN 9783540615309.
8. Fred Rieke; William Bialek & David Warland (1999). Spikes: Exploring the Neural Code (Computational Neuroscience). MIT Press. ISBN 978-0262681087.
9. Scott Millers & Donald Childers (2012). Probability and random processes. Academic Press. pp. 370–5.
10. The Wiener–Khinchin theorem makes sense of this formula for any wide-sense stationary process under weaker hypotheses: $R_{xx}$ does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving distributions (in the sense of Laurent Schwartz, not in the sense of a statistical Cumulative distribution function) instead of functions. If $R_{xx}$ is continuous, Bochner's theorem can be used to prove that its Fourier transform exists as a positive measure, whose distribution function is F (but not necessarily as a function and not necessarily possessing a probability density).
11. Dennis Ward Ricker (2003). Echo Signal Processing. Springer. ISBN 978-1-4020-7395-3.
12. Robert Grover Brown & Patrick Y.C. Hwang (1997). Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons. ISBN 978-0-471-12839-7.
13. Von Storch, H.; Zwiers, F. W. (2001). Statistical analysis in climate research. Cambridge University Press. ISBN 978-0-521-01230-0.
14. An Introduction to the Theory of Random Signals and Noise, Wilbur B. Davenport and Willian L. Root, IEEE Press, New York, 1987, ISBN 0-87942-235-1
15. William D Penny (2009). "Signal Processing Course, chapter 7".
16. Iranmanesh, Saam; Rodriguez-Villegas, Esther (2017). "An Ultralow-Power Sleep Spindle Detection System on Chip". IEEE Transactions on Biomedical Circuits and Systems. 11 (4): 858–866. doi:10.1109/TBCAS.2017.2690908. hdl:10044/1/46059. PMID 28541914. S2CID 206608057.
17. Imtiaz, Syed Anas; Rodriguez-Villegas, Esther (2014). "A Low Computational Cost Algorithm for REM Sleep Detection Using Single Channel EEG". Annals of Biomedical Engineering. 42 (11): 2344–59. doi:10.1007/s10439-014-1085-6. PMC 4204008. PMID 25113231.
18. Drummond JC, Brann CA, Perkins DE, Wolfe DE: "A comparison of median frequency, spectral edge frequency, a frequency band power ratio, total power, and dominance shift in the determination of depth of anesthesia," Acta Anaesthesiol. Scand. 1991 Nov;35(8):693-9.
19. Swartz, Diemo (1998). "Spectral Envelopes".
20. Michael Cerna & Audrey F. Harvey (2000). "The Fundamentals of FFT-Based Signal Analysis and Measurement" (PDF).
21. Communication, N. B. I. (2022-05-23). "Danish astrophysics student discovers link between global warming and locally unstable weather". nbi.ku.dk. Retrieved 2022-07-23.
22. Sneppen, Albert (2022-05-05). "The power spectrum of climate change". The European Physical Journal Plus. 137 (5): 555. arXiv:2205.07908. Bibcode:2022EPJP..137..555S. doi:10.1140/epjp/s13360-022-02773-w. ISSN 2190-5444. S2CID 248652864.
External links
• Power Spectral Density Matlab scripts
Spectral radius
In mathematics, the spectral radius of a square matrix is the maximum of the absolute values of its eigenvalues.[1] More generally, the spectral radius of a bounded linear operator is the supremum of the absolute values of the elements of its spectrum. The spectral radius is often denoted by ρ(·).
Definition
Matrices
Let λ1, ..., λn be the eigenvalues of a matrix A ∈ Cn×n. The spectral radius of A is defined as
$\rho (A)=\max \left\{|\lambda _{1}|,\dotsc ,|\lambda _{n}|\right\}.$
The spectral radius can be thought of as an infimum of all norms of a matrix. Indeed, on the one hand, $\rho (A)\leqslant \|A\|$ for every natural matrix norm $\|\cdot \|$; and on the other hand, Gelfand's formula states that $\rho (A)=\lim _{k\to \infty }\|A^{k}\|^{1/k}$. Both of these results are shown below.
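The inequality $\rho (A)\leqslant \|A\|$ lends itself to a quick numerical check; the matrix below is an arbitrary example:

```python
import numpy as np

# Numeric check of rho(A) <= ||A|| for the induced 1-, 2- and inf-norms.
A = np.array([[0.0, 2.0], [0.5, 1.0]])
rho = max(abs(np.linalg.eigvals(A)))       # (1 + sqrt(5))/2 here

for ord_ in (1, 2, np.inf):
    assert rho <= np.linalg.norm(A, ord_) + 1e-12
```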
However, the spectral radius does not necessarily satisfy $\|A\mathbf {v} \|\leqslant \rho (A)\|\mathbf {v} \|$ for arbitrary vectors $\mathbf {v} \in \mathbb {C} ^{n}$. To see why, let $r>1$ be arbitrary and consider the matrix
$C_{r}={\begin{pmatrix}0&r^{-1}\\r&0\end{pmatrix}}$.
The characteristic polynomial of $C_{r}$ is $\lambda ^{2}-1$, so its eigenvalues are $\{-1,1\}$ and thus $\rho (C_{r})=1$. However, $C_{r}\mathbf {e} _{1}=r\mathbf {e} _{2}$. As a result,
$\|C_{r}\mathbf {e} _{1}\|=r>1=\rho (C_{r})\|\mathbf {e} _{1}\|.$
As an illustration of Gelfand's formula, note that $\|C_{r}^{k}\|^{1/k}\to 1$ as $k\to \infty $, since $C_{r}^{k}=I$ if $k$ is even and $C_{r}^{k}=C_{r}$ if $k$ is odd.
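The $C_{r}$ example can be reproduced numerically (taking $r=2$ as an illustrative value): since $C_{r}^{k}$ alternates between $I$ and $C_{r}$, the root $\|C_{r}^{k}\|^{1/k}$ tends to $\rho (C_{r})=1$:

```python
import numpy as np

# The C_r example with r = 2: C^2 = I, so ||C^k||^(1/k) -> 1 = rho(C).
r = 2.0
C = np.array([[0.0, 1.0 / r], [r, 0.0]])
assert np.allclose(np.linalg.matrix_power(C, 2), np.eye(2))

vals = [np.linalg.norm(np.linalg.matrix_power(C, k), 2) ** (1.0 / k)
        for k in (1, 5, 25, 125)]          # odd k, so the norm is always r

assert np.isclose(vals[0], r)
assert abs(vals[-1] - 1.0) < 0.01          # 2**(1/125) is close to 1
```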
A special case in which $\|A\mathbf {v} \|\leqslant \rho (A)\|\mathbf {v} \|$ for all $\mathbf {v} \in \mathbb {C} ^{n}$ is when $A$ is a Hermitian matrix and $\|\cdot \|$ is the Euclidean norm. This is because any Hermitian Matrix is diagonalizable by a unitary matrix, and unitary matrices preserve vector length. As a result,
$\|A\mathbf {v} \|=\|U^{*}DU\mathbf {v} \|=\|DU\mathbf {v} \|\leqslant \rho (A)\|U\mathbf {v} \|=\rho (A)\|\mathbf {v} \|.$
Bounded linear operators
In the context of a bounded linear operator A on a Banach space, the eigenvalues need to be replaced with the elements of the spectrum of the operator, i.e. the values $\lambda $ for which $A-\lambda I$ is not bijective. We denote the spectrum by
$\sigma (A)=\left\{\lambda \in \mathbb {C} :A-\lambda I\;{\text{is not bijective}}\right\}$
The spectral radius is then defined as the supremum of the magnitudes of the elements of the spectrum:
$\rho (A)=\sup _{\lambda \in \sigma (A)}|\lambda |$
Gelfand's formula, also known as the spectral radius formula, also holds for bounded linear operators: letting $\|\cdot \|$ denote the operator norm, we have
$\rho (A)=\lim _{k\to \infty }\|A^{k}\|^{\frac {1}{k}}=\inf _{k\in \mathbb {N} ^{*}}\|A^{k}\|^{\frac {1}{k}}.$
A bounded operator (on a complex Hilbert space) is called a spectraloid operator if its spectral radius coincides with its numerical radius. An example of such an operator is a normal operator.
Graphs
The spectral radius of a finite graph is defined to be the spectral radius of its adjacency matrix.
This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e. there exists some real number C such that the degree of every vertex of the graph is smaller than C). In this case, for the graph G define:
$\ell ^{2}(G)=\left\{f:V(G)\to \mathbf {R} \ :\ \sum \nolimits _{v\in V(G)}\left|f(v)\right|^{2}<\infty \right\}.$
Let γ be the adjacency operator of G:
${\begin{cases}\gamma :\ell ^{2}(G)\to \ell ^{2}(G)\\(\gamma f)(v)=\sum _{(u,v)\in E(G)}f(u)\end{cases}}$
The spectral radius of G is defined to be the spectral radius of the bounded linear operator γ.
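For a finite graph this reduces to an eigenvalue computation on the adjacency matrix. As a small check, the cycle $C_{n}$ (every vertex has degree 2) has spectral radius exactly 2:

```python
import numpy as np

# Spectral radius of the cycle graph C_n via its adjacency matrix.
n = 6
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1.0

rho = max(abs(np.linalg.eigvals(A)))
assert np.isclose(rho, 2.0)   # eigenvalues are 2*cos(2*pi*k/n)
```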
Upper bounds
Upper bounds on the spectral radius of a matrix
The following proposition gives simple yet useful upper bounds on the spectral radius of a matrix.
Proposition. Let A ∈ Cn×n with spectral radius ρ(A) and a consistent matrix norm ||⋅||. Then for each integer $k\geqslant 1$:
$\rho (A)\leq \|A^{k}\|^{\frac {1}{k}}.$
Proof
Let (v, λ) be an eigenvector-eigenvalue pair for a matrix A. By the sub-multiplicativity of the matrix norm, we get:
$|\lambda |^{k}\|\mathbf {v} \|=\|\lambda ^{k}\mathbf {v} \|=\|A^{k}\mathbf {v} \|\leq \|A^{k}\|\cdot \|\mathbf {v} \|.$
Since v ≠ 0, we have
$|\lambda |^{k}\leq \|A^{k}\|$
and therefore
$\rho (A)\leq \|A^{k}\|^{\frac {1}{k}}.$
concluding the proof.
Upper bounds for spectral radius of a graph
There are many upper bounds for the spectral radius of a graph in terms of its number n of vertices and its number m of edges. For instance, if
${\frac {(k-2)(k-3)}{2}}\leq m-n\leq {\frac {k(k-3)}{2}}$
where $3\leq k\leq n$ is an integer, then[2]
$\rho (G)\leq {\sqrt {2m-n-k+{\frac {5}{2}}+{\sqrt {2m-2n+{\frac {9}{4}}}}}}$
Power sequence
The spectral radius is closely related to the behavior of the convergence of the power sequence of a matrix; namely as shown by the following theorem.
Theorem. Let A ∈ Cn×n with spectral radius ρ(A). Then ρ(A) < 1 if and only if
$\lim _{k\to \infty }A^{k}=0.$
On the other hand, if ρ(A) > 1, $\lim _{k\to \infty }\|A^{k}\|=\infty $. The statement holds for any choice of matrix norm on Cn×n.
Proof
Assume that $A^{k}$ goes to zero as $k$ goes to infinity. We will show that ρ(A) < 1. Let (v, λ) be an eigenvector-eigenvalue pair for A. Since Akv = λkv, we have
${\begin{aligned}0&=\left(\lim _{k\to \infty }A^{k}\right)\mathbf {v} \\&=\lim _{k\to \infty }\left(A^{k}\mathbf {v} \right)\\&=\lim _{k\to \infty }\lambda ^{k}\mathbf {v} \\&=\mathbf {v} \lim _{k\to \infty }\lambda ^{k}\end{aligned}}$
Since v ≠ 0 by hypothesis, we must have
$\lim _{k\to \infty }\lambda ^{k}=0,$
which implies |λ| < 1. Since this must be true for any eigenvalue λ, we can conclude that ρ(A) < 1.
Now, assume the radius of A is less than 1. From the Jordan normal form theorem, we know that for all A ∈ Cn×n, there exist V, J ∈ Cn×n with V non-singular and J block diagonal such that:
$A=VJV^{-1}$
with
$J={\begin{bmatrix}J_{m_{1}}(\lambda _{1})&0&0&\cdots &0\\0&J_{m_{2}}(\lambda _{2})&0&\cdots &0\\\vdots &\cdots &\ddots &\cdots &\vdots \\0&\cdots &0&J_{m_{s-1}}(\lambda _{s-1})&0\\0&\cdots &\cdots &0&J_{m_{s}}(\lambda _{s})\end{bmatrix}}$
where
$J_{m_{i}}(\lambda _{i})={\begin{bmatrix}\lambda _{i}&1&0&\cdots &0\\0&\lambda _{i}&1&\cdots &0\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &\lambda _{i}&1\\0&0&\cdots &0&\lambda _{i}\end{bmatrix}}\in \mathbf {C} ^{m_{i}\times m_{i}},1\leq i\leq s.$
It is easy to see that
$A^{k}=VJ^{k}V^{-1}$
and, since J is block-diagonal,
$J^{k}={\begin{bmatrix}J_{m_{1}}^{k}(\lambda _{1})&0&0&\cdots &0\\0&J_{m_{2}}^{k}(\lambda _{2})&0&\cdots &0\\\vdots &\cdots &\ddots &\cdots &\vdots \\0&\cdots &0&J_{m_{s-1}}^{k}(\lambda _{s-1})&0\\0&\cdots &\cdots &0&J_{m_{s}}^{k}(\lambda _{s})\end{bmatrix}}$
Now, a standard result on the k-power of an $m_{i}\times m_{i}$ Jordan block states that, for $k\geq m_{i}-1$:
$J_{m_{i}}^{k}(\lambda _{i})={\begin{bmatrix}\lambda _{i}^{k}&{k \choose 1}\lambda _{i}^{k-1}&{k \choose 2}\lambda _{i}^{k-2}&\cdots &{k \choose m_{i}-1}\lambda _{i}^{k-m_{i}+1}\\0&\lambda _{i}^{k}&{k \choose 1}\lambda _{i}^{k-1}&\cdots &{k \choose m_{i}-2}\lambda _{i}^{k-m_{i}+2}\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &\lambda _{i}^{k}&{k \choose 1}\lambda _{i}^{k-1}\\0&0&\cdots &0&\lambda _{i}^{k}\end{bmatrix}}$
Thus, if $\rho (A)<1$ then for all i $|\lambda _{i}|<1$. Hence for all i we have:
$\lim _{k\to \infty }J_{m_{i}}^{k}=0$
which implies
$\lim _{k\to \infty }J^{k}=0.$
Therefore,
$\lim _{k\to \infty }A^{k}=\lim _{k\to \infty }VJ^{k}V^{-1}=V\left(\lim _{k\to \infty }J^{k}\right)V^{-1}=0$
On the other hand, if $\rho (A)>1$, at least one diagonal block of $J^{k}$ contains entries that grow without bound as $k$ increases, which proves the second part of the statement.
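A numeric sketch of the theorem (both matrices are arbitrary illustrative examples, one with spectral radius on each side of 1):

```python
import numpy as np

# Powers of a matrix with rho < 1 vanish; powers with rho > 1 blow up.
shrink = np.array([[0.5, 1.0], [0.0, 0.5]])   # Jordan-type block, rho = 0.5
grow   = np.array([[1.1, 0.0], [0.0, 0.2]])   # rho = 1.1

assert np.linalg.norm(np.linalg.matrix_power(shrink, 200)) < 1e-12
assert np.linalg.norm(np.linalg.matrix_power(grow, 200)) > 1e6
```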
Gelfand's formula
Gelfand's formula, named after Israel Gelfand, gives the spectral radius as a limit of matrix norms.
Theorem
For any matrix norm ||⋅||, we have[3]
$\rho (A)=\lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}$.
Moreover, in the case of a consistent matrix norm $\lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}$ approaches $\rho (A)$ from above (indeed, in that case $\rho (A)\leq \left\|A^{k}\right\|^{\frac {1}{k}}$ for all $k$).
Proof
For any ε > 0, let us define the two following matrices:
$A_{\pm }={\frac {1}{\rho (A)\pm \varepsilon }}A.$
Thus,
$\rho \left(A_{\pm }\right)={\frac {\rho (A)}{\rho (A)\pm \varepsilon }},\qquad \rho (A_{+})<1<\rho (A_{-}).$
We start by applying the previous theorem on limits of power sequences to A+:
$\lim _{k\to \infty }A_{+}^{k}=0.$
This shows the existence of N+ ∈ N such that, for all k ≥ N+,
$\left\|A_{+}^{k}\right\|<1.$
Therefore,
$\left\|A^{k}\right\|^{\frac {1}{k}}<\rho (A)+\varepsilon .$
Similarly, the theorem on power sequences implies that $\|A_{-}^{k}\|$ is not bounded and that there exists N− ∈ N such that, for all k ≥ N−,
$\left\|A_{-}^{k}\right\|>1.$
Therefore,
$\left\|A^{k}\right\|^{\frac {1}{k}}>\rho (A)-\varepsilon .$
Let N = max{N+, N−}. Then,
$\forall \varepsilon >0\quad \exists N\in \mathbf {N} \quad \forall k\geq N\quad \rho (A)-\varepsilon <\left\|A^{k}\right\|^{\frac {1}{k}}<\rho (A)+\varepsilon ,$
that is,
$\lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}=\rho (A).$
This concludes the proof.
Corollary
Gelfand's formula yields a bound on the spectral radius of a product of commuting matrices: if $A_{1},\ldots ,A_{n}$ are matrices that all commute, then
$\rho (A_{1}\cdots A_{n})\leq \rho (A_{1})\cdots \rho (A_{n}).$
Numerical example
Consider the matrix
$A={\begin{bmatrix}9&-1&2\\-2&8&4\\1&1&8\end{bmatrix}}$
whose eigenvalues are 5, 10, 10; by definition, ρ(A) = 10. In the following table, the values of $\|A^{k}\|^{\frac {1}{k}}$ for the four most used norms are listed versus several increasing values of k (note that, due to the particular form of this matrix, $\|\cdot \|_{1}=\|\cdot \|_{\infty }$):
k $\|\cdot \|_{1}=\|\cdot \|_{\infty }$ $\|\cdot \|_{F}$ $\|\cdot \|_{2}$
1 14 15.362291496 10.681145748
2 12.649110641 12.328294348 10.595665162
3 11.934831919 11.532450664 10.500980846
4 11.501633169 11.151002986 10.418165779
5 11.216043151 10.921242235 10.351918183
$\vdots $ $\vdots $ $\vdots $ $\vdots $
10 10.604944422 10.455910430 10.183690042
11 10.548677680 10.413702213 10.166990229
12 10.501921835 10.378620930 10.153031596
$\vdots $ $\vdots $ $\vdots $ $\vdots $
20 10.298254399 10.225504447 10.091577411
30 10.197860892 10.149776921 10.060958900
40 10.148031640 10.112123681 10.045684426
50 10.118251035 10.089598820 10.036530875
$\vdots $ $\vdots $ $\vdots $ $\vdots $
100 10.058951752 10.044699508 10.018248786
200 10.029432562 10.022324834 10.009120234
300 10.019612095 10.014877690 10.006079232
400 10.014705469 10.011156194 10.004559078
$\vdots $ $\vdots $ $\vdots $ $\vdots $
1000 10.005879594 10.004460985 10.001823382
2000 10.002939365 10.002230244 10.000911649
3000 10.001959481 10.001486774 10.000607757
$\vdots $ $\vdots $ $\vdots $ $\vdots $
10000 10.000587804 10.000446009 10.000182323
20000 10.000293898 10.000223002 10.000091161
30000 10.000195931 10.000148667 10.000060774
$\vdots $ $\vdots $ $\vdots $ $\vdots $
100000 10.000058779 10.000044600 10.000018232
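Entries of the table can be reproduced with a few lines of NumPy (the loose final check only verifies that the 2-norm roots decrease toward ρ(A) = 10, as the table shows):

```python
import numpy as np

# The example matrix above, with rho(A) = 10 (eigenvalues 5, 10, 10).
A = np.array([[9.0, -1.0, 2.0],
              [-2.0, 8.0, 4.0],
              [1.0, 1.0, 8.0]])
assert np.isclose(max(abs(np.linalg.eigvals(A))), 10.0)

def root_norm(k, ord_):
    # ||A^k||^(1/k) for the requested matrix norm
    return np.linalg.norm(np.linalg.matrix_power(A, k), ord_) ** (1.0 / k)

assert np.isclose(root_norm(1, 1), 14.0)                # k = 1, 1-norm
assert np.isclose(root_norm(1, 'fro'), np.sqrt(236.0))  # 15.3622..., Frobenius
assert 10.0 <= root_norm(50, 2) < root_norm(5, 2)       # decreasing toward 10
```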
Notes and references
1. Gradshteĭn, I. S. (1980). Table of integrals, series, and products. I. M. Ryzhik, Alan Jeffrey (Corr. and enl. ed.). New York: Academic Press. ISBN 0-12-294760-6. OCLC 5892996.
2. Guo, Ji-Ming; Wang, Zhi-Wen; Li, Xin (2019). "Sharp upper bounds of the spectral radius of a graph". Discrete Mathematics. 342 (9): 2559–2563. doi:10.1016/j.disc.2019.05.017. S2CID 198169497.
3. The formula holds for any Banach algebra; see Lemma IX.1.8 in Dunford & Schwartz 1963 and Lax 2002, pp. 195–197
Bibliography
• Dunford, Nelson; Schwartz, Jacob (1963), Linear operators II. Spectral Theory: Self Adjoint Operators in Hilbert Space, Interscience Publishers, Inc.
• Lax, Peter D. (2002), Functional Analysis, Wiley-Interscience, ISBN 0-471-55604-1
See also
• Spectral gap
• The Joint spectral radius is a generalization of the spectral radius to sets of matrices.
• Spectrum of a matrix
• Spectral abscissa
|
Wikipedia
|
Spectral set
In operator theory, a set $X\subseteq \mathbb {C} $ is said to be a spectral set for a (possibly unbounded) linear operator $T$ on a Banach space if the spectrum of $T$ is contained in $X$ and von Neumann's inequality holds for $T$ on $X$, i.e. for all rational functions $r(x)$ with no poles on $X$,
$\left\Vert r(T)\right\Vert \leq \left\Vert r\right\Vert _{X}=\sup \left\{\left\vert r(x)\right\vert :x\in X\right\}$
This concept is closely related to the analytic functional calculus of operators: in general, one wants to control the norm of operators obtained by applying functions to the original operator.
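For a matrix, the inequality can be checked numerically. The following NumPy sketch (the matrix and the rational function are arbitrary illustrations) builds a contraction T, for which the closed unit disk is a spectral set, and compares the operator norm of r(T) with the supremum of |r| on the unit circle, which equals the supremum over the closed disk by the maximum modulus principle:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random contraction T on C^5: scale a random matrix so its operator
# norm is strictly below 1.  The closed unit disk is then a spectral set
# for T by von Neumann's inequality.
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
T = A / (1.05 * np.linalg.norm(A, 2))

coeffs = [1.0, 0.0, -2.0, 1.0]          # r(x) = x^3 - 2x + 1 (no poles at all)

def r_scalar(z):
    return np.polyval(coeffs, z)

def r_matrix(M):
    # Horner evaluation of the same polynomial at the matrix M.
    out = np.zeros_like(M)
    for c in coeffs:
        out = out @ M + c * np.eye(M.shape[0])
    return out

lhs = np.linalg.norm(r_matrix(T), 2)                  # ||r(T)||
theta = np.linspace(0.0, 2.0 * np.pi, 4001)
rhs = np.max(np.abs(r_scalar(np.exp(1j * theta))))    # sup_{|z|=1} |r(z)|

assert lhs <= rhs + 1e-9                              # von Neumann's inequality
```

The same check with ||T|| > 1 can fail, since the disk need not be a spectral set for a non-contraction.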
For a detailed discussion of spectral sets and von Neumann's inequality, see Badea and Beckermann.[1]
1. Badea, Catalin; Beckermann, Bernhard (2013-02-03). "Spectral Sets". arXiv:1302.0546 [math.FA].
Spectral shape analysis
Spectral shape analysis relies on the spectrum (eigenvalues and/or eigenfunctions) of the Laplace–Beltrami operator to compare and analyze geometric shapes. Since the spectrum of the Laplace–Beltrami operator is invariant under isometries, it is well suited for the analysis or retrieval of non-rigid shapes, i.e. bendable objects such as humans, animals, plants, etc.
Laplace
The Laplace–Beltrami operator is involved in many important differential equations, such as the heat equation and the wave equation. It can be defined on a Riemannian manifold as the divergence of the gradient of a real-valued function f:
$\Delta f:=\operatorname {div} \operatorname {grad} f.$
Its spectral components can be computed by solving the Helmholtz equation (or Laplacian eigenvalue problem):
$\Delta \varphi _{i}+\lambda _{i}\varphi _{i}=0.$
The solutions are the eigenfunctions $\varphi _{i}$ (modes) and corresponding eigenvalues $\lambda _{i}$, representing a diverging sequence of positive real numbers. The first eigenvalue is zero for closed domains or when using the Neumann boundary condition. For some shapes, the spectrum can be computed analytically (e.g. rectangle, flat torus, cylinder, disk or sphere). For the sphere, for example, the eigenfunctions are the spherical harmonics.
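As a numerical illustration of the analytically solvable cases, a second-order finite-difference discretization of the Dirichlet Laplacian on the interval (0, π) reproduces the exact eigenvalues n² for n = 1, 2, 3, ... (the grid size below is illustrative):

```python
import numpy as np

# Dirichlet eigenvalues of -d^2/dx^2 on (0, pi) are exactly n^2.
# Discretize with the standard second-difference stencil on m interior points.
m = 500
h = np.pi / (m + 1)
L = (np.diag(2.0 * np.ones(m))
     - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h**2

lam = np.sort(np.linalg.eigvalsh(L))
print(lam[:4])          # approximately 1, 4, 9, 16
```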
The most important properties of the eigenvalues and eigenfunctions are that they are isometry invariants. In other words, if the shape is not stretched (e.g. a sheet of paper bent into the third dimension), the spectral values will not change. Bendable objects, like animals, plants and humans, can move into different body postures with only minimal stretching at the joints. The resulting shapes are called near-isometric and can be compared using spectral shape analysis.
Discretizations
Geometric shapes are often represented as 2D curved surfaces, 2D surface meshes (usually triangle meshes) or 3D solid objects (e.g. using voxels or tetrahedra meshes). The Helmholtz equation can be solved for all these cases. If a boundary exists, e.g. a square, or the volume of any 3D geometric shape, boundary conditions need to be specified.
Several discretizations of the Laplace operator exist (see Discrete Laplace operator) for the different types of geometry representations. Many of these operators do not approximate the underlying continuous operator well.
Spectral shape descriptors
ShapeDNA and its variants
The ShapeDNA is one of the first spectral shape descriptors. It is the normalized beginning sequence of the eigenvalues of the Laplace–Beltrami operator.[1][2] Its main advantages are the simple representation (a vector of numbers) and comparison, scale invariance, and, despite its simplicity, very good performance for shape retrieval of non-rigid shapes.[3] Competitors of ShapeDNA include the singular values of the Geodesic Distance Matrix (SD-GDM)[4] and the Reduced BiHarmonic Distance Matrix (R-BiHDM).[5] However, the eigenvalues are global descriptors, so ShapeDNA and other global spectral descriptors cannot be used for local or partial shape analysis.
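A toy sketch of the idea behind ShapeDNA, using a combinatorial graph Laplacian as a stand-in for a Laplace–Beltrami discretization (the graphs and the truncation length are illustrative): two isometric shapes, here modeled as two relabellings of the same graph, produce identical normalized eigenvalue sequences.

```python
import numpy as np

def graph_laplacian(adj):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def shape_dna(adj, k=10):
    """First k nonzero Laplacian eigenvalues, scale-normalized by the smallest."""
    lam = np.sort(np.linalg.eigvalsh(graph_laplacian(adj)))
    nonzero = lam[lam > 1e-9][:k]
    return nonzero / nonzero[0]

# A 12-cycle and a randomly relabelled copy of it ("isometric shapes").
n = 12
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

perm = np.random.default_rng(1).permutation(n)
adj2 = adj[np.ix_(perm, perm)]

d1, d2 = shape_dna(adj), shape_dna(adj2)
assert np.allclose(d1, d2)      # identical descriptors for isometric shapes
```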
Global point signature (GPS)
The global point signature[6] at a point $x$ is a vector of scaled eigenfunctions of the Laplace–Beltrami operator computed at $x$ (i.e. the spectral embedding of the shape). The GPS is a global feature in the sense that it cannot be used for partial shape matching.
Heat kernel signature (HKS)
The heat kernel signature[7] makes use of the eigen-decomposition of the heat kernel:
$h_{t}(x,y)=\sum _{i=0}^{\infty }\exp(-\lambda _{i}t)\varphi _{i}(x)\varphi _{i}(y).$
For each point on the surface the diagonal of the heat kernel $h_{t}(x,x)$ is sampled at specific time values $t_{j}$ and yields a local signature that can also be used for partial matching or symmetry detection.
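A minimal sketch of the HKS computation from the eigendecomposition, again with a graph Laplacian standing in for the Laplace–Beltrami operator of a mesh (the graph and the time samples are illustrative):

```python
import numpy as np

# h_t(x, x) = sum_i exp(-lam_i t) phi_i(x)^2, from the eigendecomposition
# of a graph Laplacian (stand-in for a mesh Laplace-Beltrami operator).
n = 12
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
L = np.diag(adj.sum(axis=1)) - adj

lam, phi = np.linalg.eigh(L)
ts = np.array([0.1, 1.0, 10.0])         # sampled time values t_j

# hks[x, j] = h_{t_j}(x, x): one local signature vector per vertex.
hks = (phi**2) @ np.exp(-np.outer(lam, ts))

# The cycle graph is vertex-transitive, so every vertex gets the same signature.
assert np.allclose(hks, hks[0])
```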
Wave kernel signature (WKS)
The WKS[8] follows a similar idea to the HKS, replacing the heat equation with the Schrödinger wave equation.
Improved wave kernel signature (IWKS)
The IWKS[9] improves the WKS for non-rigid shape retrieval by introducing a new scaling function to the eigenvalues and aggregating a new curvature term.
Spectral graph wavelet signature (SGWS)
SGWS is a local descriptor that is not only isometric invariant, but also compact, easy to compute and combines the advantages of both band-pass and low-pass filters. An important facet of SGWS is the ability to combine the advantages of WKS and HKS into a single signature, while allowing a multiresolution representation of shapes.[10]
Spectral Matching
The spectral decomposition of the graph Laplacian associated with complex shapes (see Discrete Laplace operator) provides eigenfunctions (modes) which are invariant to isometries. Each vertex on the shape can be uniquely represented by a combination of the eigenmodal values at that point, sometimes called spectral coordinates:
$s(x)=(\varphi _{1}(x),\varphi _{2}(x),\ldots ,\varphi _{N}(x)){\text{ for vertex }}x.$
Spectral matching consists of establishing point correspondences by pairing vertices on different shapes that have the most similar spectral coordinates. Early work[11][12][13] focused on sparse correspondences for stereoscopy. Computational efficiency now enables dense correspondences on full meshes, for instance between cortical surfaces.[14] Spectral matching can also be used for complex non-rigid image registration, which is notably difficult when images have very large deformations.[15] Such image registration methods based on spectral eigenmodal values capture global shape characteristics, in contrast with conventional non-rigid image registration methods, which are often based on local shape characteristics (e.g., image gradients).
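The following sketch illustrates spectral matching on a toy example: a weighted path graph with distinct edge weights (so the Laplacian spectrum is simple and the graph has no symmetries) and a relabelled copy of it. The eigenvector sign ambiguity is resolved by trying all sign flips; all graphs and parameters are illustrative.

```python
import numpy as np
from itertools import product

def spectral_coords(w_adj, k=3):
    """First k nontrivial Laplacian eigenvectors as per-vertex coordinates."""
    L = np.diag(w_adj.sum(axis=1)) - w_adj
    lam, phi = np.linalg.eigh(L)
    return phi[:, 1:k + 1]              # skip the constant eigenvector

def match(s1, s2):
    """Pair each vertex of shape 1 with the nearest vertex of shape 2 in
    spectral coordinates, over all eigenvector sign combinations."""
    best, best_cost = None, np.inf
    for signs in product([-1.0, 1.0], repeat=s2.shape[1]):
        t2 = s2 * np.array(signs)
        d = np.linalg.norm(s1[:, None, :] - t2[None, :, :], axis=2)
        cost = d.min(axis=1).sum()
        if cost < best_cost:
            best_cost, best = cost, d.argmin(axis=1)
    return best

# Weighted path graph (distinct weights: simple spectrum, no automorphisms).
n = 8
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0 + 0.1 * i
perm = np.random.default_rng(2).permutation(n)
adj2 = adj[np.ix_(perm, perm)]          # same graph, vertices relabelled

corr = match(spectral_coords(adj), spectral_coords(adj2))
assert np.array_equal(corr, np.argsort(perm))   # exact recovery of the relabelling
```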
References
1. Reuter, M.; Wolter, F.-E.; Peinecke, N. (2005). "Laplace-Spectra as Fingerprints for Shape Matching". Proceedings of the 2005 ACM Symposium on Solid and Physical Modeling. pp. 101–106. doi:10.1145/1060244.1060256.
2. Reuter, M.; Wolter, F.-E.; Peinecke, N. (2006). "Laplace–Beltrami spectra as Shape-DNA of surfaces and solids". Computer-Aided Design. 38 (4): 342–366. doi:10.1016/j.cad.2005.10.011. S2CID 7566792.
3. Lian, Z.; et al. (2011). "SHREC'11 track: shape retrieval on non-rigid 3D watertight meshes". Proceedings of the Eurographics 2011 Workshop on 3D Object Retrieval (3DOR'11). pp. 79–88. doi:10.2312/3DOR/3DOR11/079-088.
4. Smeets, Dirk; Fabry, Thomas; Hermans, Jeroen; Vandermeulen, Dirk; Suetens, Paul (2009). "Isometric deformation modelling for object recognition". Computer Analysis of Images and Patterns. Lecture Notes in Computer Science. Vol. 5702. pp. 757–765. Bibcode:2009LNCS.5702..757S. doi:10.1007/978-3-642-03767-2_92. ISBN 978-3-642-03766-5.
5. Ye, J.; Yu, Y. (2015). "A fast modal space transform for robust nonrigid shape retrieval". The Visual Computer. 32 (5): 553–568. doi:10.1007/s00371-015-1071-5. hdl:10722/215522. S2CID 16707677.
6. Rustamov, R.M. (July 4, 2007). "Laplace–Beltrami eigenfunctions for deformation invariant shape representation". Proceedings of the fifth Eurographics symposium on Geometry processing. Eurographics Association. pp. 225–233. ISBN 978-3-905673-46-3.
7. Sun, J.; Ovsjanikov, M.; Guibas, L. (2009). "A Concise and Provably Informative Multi-Scale Signature-Based on Heat Diffusion". Computer Graphics Forum. Vol. 28. pp. 1383–92. doi:10.1111/j.1467-8659.2009.01515.x.
8. Aubry, M.; Schlickewei, U.; Cremers, D. (2011). "The wave kernel signature: A quantum mechanical approach to shape analysis". Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on. pp. 1626–1633. doi:10.1109/ICCVW.2011.6130444.
9. Limberger, F. A. & Wilson, R. C. (2015). "Feature Encoding of Spectral Signatures for 3D Non-Rigid Shape Retrieval". Proceedings of the British Machine Vision Conference (BMVC). pp. 56.1–56.13. doi:10.5244/C.29.56. ISBN 978-1-901725-53-7.
10. Masoumi, Majid; Li, Chunyuan; Ben Hamza, A (2016). "A spectral graph wavelet approach for nonrigid 3D shape retrieval". Pattern Recognition Letters. 83: 339–48. Bibcode:2016PaReL..83..339M. doi:10.1016/j.patrec.2016.04.009.
11. Umeyama, S (1988). "An eigendecomposition approach to weighted graph matching problems". IEEE Transactions on Pattern Analysis and Machine Intelligence. 10 (5): 695–703. doi:10.1109/34.6778.
12. Scott, GL; Longuet-Higgins, HC (1991). "An algorithm for associating the features of two images". Proceedings of the Royal Society of London. Series B: Biological Sciences. 244 (1309): 21–26. Bibcode:1991RSPSB.244...21S. doi:10.1098/rspb.1991.0045. PMID 1677192. S2CID 13011932.
13. Shapiro, LS; Brady, JM (1992). "Feature-based correspondence: an eigenvector approach". Image and Vision Computing. 10 (5): 283–8. doi:10.1016/0262-8856(92)90043-3.
14. Lombaert, H; Grady, L; Polimeni, JR; Cheriet, F (2013). "FOCUSR: Feature Oriented Correspondence using Spectral Regularization - A Method for Precise Surface Matching". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (9): 2143–2160. doi:10.1109/tpami.2012.276. PMC 3707975. PMID 23868776.
15. Lombaert, H; Grady, L; Pennec, X; Ayache, N; Cheriet, F (2014). "Spectral Log-Demons - Diffeomorphic Image Registration with Very Large Deformations". International Journal of Computer Vision. 107 (3): 254–271. CiteSeerX 10.1.1.649.9395. doi:10.1007/s11263-013-0681-5. S2CID 3347129.
Spectral signal-to-noise ratio
In scientific imaging, the two-dimensional spectral signal-to-noise ratio (SSNR) is a signal-to-noise ratio measure that evaluates the normalised cross-correlation coefficient between several two-dimensional images over corresponding rings in Fourier space, as a function of spatial frequency (Unser, Trus & Steven 1987). It is a multi-particle extension of the Fourier ring correlation (FRC), which is related to the Fourier shell correlation. The SSNR is a popular method for finding the resolution of a class average in cryo-electron microscopy.
Calculation
$SSNR(r)={\frac {\sum _{r_{i}\in R}\left|\sum _{k}F_{r_{i},k}\right|^{2}}{{\frac {K}{K-1}}\sum _{r_{i}\in R}\sum _{k}\left|F_{r_{i},k}-{\bar {F}}_{r_{i}}\right|^{2}}}-1$
where $F_{r_{i},k}$ is the complex structure factor of image $k$ at pixel $r_{i}$, the sum over $r_{i}\in R$ runs over the pixels of the Fourier ring at radius $r$, $K$ is the number of images, and ${\bar {F}}_{r_{i}}$ is the mean of $F_{r_{i},k}$ over the $K$ images. It is possible to convert the SSNR into an equivalent FRC using the following formula:
$FRC={\frac {SSNR}{SSNR+1}}$
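A sketch of the computation on synthetic data (NumPy; the test image, noise level, and ring binning are illustrative):

```python
import numpy as np

def ssnr(stack):
    """Spectral SNR of a stack of K aligned images, per Fourier-ring radius."""
    K, n, _ = stack.shape
    F = np.fft.fftshift(np.fft.fft2(stack), axes=(1, 2))
    c = n // 2
    yy, xx = np.mgrid[0:n, 0:n]
    radius = np.round(np.hypot(yy - c, xx - c)).astype(int)

    Fbar = F.mean(axis=0)
    signal = np.abs(F.sum(axis=0))**2               # |sum_k F_{r_i,k}|^2
    noise = (np.abs(F - Fbar)**2).sum(axis=0)       # sum_k |F - Fbar|^2

    out = []
    for r in range(1, c):
        ring = radius == r
        out.append(signal[ring].sum()
                   / (K / (K - 1) * noise[ring].sum()) - 1.0)
    return np.array(out)

# K noisy copies of one image: SSNR is high where the signal dominates.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
stack = img + 0.1 * rng.standard_normal((16, 32, 32))

s = ssnr(stack)
frc = s / (s + 1.0)         # equivalent Fourier ring correlation
assert s[0] > 10            # strong signal in the lowest-frequency ring
```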
See also
• Resolution (electron density)
• Fourier shell correlation
References
• Unser, M.; Trus, B.L.; Steven, A.C. (1987). "A New Resolution Criterion Based on Spectral Signal-To-Noise Ratios". Ultramicroscopy. 23 (1): 39–51. doi:10.1016/0304-3991(87)90225-7. PMID 3660491.
• Harauz, G.; M. van Heel (1986). "Exact filters for general geometry three dimensional reconstruction". Optik. 73: 146–156.
• van Heel, M. (1982). "Detection of objects in quantum-noise limited images". Ultramicroscopy. 8 (4): 331–342. doi:10.1016/0304-3991(82)90258-3.
• Saxton, W.O.; W. Baumeister (1982). "The correlation averaging of a regularly arranged bacterial cell envelope protein". Journal of Microscopy. 127 (2): 127–138. doi:10.1111/j.1365-2818.1982.tb00405.x. PMID 7120365. S2CID 27206060.
• Böttcher, B.; Wynne, S.A.; Crowther, R.A. (1997). "Determination of the fold of the core protein of hepatitis B virus by electron microscopy". Nature. 386 (6620): 88–91. doi:10.1038/386088a0. PMID 9052786. S2CID 275192.
• Frank, Joachim (2006). Three-Dimensional Electron Microscopy of Macromolecular Assemblies. New York: Oxford University Press. ISBN 0-19-518218-9.
• van Heel, M.; Schatz, M. (2005). "Fourier shell correlation threshold criteria". Journal of Structural Biology. 151 (3): 250–262. doi:10.1016/j.jsb.2005.05.009. PMID 16125414.
Spectral space
In mathematics, a spectral space is a topological space that is homeomorphic to the spectrum of a commutative ring. It is sometimes also called a coherent space because of the connection to coherent toposes.
Definition
Let X be a topological space and let K$\circ $(X) be the set of all compact open subsets of X. Then X is said to be spectral if it satisfies all of the following conditions:
• X is compact and T0.
• K$\circ $(X) is a basis of open subsets of X.
• K$\circ $(X) is closed under finite intersections.
• X is sober, i.e., every nonempty irreducible closed subset of X has a (necessarily unique) generic point.
Equivalent descriptions
Let X be a topological space. Each of the following properties is equivalent to the property of X being spectral:
1. X is homeomorphic to a projective limit of finite T0-spaces.
2. X is homeomorphic to the spectrum of a bounded distributive lattice L. In this case, L is isomorphic (as a bounded lattice) to the lattice K$\circ $(X) (this is called Stone representation of distributive lattices).
3. X is homeomorphic to the spectrum of a commutative ring.
4. X is the topological space determined by a Priestley space.
5. X is a T0 space whose frame of open sets is coherent (and every coherent frame comes from a unique spectral space in this way).
Properties
Let X be a spectral space and let K$\circ $(X) be as before. Then:
• K$\circ $(X) is a bounded sublattice of subsets of X.
• Every closed subspace of X is spectral.
• An arbitrary intersection of compact and open subsets of X (hence of elements from K$\circ $(X)) is again spectral.
• X is T0 by definition, but in general not T1.[1] In fact a spectral space is T1 if and only if it is Hausdorff (or T2) if and only if it is a boolean space if and only if K$\circ $(X) is a boolean algebra.
• X can be seen as a pairwise Stone space.[2]
Spectral maps
A spectral map f: X → Y between spectral spaces X and Y is a continuous map such that the preimage of every open and compact subset of Y under f is again compact.
The category of spectral spaces, which has spectral maps as morphisms, is dually equivalent to the category of bounded distributive lattices (together with morphisms of such lattices).[3] In this anti-equivalence, a spectral space X corresponds to the lattice K$\circ $(X).
Citations
1. A.V. Arkhangel'skii, L.S. Pontryagin (Eds.) General Topology I (1990) Springer-Verlag ISBN 3-540-18178-4 (See example 21, section 2.6.)
2. G. Bezhanishvili, N. Bezhanishvili, D. Gabelaia, A. Kurz, (2010). "Bitopological duality for distributive lattices and Heyting algebras." Mathematical Structures in Computer Science, 20.
3. Johnstone 1982.
References
• M. Hochster (1969). Prime ideal structure in commutative rings. Trans. Amer. Math. Soc., 142 43—60
• Johnstone, Peter (1982), "II.3 Coherent locales", Stone Spaces, Cambridge University Press, pp. 62–69, ISBN 978-0-521-33779-3.
• Dickmann, Max; Schwartz, Niels; Tressl, Marcus (2019). Spectral Spaces. New Mathematical Monographs. Vol. 35. Cambridge: Cambridge University Press. doi:10.1017/9781316543870. ISBN 9781107146723.
Spectral theory of compact operators
In functional analysis, compact operators are linear operators on Banach spaces that map bounded sets to relatively compact sets. In the case of a Hilbert space H, the compact operators are the closure of the finite rank operators in the uniform operator topology. In general, operators on infinite-dimensional spaces feature properties that do not appear in the finite-dimensional case, i.e. for matrices. The compact operators are notable in that they share as much similarity with matrices as one can expect from a general operator. In particular, the spectral properties of compact operators resemble those of square matrices.
This article first summarizes the corresponding results from the matrix case before discussing the spectral properties of compact operators. The reader will see that most statements transfer verbatim from the matrix case.
The spectral theory of compact operators was first developed by F. Riesz.
Spectral theory of matrices
Further information: Jordan canonical form
The classical result for square matrices is the Jordan canonical form, which states the following:
Theorem. Let A be an n × n complex matrix, i.e. A a linear operator acting on Cn. If λ1...λk are the distinct eigenvalues of A, then Cn can be decomposed into the invariant subspaces of A
$\mathbf {C} ^{n}=\bigoplus _{i=1}^{k}Y_{i}.$
The subspace Yi = Ker(λi − A)m, where m is the smallest integer such that Ker(λi − A)m = Ker(λi − A)m+1. Furthermore, the poles of the resolvent function ζ → (ζ − A)−1 coincide with the set of eigenvalues of A.
Compact operators
Statement
Theorem — Let X be a Banach space, C be a compact operator acting on X, and σ(C) be the spectrum of C.
1. Every nonzero λ ∈ σ(C) is an eigenvalue of C.
2. For all nonzero λ ∈ σ(C), there exists m such that Ker((λ − C)m) = Ker((λ − C)m+1), and this subspace is finite-dimensional.
3. The eigenvalues can only accumulate at 0. If the dimension of X is not finite, then σ(C) must contain 0.
4. σ(C) is at most countably infinite.
5. Every nonzero λ ∈ σ(C) is a pole of the resolvent function ζ → (ζ − C)−1.
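A numerical illustration of items 1 to 4: discretizing the compact integral operator on L²(0, 1) with kernel min(x, y), whose eigenvalues are known to be 4/((2m − 1)²π²) (it is the Green's function of −f″ with f(0) = 0, f′(1) = 0), shows eigenvalues accumulating only at 0. The grid size is illustrative.

```python
import numpy as np

# Nystrom (midpoint-rule) discretization of (Cf)(x) = int_0^1 min(x,y) f(y) dy.
n = 400
x = (np.arange(n) + 0.5) / n
C = np.minimum.outer(x, x) / n          # symmetric, so eigvalsh applies

lam = np.sort(np.linalg.eigvalsh(C))[::-1]

# Exact eigenvalues of the continuous operator: 4 / ((2m-1)^2 pi^2).
exact = 4.0 / ((2 * np.arange(1, 6) - 1)**2 * np.pi**2)
assert np.allclose(lam[:5], exact, rtol=1e-2)   # top of the spectrum matches
assert 0 < lam[-1] < 1e-4                       # eigenvalues accumulate at 0
```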
Proof
Preliminary Lemmas
The theorem claims several properties of the operator λ − C where λ ≠ 0. Without loss of generality, it can be assumed that λ = 1. Therefore we consider I − C, I being the identity operator. The proof will require two lemmas.
Lemma 1 (Riesz's lemma) — Let X be a Banach space and Y ⊂ X, Y ≠ X, be a closed subspace. For all ε > 0, there exists x ∈ X such that $\|$x$\|$ = 1 and
$1-\varepsilon \leq d(x,Y)\leq 1$
where d(x, Y) is the distance from x to Y.
This fact will be used repeatedly in the argument leading to the theorem. Notice that when X is a Hilbert space, the lemma is trivial.
Lemma 2 — If C is compact, then Ran(I − C) is closed.
Proof
Let (I − C)xn → y in norm. If {xn} is bounded, then compactness of C implies that there exists a subsequence xnk such that C xnk is norm convergent. So xnk = (I − C)xnk + C xnk is norm convergent, to some x. This gives (I − C)xnk → (I − C)x = y. The same argument goes through if the distances d(xn, Ker(I − C)) are bounded.
But d(xn, Ker(I − C)) must be bounded. Suppose this is not the case. Pass now to the quotient map of (I − C), still denoted by (I − C), on X/Ker(I − C). The quotient norm on X/Ker(I − C) is still denoted by $\|$·$\|$, and {xn} are now viewed as representatives of their equivalence classes in the quotient space. Take a subsequence {xnk} such that $\|$xnk$\|$ > k and define a sequence of unit vectors by znk = xnk/$\|$xnk$\|$. Again we would have (I − C)znk → (I − C)z for some z. Since $\|$(I − C)znk$\|$ = $\|$(I − C)xnk$\|$/ $\|$xnk$\|$ → 0, we have (I − C)z = 0 i.e. z ∈ Ker(I − C). Since we passed to the quotient map, z = 0. This is impossible because z is the norm limit of a sequence of unit vectors. Thus the lemma is proven.
Concluding the Proof
Proof
i) Without loss of generality, assume λ = 1. λ ∈ σ(C) not being an eigenvalue means (I − C) is injective but not surjective. By Lemma 2, Y1 = Ran(I − C) is a closed proper subspace of X. Since (I − C) is injective, Y2 = (I − C)Y1 is again a closed proper subspace of Y1. Define Yn = Ran(I − C)n. Consider the decreasing sequence of subspaces
$Y_{1}\supset \cdots \supset Y_{n}\supset \cdots \supset Y_{m}\supset \cdots $
where all inclusions are proper. By lemma 1, we can choose unit vectors yn ∈ Yn such that d(yn, Yn+1) > ½. Compactness of C means {C yn} must contain a norm convergent subsequence. But for n < m
$\left\|Cy_{n}-Cy_{m}\right\|=\left\|(C-I)y_{n}+y_{n}-(C-I)y_{m}-y_{m}\right\|$
and notice that
$(C-I)y_{n}-(C-I)y_{m}-y_{m}\in Y_{n+1},$
which implies $\|$Cyn − Cym$\|$ > ½. This is a contradiction, and so λ must be an eigenvalue.
ii) The sequence { Yn = Ker(I − C)n} is an increasing sequence of closed subspaces (recall that, by the normalization above, λ = 1). The theorem claims it stops. Suppose it does not stop, i.e. the inclusion Ker(I − C)n ⊂ Ker(I − C)n+1 is proper for all n. By lemma 1, there exists a sequence {yn}n ≥ 2 of unit vectors such that yn ∈ Yn and d(yn, Yn − 1) > ½. As before, compactness of C means {C yn} must contain a norm convergent subsequence. But for n < m
$\|Cy_{n}-Cy_{m}\|=\|(C-I)y_{n}+y_{n}-(C-I)y_{m}-y_{m}\|$
and notice that
$(C-I)y_{n}+y_{n}-(C-I)y_{m}\in Y_{m-1},$
which implies $\|$Cyn − Cym$\|$ > ½. This is a contradiction, and so the sequence { Yn = Ker(I − C)n} must terminate at some finite m.
Since C is compact and acts as the identity on Ker(I − C), the closed unit ball of Ker(I − C) is compact, so that Ker(I − C) is finite-dimensional. Ker(I − C)n is finite-dimensional for the same reason.
iii) Suppose there exist infinitely many distinct eigenvalues {λn}, with corresponding eigenvectors {xn}, such that $|$λn$|$ > ε for all n. Define Yn = span{x1...xn}. The sequence {Yn} is a strictly increasing sequence. Choose unit vectors such that yn ∈ Yn and d(yn, Yn − 1) > ½. Then for n < m
$\left\|Cy_{n}-Cy_{m}\right\|=\left\|(C-\lambda _{n})y_{n}+\lambda _{n}y_{n}-(C-\lambda _{m})y_{m}-\lambda _{m}y_{m}\right\|.$
But
$(C-\lambda _{n})y_{n}+\lambda _{n}y_{n}-(C-\lambda _{m})y_{m}\in Y_{m-1},$
therefore $\|$Cyn − Cym$\|$ > ε/2, a contradiction.
So there are only finitely many distinct eigenvalues outside any ball centered at zero. This immediately gives that zero is the only possible limit point of the eigenvalues and that there are at most countably many distinct eigenvalues (see iv).
iv) This is an immediate consequence of iii). The set of eigenvalues {λ} is the union
$\bigcup _{n}\left\{\lambda \in \sigma (C):|\lambda |>{\tfrac {1}{n}}\right\}=\bigcup _{n}S_{n}.$
Because σ(C) is a bounded set and the eigenvalues can only accumulate at 0, each Sn is finite, which gives the desired result.
v) As in the matrix case, this is a direct application of the holomorphic functional calculus.
Invariant subspaces
As in the matrix case, the above spectral properties lead to a decomposition of X into invariant subspaces of a compact operator C. Let λ ≠ 0 be an eigenvalue of C; so λ is an isolated point of σ(C). Using the holomorphic functional calculus, define the Riesz projection E(λ) by
$E(\lambda )={1 \over 2\pi i}\int _{\gamma }(\xi -C)^{-1}d\xi $
where γ is a Jordan contour that encloses only λ from σ(C). Let Y be the subspace Y = E(λ)X. C restricted to Y is a compact invertible operator with spectrum {λ}, therefore Y is finite-dimensional. Let ν be such that Ker(λ − C)ν = Ker(λ − C)ν + 1. By inspecting the Jordan form, we see that (λ − C)ν = 0 while (λ − C)ν − 1 ≠ 0. The Laurent series of the resolvent mapping centered at λ shows that
$E(\lambda )(\lambda -C)^{\nu }=(\lambda -C)^{\nu }E(\lambda )=0.$
So Y = Ker(λ − C)ν.
The E(λ) satisfy E(λ)2 = E(λ), so that they are indeed projection operators or spectral projections. By definition they commute with C. Moreover E(λ)E(μ) = 0 if λ ≠ μ.
• Let X(λ) = E(λ)X if λ is a non-zero eigenvalue. Thus X(λ) is a finite-dimensional invariant subspace, the generalised eigenspace of λ.
• Let X(0) be the intersection of the kernels of the E(λ). Thus X(0) is a closed subspace invariant under C and the restriction of C to X(0) is a compact operator with spectrum {0}.
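The Riesz projection can be evaluated numerically for a matrix by applying the trapezoid rule to the contour integral (the matrix, contour, and number of quadrature nodes below are illustrative); the result satisfies E(λ)² = E(λ) and (λ − C)νE(λ) = 0.

```python
import numpy as np

C = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])      # eigenvalues: 2 (a 2x2 Jordan block) and 5
I3 = np.eye(3)

# Trapezoid rule on a circle of radius 1 around lambda = 2 (excludes 5).
lam0, radius, m = 2.0, 1.0, 256
E = np.zeros((3, 3), dtype=complex)
for t in 2.0 * np.pi * np.arange(m) / m:
    xi = lam0 + radius * np.exp(1j * t)
    dxi = 1j * radius * np.exp(1j * t) * (2.0 * np.pi / m)
    E += np.linalg.inv(xi * I3 - C) * dxi / (2j * np.pi)

N = lam0 * I3 - C
assert np.allclose(E @ E, E, atol=1e-8)           # E is a projection
assert np.allclose(N @ N @ E, 0, atol=1e-8)       # (lambda - C)^2 E = 0, nu = 2
```

Here E recovers the projection onto the generalised eigenspace of λ = 2 (the first two coordinates), annihilating the eigenspace of 5.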
Operators with compact power
If B is an operator on a Banach space X such that Bn is compact for some n, then the theorem proven above also holds for B.
See also
• Spectral theorem
• Spectral theory of normal C*-algebras
References
• John B. Conway, A course in functional analysis, Graduate Texts in Mathematics 96, Springer 1990. ISBN 0-387-97245-5
Spectral theory
In mathematics, spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces.[1] It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations.[2] The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter.[3]
Mathematical background
The name spectral theory was introduced by David Hilbert in his original formulation of Hilbert space theory, which was cast in terms of quadratic forms in infinitely many variables. The original spectral theorem was therefore conceived as a version of the theorem on principal axes of an ellipsoid, in an infinite-dimensional setting. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous. Hilbert himself was surprised by the unexpected application of this theory, noting that "I developed my theory of infinitely many variables from purely mathematical interests, and even called it 'spectral analysis' without any presentiment that it would later find application to the actual spectrum of physics."[4]
There have been three main ways to formulate spectral theory, each of which find use in different domains. After Hilbert's initial formulation, the later development of abstract Hilbert spaces and the spectral theory of single normal operators on them were well suited to the requirements of physics, exemplified by the work of von Neumann.[5] The further theory built on this to address Banach algebras in general. This development leads to the Gelfand representation, which covers the commutative case, and further into non-commutative harmonic analysis.
The difference can be seen in making the connection with Fourier analysis. The Fourier transform on the real line is in one sense the spectral theory of differentiation as a differential operator. But for that to cover the phenomena one has already to deal with generalized eigenfunctions (for example, by means of a rigged Hilbert space). On the other hand it is simple to construct a group algebra, the spectrum of which captures the Fourier transform's basic properties, and this is carried out by means of Pontryagin duality.
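The finite-dimensional model of this statement: the discrete Fourier transform diagonalizes the cyclic shift operator (the generator of translations on Z/nZ), with the n-th roots of unity as its spectrum. A short NumPy check (the dimension is illustrative):

```python
import numpy as np

n = 8
S = np.roll(np.eye(n), 1, axis=0)       # cyclic shift: (Sx)_k = x_{k-1 mod n}
F = np.fft.fft(np.eye(n))               # DFT matrix, F[k, j] = exp(-2*pi*i*jk/n)

# Conjugating the shift by the DFT produces a diagonal matrix whose
# diagonal entries are the n-th roots of unity.
D = F @ S @ np.linalg.inv(F)
assert np.allclose(D - np.diag(np.diag(D)), 0, atol=1e-10)
assert np.allclose(np.diag(D), np.exp(-2j * np.pi * np.arange(n) / n), atol=1e-10)
```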
One can also study the spectral properties of operators on Banach spaces. For example, compact operators on Banach spaces have many spectral properties similar to that of matrices.
Physical background
The background in the physics of vibrations has been explained in this way:[6]
Spectral theory is connected with the investigation of localized vibrations of a variety of different objects, from atoms and molecules in chemistry to obstacles in acoustic waveguides. These vibrations have frequencies, and the issue is to decide when such localized vibrations occur, and how to go about computing the frequencies. This is a very complicated problem since every object has not only a fundamental tone but also a complicated series of overtones, which vary radically from one body to another.
Such physical ideas have nothing to do with the mathematical theory on a technical level, but there are examples of indirect involvement (see for example Mark Kac's question Can you hear the shape of a drum?). Hilbert's adoption of the term "spectrum" has been attributed to an 1897 paper of Wilhelm Wirtinger on Hill differential equation (by Jean Dieudonné), and it was taken up by his students during the first decade of the twentieth century, among them Erhard Schmidt and Hermann Weyl. The conceptual basis for Hilbert space was developed from Hilbert's ideas by Erhard Schmidt and Frigyes Riesz.[7][8] It was almost twenty years later, when quantum mechanics was formulated in terms of the Schrödinger equation, that the connection was made to atomic spectra; a connection with the mathematical physics of vibration had been suspected before, as remarked by Henri Poincaré, but rejected for simple quantitative reasons, absent an explanation of the Balmer series.[9] The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous, rather than being an object of Hilbert's spectral theory.
A definition of spectrum
Main article: Spectrum (functional analysis)
Consider a bounded linear transformation T defined everywhere over a general Banach space. We form the transformation:
$R_{\zeta }=\left(\zeta I-T\right)^{-1}.$
Here I is the identity operator and ζ is a complex number. The inverse of an operator T, that is T−1, is defined by:
$TT^{-1}=T^{-1}T=I.$
If the inverse exists, T is called regular. If it does not exist, T is called singular.
With these definitions, the resolvent set of T is the set of all complex numbers ζ such that Rζ exists and is bounded. This set often is denoted as ρ(T). The spectrum of T is the set of all complex numbers ζ such that Rζ fails to exist or is unbounded. Often the spectrum of T is denoted by σ(T). The function Rζ for all ζ in ρ(T) (that is, wherever Rζ exists as a bounded operator) is called the resolvent of T. The spectrum of T is therefore the complement of the resolvent set of T in the complex plane.[10] Every eigenvalue of T belongs to σ(T), but σ(T) may contain non-eigenvalues.[11]
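The definitions above can be illustrated in a finite-dimensional sketch (a hypothetical choice of operator, not taken from the text): for a 2×2 matrix T, the resolvent fails to exist exactly where det(ζI − T) = 0, so the spectrum is the set of roots of the characteristic polynomial.

```python
# Finite-dimensional illustration (assumption of this sketch: T acts on C^2,
# so the spectrum is just the set of eigenvalues, i.e. roots of det(zI - T) = 0).
import cmath

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def spectrum2(t):
    """Roots of the characteristic polynomial z^2 - tr(T) z + det(T)."""
    tr = t[0][0] + t[1][1]
    disc = cmath.sqrt(tr * tr - 4 * det2(t))
    return sorted([(tr - disc) / 2, (tr + disc) / 2],
                  key=lambda z: (z.real, z.imag))

def shifted(z, t):
    """The operator zI - T as a 2x2 matrix."""
    return [[z - t[0][0], -t[0][1]], [-t[1][0], z - t[1][1]]]

T = [[2, 1], [1, 2]]           # symmetric; eigenvalues 1 and 3
sigma = spectrum2(T)           # the spectrum of T

# R_z exists iff det(zI - T) != 0: false on the spectrum, true off it.
assert all(abs(det2(shifted(z, T))) < 1e-9 for z in sigma)
assert abs(det2(shifted(5.0, T))) > 1e-9   # z = 5 lies in the resolvent set
```

In infinite dimensions the spectrum is richer (the resolvent can exist but be unbounded), which the matrix case cannot show.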
This definition applies to a Banach space, but of course other types of space exist as well; for example, topological vector spaces include Banach spaces, but can be more general.[12][13] On the other hand, Banach spaces include Hilbert spaces, and it is these spaces that find the greatest application and the richest theoretical results.[14] With suitable restrictions, much can be said about the structure of the spectra of transformations in a Hilbert space. In particular, for self-adjoint operators, the spectrum lies on the real line and (in general) is a spectral combination of a point spectrum of discrete eigenvalues and a continuous spectrum.[15]
Spectral theory briefly
Main article: Spectral theorem
See also: Eigenvalue, eigenvector and eigenspace
In functional analysis and linear algebra the spectral theorem establishes conditions under which an operator can be expressed in simple form as a sum of simpler operators. As a full rigorous presentation is not appropriate for this article, we take an informal approach that sacrifices much of the rigor of a formal treatment, with the aim of being more comprehensible to a non-specialist.
This topic is easiest to describe by introducing the bra–ket notation of Dirac for operators.[16][17] As an example, a very particular linear operator L might be written as a dyadic product:[18][19]
$L=|k_{1}\rangle \langle b_{1}|,$
in terms of the "bra" ⟨b1| and the "ket" |k1⟩. A function f is described by a ket as |f ⟩. The function f(x) defined on the coordinates $(x_{1},x_{2},x_{3},\dots )$ is denoted as
$f(x)=\langle x,f\rangle $
and the magnitude of f by
$\|f\|^{2}=\langle f,f\rangle =\int \langle f,x\rangle \langle x,f\rangle \,dx=\int f^{*}(x)f(x)\,dx$
where the notation (*) denotes a complex conjugate. This inner product choice defines a very specific inner product space, restricting the generality of the arguments that follow.[14]
The effect of L upon a function f is then described as:
$L|f\rangle =|k_{1}\rangle \langle b_{1}|f\rangle $
expressing the result that the effect of L on f is to produce a new function $|k_{1}\rangle $ multiplied by the inner product represented by $\langle b_{1}|f\rangle $.
A more general linear operator L might be expressed as:
$L=\lambda _{1}|e_{1}\rangle \langle f_{1}|+\lambda _{2}|e_{2}\rangle \langle f_{2}|+\lambda _{3}|e_{3}\rangle \langle f_{3}|+\dots ,$
where the $\{\,\lambda _{i}\,\}$ are scalars and the $\{\,|e_{i}\rangle \,\}$ are a basis and the $\{\,\langle f_{i}|\,\}$ a reciprocal basis for the space. The relation between the basis and the reciprocal basis is described, in part, by:
$\langle f_{i}|e_{j}\rangle =\delta _{ij}$
If such a formalism applies, the $\{\,\lambda _{i}\,\}$ are eigenvalues of L and the functions $\{\,|e_{i}\rangle \,\}$ are eigenfunctions of L. The eigenvalues are in the spectrum of L.[20]
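As a concrete finite-dimensional sketch (the basis, reciprocal basis, and eigenvalues below are hypothetical choices for illustration), one can assemble such an operator from dyads and check that each $|e_{i}\rangle$ is indeed an eigenvector with eigenvalue $\lambda_{i}$:

```python
# Sketch: a non-orthogonal basis {e_i} in R^2 with its reciprocal basis {f_i},
# satisfying <f_i|e_j> = delta_ij, and the dyad expansion L = sum_i lam_i |e_i><f_i|.

def outer(u, v):
    """Dyadic product |u><v| as a 2x2 matrix (real vectors, so no conjugate)."""
    return [[u[i] * v[j] for j in range(2)] for i in range(2)]

def matvec(m, x):
    return [sum(m[i][j] * x[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e = [[1.0, 0.0], [1.0, 1.0]]      # basis (deliberately not orthogonal)
f = [[1.0, -1.0], [0.0, 1.0]]     # reciprocal basis: dot(f[i], e[j]) = delta_ij
lam = [2.0, 5.0]                  # chosen eigenvalues

# Biorthogonality check.
assert all(abs(dot(f[i], e[j]) - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

# Assemble L = sum_i lam[i] * |e_i><f_i| and verify L e_i = lam_i e_i.
L = [[sum(lam[k] * outer(e[k], f[k])[i][j] for k in range(2))
      for j in range(2)] for i in range(2)]
for k in range(2):
    Lek = matvec(L, e[k])
    assert all(abs(Lek[i] - lam[k] * e[k][i]) < 1e-12 for i in range(2))
```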
Some natural questions are: under what circumstances does this formalism work, and for what operators L are expansions in series of other operators like this possible? Can any function f be expressed in terms of the eigenfunctions (are they a Schauder basis) and under what circumstances does a point spectrum or a continuous spectrum arise? How do the formalisms for infinite-dimensional spaces and finite-dimensional spaces differ, or do they differ? Can these ideas be extended to a broader class of spaces? Answering such questions is the realm of spectral theory and requires considerable background in functional analysis and matrix algebra.
Resolution of the identity
See also: Borel functional calculus § Resolution of the identity
This section continues in the rough and ready manner of the above section using the bra–ket notation, and glossing over the many important details of a rigorous treatment.[21] A rigorous mathematical treatment may be found in various references.[22] In particular, the dimension n of the space will be taken to be finite.
Using the bra–ket notation of the above section, the identity operator may be written as:
$I=\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|$
where it is supposed as above that $\{|e_{i}\rangle \}$ are a basis and the $\{\langle f_{i}|\}$ a reciprocal basis for the space satisfying the relation:
$\langle f_{i}|e_{j}\rangle =\delta _{ij}.$
This expression of the identity operation is called a representation or a resolution of the identity.[21][22] This formal representation satisfies the basic property of the identity:
$I^{k}=I$
valid for every positive integer k.
Applying the resolution of the identity to any function in the space $|\psi \rangle $, one obtains:
$I|\psi \rangle =|\psi \rangle =\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|\psi \rangle =\sum _{i=1}^{n}c_{i}|e_{i}\rangle $
which is the generalized Fourier expansion of ψ in terms of the basis functions { ei }.[23] Here $c_{i}=\langle f_{i}|\psi \rangle $.
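The expansion can be checked numerically; here is a minimal sketch in $\mathbb {R} ^{2}$ with a hypothetical biorthogonal pair (the vectors are illustrative, not from the text): applying $I=\sum_{i}|e_{i}\rangle \langle f_{i}|$ to a vector reproduces it, with coefficients $c_{i}=\langle f_{i}|\psi \rangle$.

```python
# Sketch: a biorthogonal pair {e_i}, {f_i} in R^2 with dot(f[i], e[j]) = delta_ij,
# and the reconstruction psi = sum_i <f_i|psi> e_i from the resolution of the identity.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e = [[1.0, 0.0], [1.0, 1.0]]     # basis
f = [[1.0, -1.0], [0.0, 1.0]]    # reciprocal basis

psi = [3.0, 7.0]
c = [dot(f[i], psi) for i in range(2)]          # generalized Fourier coefficients
recon = [sum(c[i] * e[i][j] for i in range(2)) for j in range(2)]
assert all(abs(recon[j] - psi[j]) < 1e-12 for j in range(2))
```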
Given some operator equation of the form:
$O|\psi \rangle =|h\rangle $
with h in the space, this equation can be solved in the above basis through the formal manipulations:
$O|\psi \rangle =\sum _{i=1}^{n}c_{i}\left(O|e_{i}\rangle \right)=\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|h\rangle ,$
$\langle f_{j}|O|\psi \rangle =\sum _{i=1}^{n}c_{i}\langle f_{j}|O|e_{i}\rangle =\sum _{i=1}^{n}\langle f_{j}|e_{i}\rangle \langle f_{i}|h\rangle =\langle f_{j}|h\rangle ,\quad \forall j$
which converts the operator equation to a matrix equation determining the unknown coefficients cj in terms of the generalized Fourier coefficients $\langle f_{j}|h\rangle $ of h and the matrix elements $O_{ji}=\langle f_{j}|O|e_{i}\rangle $ of the operator O.
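A minimal numerical sketch of this reduction (the matrix $O$ and vector $h$ below are hypothetical): with the standard orthonormal basis of $\mathbb {R} ^{2}$, where $f_{i}=e_{i}$, the matrix equation is just $O_{ji}c_{i}=h_{j}$, solved here by Cramer's rule.

```python
# Sketch: in the standard basis of R^2 the operator equation O|psi> = |h>
# becomes the 2x2 matrix equation sum_i O_ji c_i = <f_j|h>.

O = [[2.0, 1.0], [1.0, 2.0]]
h = [4.0, 5.0]

det = O[0][0] * O[1][1] - O[0][1] * O[1][0]
c = [(h[0] * O[1][1] - O[0][1] * h[1]) / det,    # Cramer's rule for c_1
     (O[0][0] * h[1] - h[0] * O[1][0]) / det]    # and c_2

# psi = sum_i c_i e_i is just (c_1, c_2) in the standard basis; check O psi = h.
Opsi = [O[i][0] * c[0] + O[i][1] * c[1] for i in range(2)]
assert all(abs(Opsi[i] - h[i]) < 1e-12 for i in range(2))
```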
The role of spectral theory arises in establishing the nature and existence of the basis and the reciprocal basis. In particular, the basis might consist of the eigenfunctions of some linear operator L:
$L|e_{i}\rangle =\lambda _{i}|e_{i}\rangle \,;$
with the { λi } the eigenvalues of L from the spectrum of L. Then the resolution of the identity above provides the dyad expansion of L:
$LI=L=\sum _{i=1}^{n}L|e_{i}\rangle \langle f_{i}|=\sum _{i=1}^{n}\lambda _{i}|e_{i}\rangle \langle f_{i}|.$
Resolvent operator
Main article: Resolvent formalism
See also: Green's function and Dirac delta function
Using spectral theory, the resolvent operator R:
$R=(\lambda I-L)^{-1},\,$
can be evaluated in terms of the eigenfunctions and eigenvalues of L, and the Green's function corresponding to L can be found.
Applying R to some arbitrary function in the space, say $\varphi $,
$R|\varphi \rangle =(\lambda I-L)^{-1}|\varphi \rangle =\sum _{i=1}^{n}{\frac {1}{\lambda -\lambda _{i}}}|e_{i}\rangle \langle f_{i}|\varphi \rangle .$
This function has poles in the complex λ-plane at each eigenvalue of L. Thus, using the calculus of residues:
${\frac {1}{2\pi i}}\oint _{C}R|\varphi \rangle d\lambda =-\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|\varphi \rangle =-|\varphi \rangle ,$
where the line integral is over a contour C that includes all the eigenvalues of L.
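This residue identity can be checked numerically in a hypothetical 2×2 example (eigenvalues 1 and 3, orthonormal basis, so $f_{i}=e_{i}$), traversing $C$ clockwise to match the sign convention displayed above:

```python
# Numerical check: (1/(2*pi*i)) * closed-integral of R|phi> dlambda = -|phi>
# for a clockwise contour enclosing the eigenvalues. Pure Python, trapezoidal
# quadrature on the circle |lambda - 2| = 5 (the example data are hypothetical).
import cmath

eigs = [1.0, 3.0]
phi = [2.0, -1.0]               # components of phi in the eigenbasis

def resolvent_phi(lam):
    """R(lam)|phi> in the eigenbasis: component i is phi_i / (lam - eigs[i])."""
    return [phi[i] / (lam - eigs[i]) for i in range(2)]

N = 400                         # quadrature points; periodic trapezoid converges fast
total = [0j, 0j]
for k in range(N):
    t = k / N
    lam = 2 + 5 * cmath.exp(-2j * cmath.pi * t)                    # clockwise
    dlam = -2j * cmath.pi * 5 * cmath.exp(-2j * cmath.pi * t) / N  # d(lambda)
    r = resolvent_phi(lam)
    for i in range(2):
        total[i] += r[i] * dlam

result = [z / (2j * cmath.pi) for z in total]
assert all(abs(result[i] - (-phi[i])) < 1e-6 for i in range(2))
```

With a counterclockwise contour the same quadrature would return $+|\varphi \rangle$; only the orientation changes the sign.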
Suppose our functions are defined over some coordinates {xj}, that is:
$\langle x,\varphi \rangle =\varphi (x_{1},x_{2},...).$
Introducing the notation
$\langle x,y\rangle =\delta (x-y),$
where δ(x − y) = δ(x1 − y1, x2 − y2, x3 − y3, ...) is the Dirac delta function,[24] we can write
$\langle x,\varphi \rangle =\int \langle x,y\rangle \langle y,\varphi \rangle dy.$
Then:
${\begin{aligned}\left\langle x,{\frac {1}{2\pi i}}\oint _{C}{\frac {\varphi }{\lambda I-L}}d\lambda \right\rangle &={\frac {1}{2\pi i}}\oint _{C}d\lambda \left\langle x,{\frac {\varphi }{\lambda I-L}}\right\rangle \\&={\frac {1}{2\pi i}}\oint _{C}d\lambda \int dy\left\langle x,{\frac {y}{\lambda I-L}}\right\rangle \langle y,\varphi \rangle \end{aligned}}$
The function G(x, y; λ) defined by:
${\begin{aligned}G(x,y;\lambda )&=\left\langle x,{\frac {y}{\lambda I-L}}\right\rangle \\&=\sum _{i=1}^{n}\sum _{j=1}^{n}\langle x,e_{i}\rangle \left\langle f_{i},{\frac {e_{j}}{\lambda I-L}}\right\rangle \langle f_{j},y\rangle \\&=\sum _{i=1}^{n}{\frac {\langle x,e_{i}\rangle \langle f_{i},y\rangle }{\lambda -\lambda _{i}}}\\&=\sum _{i=1}^{n}{\frac {e_{i}(x)f_{i}^{*}(y)}{\lambda -\lambda _{i}}},\end{aligned}}$
is called the Green's function for operator L, and satisfies:[25]
${\frac {1}{2\pi i}}\oint _{C}G(x,y;\lambda )\,d\lambda =-\sum _{i=1}^{n}\langle x,e_{i}\rangle \langle f_{i},y\rangle =-\langle x,y\rangle =-\delta (x-y).$
Operator equations
See also: Spectral theory of ordinary differential equations and Integral equation
Consider the operator equation:
$(O-\lambda I)|\psi \rangle =|h\rangle ;$
in terms of coordinates:
$\int \langle x,(O-\lambda I)y\rangle \langle y,\psi \rangle \,dy=h(x).$
A particular case is λ = 0.
The Green's function of the previous section is:
$\langle y,G(\lambda )z\rangle =\left\langle y,(O-\lambda I)^{-1}z\right\rangle =G(y,z;\lambda ),$
and satisfies:
$\int \langle x,(O-\lambda I)y\rangle \langle y,G(\lambda )z\rangle \,dy=\int \langle x,(O-\lambda I)y\rangle \left\langle y,(O-\lambda I)^{-1}z\right\rangle \,dy=\langle x,z\rangle =\delta (x-z).$
Using this Green's function property:
$\int \langle x,(O-\lambda I)y\rangle G(y,z;\lambda )\,dy=\delta (x-z).$
Then, multiplying both sides of this equation by h(z) and integrating:
$\int dz\,h(z)\int dy\,\langle x,(O-\lambda I)y\rangle G(y,z;\lambda )=\int dy\,\langle x,(O-\lambda I)y\rangle \int dz\,h(z)G(y,z;\lambda )=h(x),$
which suggests the solution is:
$\psi (x)=\int h(z)G(x,z;\lambda )\,dz.$
That is, the function ψ(x) satisfying the operator equation is found if we can find the spectrum of O, and construct G, for example by using:
$G(x,z;\lambda )=\sum _{i=1}^{n}{\frac {e_{i}(x)f_{i}^{*}(z)}{\lambda _{i}-\lambda }}.$
There are many other ways to find G, of course.[26] See the articles on Green's functions and on Fredholm integral equations. It must be kept in mind that the above mathematics is purely formal, and a rigorous treatment involves some pretty sophisticated mathematics, including a good background knowledge of functional analysis, Hilbert spaces, distributions and so forth. Consult these articles and the references for more detail.
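A discrete sketch of this procedure (a hypothetical 2×2 example, with the convention $G(\lambda )=(O-\lambda I)^{-1}$ used in this section, so the eigen-expansion carries $\lambda _{i}-\lambda$ in the denominator): build $G$ from the eigenpairs of $O$ and check that $\psi =Gh$ solves $(O-\lambda I)\psi =h$.

```python
# Discrete sketch: G = sum_i |e_i><f_i| / (lambda_i - lambda) for the convention
# G(lambda) = (O - lambda I)^{-1}; then psi = G h solves (O - lambda I) psi = h.
# O = [[2,1],[1,2]] has orthonormal eigenvectors, so f_i = e_i here.
import math

lam = 5.0
eigs = [1.0, 3.0]
s = 1.0 / math.sqrt(2.0)
vecs = [[s, -s], [s, s]]        # orthonormal eigenvectors of O

# G as a 2x2 matrix: sum over eigenpairs of outer(v, v) / (lambda_i - lambda).
G = [[sum(vecs[k][i] * vecs[k][j] / (eigs[k] - lam) for k in range(2))
      for j in range(2)] for i in range(2)]

h = [4.0, 5.0]
psi = [G[i][0] * h[0] + G[i][1] * h[1] for i in range(2)]

# Verify (O - lam*I) psi = h.
O = [[2.0, 1.0], [1.0, 2.0]]
A = [[O[i][j] - (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
res = [A[i][0] * psi[0] + A[i][1] * psi[1] for i in range(2)]
assert all(abs(res[i] - h[i]) < 1e-9 for i in range(2))
```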
Spectral theorem and Rayleigh quotient
Optimization problems offer a useful illustration of the combinatorial significance of the eigenvalues and eigenvectors of symmetric matrices, in particular through the Rayleigh quotient with respect to a matrix M.
Theorem Let M be a symmetric matrix and let x be a non-zero vector that maximizes the Rayleigh quotient with respect to M. Then, x is an eigenvector of M with eigenvalue equal to the Rayleigh quotient. Moreover, this eigenvalue is the largest eigenvalue of M.
Proof Assume the spectral theorem. Let the eigenvalues of M be $\lambda _{1}\leq \lambda _{2}\leq \cdots \leq \lambda _{n}$, with corresponding orthonormal eigenvectors $v_{1},\ldots ,v_{n}$. Since the $\{v_{i}\}$ form an orthonormal basis, any vector x can be expressed in this basis as
$x=\sum _{i}(v_{i}^{T}x)v_{i}$
This formula is verified by projecting onto any basis vector $v_{j}$:
${\begin{aligned}v_{j}^{T}\sum _{i}(v_{i}^{T}x)v_{i}={}&\sum _{i}(v_{i}^{T}x)v_{j}^{T}v_{i}\\[4pt]={}&(v_{j}^{T}x)v_{j}^{T}v_{j}\\[4pt]={}&v_{j}^{T}x\end{aligned}}$
Now evaluate the numerator of the Rayleigh quotient with respect to x:
${\begin{aligned}x^{T}Mx={}&\left(\sum _{i}(v_{i}^{T}x)v_{i}\right)^{T}M\left(\sum _{j}(v_{j}^{T}x)v_{j}\right)\\[4pt]={}&\left(\sum _{i}(v_{i}^{T}x)v_{i}^{T}\right)\left(\sum _{j}(v_{j}^{T}x)v_{j}\lambda _{j}\right)\\[4pt]={}&\sum _{i,j}(v_{i}^{T}x)v_{i}^{T}(v_{j}^{T}x)v_{j}\lambda _{j}\\[4pt]={}&\sum _{j}(v_{j}^{T}x)(v_{j}^{T}x)\lambda _{j}\\[4pt]={}&\sum _{j}(v_{j}^{T}x)^{2}\lambda _{j}\leq \lambda _{n}\sum _{j}(v_{j}^{T}x)^{2}\\[4pt]={}&\lambda _{n}x^{T}x,\end{aligned}}$
where we used Parseval's identity in the last line. Finally we obtain that
${\frac {x^{T}Mx}{x^{T}x}}\leq \lambda _{n}$
so the Rayleigh quotient is always at most $\lambda _{n}$.[27]
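The theorem can be checked numerically for a hypothetical symmetric 2×2 matrix: the Rayleigh quotient never exceeds the largest eigenvalue, and attains it at a corresponding eigenvector.

```python
# Check: for symmetric M = [[2,1],[1,2]] (eigenvalues 1 and 3), the Rayleigh
# quotient x^T M x / x^T x is at most 3, with equality at the eigenvector (1,1).

M = [[2.0, 1.0], [1.0, 2.0]]

def rayleigh(x):
    Mx = [M[i][0] * x[0] + M[i][1] * x[1] for i in range(2)]
    return (x[0] * Mx[0] + x[1] * Mx[1]) / (x[0] * x[0] + x[1] * x[1])

lam_max = 3.0
top = [1.0, 1.0]                 # eigenvector for the largest eigenvalue
assert abs(rayleigh(top) - lam_max) < 1e-12

# A few arbitrary nonzero vectors never exceed lam_max.
for x in ([1.0, 0.0], [2.0, -1.0], [0.3, 0.7]):
    assert rayleigh(x) <= lam_max + 1e-12
```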
See also
• Functions of operators, Operator theory
• Lax pairs
• Least-squares spectral analysis
• Riesz projector
• Self-adjoint operator
• Spectrum (functional analysis), Resolvent formalism, Decomposition of spectrum (functional analysis)
• Spectral radius, Spectrum of an operator, Spectral theorem
• Spectral theory of compact operators
• Spectral theory of normal C*-algebras
• Sturm–Liouville theory, Integral equations, Fredholm theory
• Compact operators, Isospectral operators, Completeness
• Spectral geometry
• Spectral graph theory
• List of functional analysis topics
Notes
1. Jean Alexandre Dieudonné (1981). History of functional analysis. Elsevier. ISBN 0-444-86148-3.
2. William Arveson (2002). "Chapter 1: spectral theory and Banach algebras". A short course on spectral theory. Springer. ISBN 0-387-95300-0.
3. Viktor Antonovich Sadovnichiĭ (1991). "Chapter 4: The geometry of Hilbert space: the spectral theory of operators". Theory of Operators. Springer. p. 181 et seq. ISBN 0-306-11028-8.
4. Steen, Lynn Arthur. "Highlights in the History of Spectral Theory" (PDF). St. Olaf College. St. Olaf College. Archived from the original (PDF) on 4 March 2016. Retrieved 14 December 2015.
5. John von Neumann (1996). The mathematical foundations of quantum mechanics; Volume 2 in Princeton Landmarks in Mathematics series (Reprint of translation of original 1932 ed.). Princeton University Press. ISBN 0-691-02893-1.
6. E. Brian Davies, quoted on the King's College London analysis group website "Research at the analysis group".
7. Nicholas Young (1988). An introduction to Hilbert space. Cambridge University Press. p. 3. ISBN 0-521-33717-8.
8. Jean-Luc Dorier (2000). On the teaching of linear algebra; Vol. 23 of Mathematics education library. Springer. ISBN 0-7923-6539-9.
9. Cf. Spectra in mathematics and in physics Archived 2011-07-27 at the Wayback Machine by Jean Mawhin, p.4 and pp. 10-11.
10. Edgar Raymond Lorch (2003). Spectral Theory (Reprint of Oxford 1962 ed.). Textbook Publishers. p. 89. ISBN 0-7581-7156-0.
11. Nicholas Young (1988-07-21). op. cit. p. 81. ISBN 0-521-33717-8.
12. Helmut H. Schaefer; Manfred P. H. Wolff (1999). Topological vector spaces (2nd ed.). Springer. p. 36. ISBN 0-387-98726-6.
13. Dmitriĭ Petrovich Zhelobenko (2006). Principal structures and methods of representation theory. American Mathematical Society. ISBN 0821837311.
14. Edgar Raymond Lorch (2003). "Chapter III: Hilbert Space". Spectral Theory. p. 57. ISBN 0-7581-7156-0.
15. Edgar Raymond Lorch (2003). "Chapter V: The Structure of Self-Adjoint Transformations". Spectral Theory. p. 106 ff. ISBN 0-7581-7156-0.
16. Bernard Friedman (1990). Principles and Techniques of Applied Mathematics (Reprint of 1956 Wiley ed.). Dover Publications. p. 26. ISBN 0-486-66444-9.
17. PAM Dirac (1981). The principles of quantum mechanics (4th ed.). Oxford University Press. p. 29 ff. ISBN 0-19-852011-5.
18. Jürgen Audretsch (2007). "Chapter 1.1.2: Linear operators on the Hilbert space". Entangled systems: new directions in quantum physics. Wiley-VCH. p. 5. ISBN 978-3-527-40684-5.
19. R. A. Howland (2006). Intermediate dynamics: a linear algebraic approach (2nd ed.). Birkhäuser. p. 69 ff. ISBN 0-387-28059-6.
20. Bernard Friedman (1990). "Chapter 2: Spectral theory of operators". op. cit. p. 57. ISBN 0-486-66444-9.
21. See discussion in Dirac's book referred to above, and Milan Vujičić (2008). Linear algebra thoroughly explained. Springer. p. 274. ISBN 978-3-540-74637-9.
22. See, for example, the fundamental text of John von Neumann (1955). op. cit. ISBN 0-691-02893-1. and Arch W. Naylor, George R. Sell (2000). Linear Operator Theory in Engineering and Science; Vol. 40 of Applied mathematical science. Springer. p. 401. ISBN 0-387-95001-X., Steven Roman (2008). Advanced linear algebra (3rd ed.). Springer. ISBN 978-0-387-72828-5., I︠U︡riĭ Makarovich Berezanskiĭ (1968). Expansions in eigenfunctions of selfadjoint operators; Vol. 17 in Translations of mathematical monographs. American Mathematical Society. ISBN 0-8218-1567-9.
23. See for example, Gerald B Folland (2009). "Convergence and completeness". Fourier Analysis and its Applications (Reprint of Wadsworth & Brooks/Cole 1992 ed.). American Mathematical Society. pp. 77 ff. ISBN 978-0-8218-4790-9.
24. PAM Dirac (1981). op. cit. p. 60 ff. ISBN 0-19-852011-5.
25. Bernard Friedman (1956). op. cit. p. 214, Eq. 2.14. ISBN 0-486-66444-9.
26. For example, see Sadri Hassani (1999). "Chapter 20: Green's functions in one dimension". Mathematical physics: a modern introduction to its foundations. Springer. p. 553 et seq. ISBN 0-387-98579-4. and Qing-Hua Qin (2007). Green's function and boundary elements of multifield materials. Elsevier. ISBN 978-0-08-045134-3.
27. Spielman, Daniel A. "Lecture Notes on Spectral Graph Theory" Yale University (2012) http://cs.yale.edu/homes/spielman/561/ .
References
• Edward Brian Davies (1996). Spectral Theory and Differential Operators; Volume 42 in the Cambridge Studies in Advanced Mathematics. Cambridge University Press. ISBN 0-521-58710-7.
• Nelson Dunford; Jacob T Schwartz (1988). Linear Operators, Spectral Theory, Self Adjoint Operators in Hilbert Space (Part 2) (Paperback reprint of 1967 ed.). Wiley. ISBN 0-471-60847-5.{{cite book}}: CS1 maint: multiple names: authors list (link)
• Nelson Dunford; Jacob T Schwartz (1988). Linear Operators, Spectral Operators (Part 3) (Paperback reprint of 1971 ed.). Wiley. ISBN 0-471-60846-7.{{cite book}}: CS1 maint: multiple names: authors list (link)
• Sadri Hassani (1999). "Chapter 4: Spectral decomposition". Mathematical Physics: a Modern Introduction to its Foundations. Springer. ISBN 0-387-98579-4.
• "Spectral theory of linear operators", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Shmuel Kantorovitz (1983). Spectral Theory of Banach Space Operators;. Springer.
• Arch W. Naylor, George R. Sell (2000). "Chapter 5, Part B: The Spectrum". Linear Operator Theory in Engineering and Science; Volume 40 of Applied mathematical sciences. Springer. p. 411. ISBN 0-387-95001-X.
• Gerald Teschl (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. American Mathematical Society. ISBN 978-0-8218-4660-5.
• Valter Moretti (2018). Spectral Theory and Quantum Mechanics; Mathematical Foundations of Quantum Theories, Symmetries and Introduction to the Algebraic Formulation 2nd Edition. Springer. ISBN 978-3-319-70705-1.
External links
• Evans M. Harrell II: A Short History of Operator Theory
• Gregory H. Moore (1995). "The axiomatization of linear algebra: 1875-1940". Historia Mathematica. 22 (3): 262–303. doi:10.1006/hmat.1995.1025.
• Steen, L. A. (April 1973). "Highlights in the History of Spectral Theory". The American Mathematical Monthly. 80 (4): 359–381. doi:10.2307/2319079. JSTOR 2319079.
Spectral theory of normal C*-algebras
In functional analysis, every C*-algebra is isomorphic to a subalgebra of the C*-algebra ${\mathcal {B}}(H)$ of bounded linear operators on some Hilbert space $H.$ This article describes the spectral theory of closed normal subalgebras of ${\mathcal {B}}(H)$. A subalgebra $A$ of ${\mathcal {B}}(H)$ is called normal if it is commutative and closed under the $\ast $ operation: for all $x,y\in A$, we have $x^{\ast }\in A$ and $xy=yx$.[1]
Resolution of identity
See also: Projection-valued measure
Throughout, $H$ is a fixed Hilbert space.
A projection-valued measure on a measurable space $(X,\Omega ),$ where $\Omega $ is a σ-algebra of subsets of $X,$ is a mapping $\pi :\Omega \to {\mathcal {B}}(H)$ such that, for all $\omega \in \Omega ,$ $\pi (\omega )$ is a self-adjoint projection on $H$ (that is, $\pi (\omega )$ is a bounded linear operator $\pi (\omega ):H\to H$ that satisfies $\pi (\omega )=\pi (\omega )^{*}$ and $\pi (\omega )\circ \pi (\omega )=\pi (\omega )$), such that
$\pi (X)=\operatorname {Id} _{H}\quad $
(where $\operatorname {Id} _{H}$ is the identity operator of $H$) and, for every $x,y\in H,$ the function $\Omega \to \mathbb {C} $ defined by $\omega \mapsto \langle \pi (\omega )x,y\rangle $ is a complex measure on $\Omega $ (that is, a complex-valued countably additive function).
A resolution of identity[2] on a measurable space $(X,\Omega )$ is a function $\pi :\Omega \to {\mathcal {B}}(H)$ such that for every $\omega _{1},\omega _{2}\in \Omega $:
1. $\pi (\varnothing )=0$;
2. $\pi (X)=\operatorname {Id} _{H}$;
3. for every $\omega \in \Omega ,$ $\pi (\omega )$ is a self-adjoint projection on $H$;
4. for every $x,y\in H,$ the map $\pi _{x,y}:\Omega \to \mathbb {C} $ defined by $\pi _{x,y}(\omega )=\langle \pi (\omega )x,y\rangle $ is a complex measure on $\Omega $;
5. $\pi \left(\omega _{1}\cap \omega _{2}\right)=\pi \left(\omega _{1}\right)\circ \pi \left(\omega _{2}\right)$;
6. if $\omega _{1}\cap \omega _{2}=\varnothing $ then $\pi \left(\omega _{1}\cup \omega _{2}\right)=\pi \left(\omega _{1}\right)+\pi \left(\omega _{2}\right)$;
If $\Omega $ is the $\sigma $-algebra of all Borel sets on a locally compact (or compact) Hausdorff space, then the following additional requirement is added:
1. for every $x,y\in H,$ the map $\pi _{x,y}:\Omega \to \mathbb {C} $ is a regular Borel measure (this is automatically satisfied on compact metric spaces).
Conditions 2, 3, and 4 imply that $\pi $ is a projection-valued measure.
Properties
Throughout, let $\pi $ be a resolution of identity. For all $x\in H,$ $\pi _{x,x}:\Omega \to \mathbb {C} $ is a positive measure on $\Omega $ with total variation $\left\|\pi _{x,x}\right\|=\pi _{x,x}(X)=\|x\|^{2}$, and it satisfies $\pi _{x,x}(\omega )=\langle \pi (\omega )x,x\rangle =\|\pi (\omega )x\|^{2}$ for all $\omega \in \Omega .$[2]
For every $\omega _{1},\omega _{2}\in \Omega $:
• $\pi \left(\omega _{1}\right)\pi \left(\omega _{2}\right)=\pi \left(\omega _{2}\right)\pi \left(\omega _{1}\right)$ (since both are equal to $\pi \left(\omega _{1}\cap \omega _{2}\right)$).[2]
• If $\omega _{1}\cap \omega _{2}=\varnothing $ then the ranges of the maps $\pi \left(\omega _{1}\right)$ and $\pi \left(\omega _{2}\right)$ are orthogonal to each other and $\pi \left(\omega _{1}\right)\pi \left(\omega _{2}\right)=0=\pi \left(\omega _{2}\right)\pi \left(\omega _{1}\right).$[2]
• $\pi :\Omega \to {\mathcal {B}}(H)$ is finitely additive.[2]
• If $\omega _{1},\omega _{2},\ldots $ are pairwise disjoint elements of $\Omega $ whose union is $\omega $ and if $\pi \left(\omega _{i}\right)=0$ for all $i$ then $\pi (\omega )=0.$[2]
• However, $\pi :\Omega \to {\mathcal {B}}(H)$ is countably additive only in trivial situations as is now described: suppose that $\omega _{1},\omega _{2},\ldots $ are pairwise disjoint elements of $\Omega $ whose union is $\omega $ and that the partial sums $\sum _{i=1}^{n}\pi \left(\omega _{i}\right)$ converge to $\pi (\omega )$ in ${\mathcal {B}}(H)$ (with its norm topology) as $n\to \infty $; then since the norm of any projection is either $0$ or $\geq 1,$ the partial sums cannot form a Cauchy sequence unless all but finitely many of the $\pi \left(\omega _{i}\right)$ are $0.$[2]
• For any fixed $x\in H,$ the map $\pi _{x}:\Omega \to H$ defined by $\pi _{x}(\omega ):=\pi (\omega )x$ is a countably additive $H$-valued measure on $\Omega .$
• Here countably additive means that whenever $\omega _{1},\omega _{2},\ldots $ are pairwise disjoint elements of $\Omega $ whose union is $\omega ,$ then the partial sums $\sum _{i=1}^{n}\pi \left(\omega _{i}\right)x$ converge to $\pi (\omega )x$ in $H.$ Said more succinctly, $\sum _{i=1}^{\infty }\pi \left(\omega _{i}\right)x=\pi (\omega )x.$[2]
• In other words, if $\left(\omega _{i}\right)_{i=1}^{\infty }\subseteq \Omega $ is a pairwise disjoint family whose union is $\omega _{\infty }\in \Omega ,$ then $\sum _{i=1}^{n}\pi \left(\omega _{i}\right)=\pi \left(\bigcup _{i=1}^{n}\omega _{i}\right)$ (by finite additivity of $\pi $) converges to $\pi \left(\omega _{\infty }\right)$ in the strong operator topology on ${\mathcal {B}}(H)$: for every $x\in H$, the sequence of elements $\sum _{i=1}^{n}\pi \left(\omega _{i}\right)x$ converges to $\pi \left(\omega _{\infty }\right)x$ in $H$ (with respect to the norm topology).
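These properties can be seen in a finite sketch (a hypothetical example, with $X=\{0,1\}$ and $\Omega$ all subsets of $X$): taking $\pi (\omega )$ to be the sum of the orthogonal projections onto orthonormal vectors $v_{i}$ with $i\in \omega$ yields a resolution of identity, and the multiplicativity and additivity conditions can be verified directly.

```python
# Finite sketch: X = {0, 1}, Omega = all subsets, pi(omega) = sum of orthogonal
# projections onto orthonormal vectors v_i with i in omega.
import math

s = 1.0 / math.sqrt(2.0)
v = [[s, s], [s, -s]]            # an orthonormal basis of R^2

def proj(indices):
    """pi(omega) for omega = indices, as a 2x2 matrix sum of v_k v_k^T."""
    return [[sum(v[k][i] * v[k][j] for k in indices) for j in range(2)]
            for i in range(2)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]
w1, w2 = {0}, {1}

# pi(X) = Id and pi(empty) = 0.
assert all(abs(proj({0, 1})[i][j] - I2[i][j]) < 1e-12
           for i in range(2) for j in range(2))
assert all(abs(proj(set())[i][j]) < 1e-12 for i in range(2) for j in range(2))

# For disjoint sets: pi(w1) pi(w2) = pi(w1 ∩ w2) = 0, and pi is additive on unions.
prod = matmul(proj(w1), proj(w2))
assert all(abs(prod[i][j]) < 1e-12 for i in range(2) for j in range(2))
union = proj(w1 | w2)
summed = [[proj(w1)[i][j] + proj(w2)[i][j] for j in range(2)] for i in range(2)]
assert all(abs(union[i][j] - summed[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```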
L∞(π) – space of essentially bounded functions
Let $\pi :\Omega \to {\mathcal {B}}(H)$ be a resolution of identity on $(X,\Omega ).$
Essentially bounded functions
Suppose $f:X\to \mathbb {C} $ is a complex-valued $\Omega $-measurable function. There exists a unique largest open subset $V_{f}$ of $\mathbb {C} $ (ordered under subset inclusion) such that $\pi \left(f^{-1}\left(V_{f}\right)\right)=0.$[3] To see why, let $D_{1},D_{2},\ldots $ be a basis for $\mathbb {C} $'s topology consisting of open disks and suppose that $D_{i_{1}},D_{i_{2}},\ldots $ is the subsequence (possibly finite) consisting of those sets such that $\pi \left(f^{-1}\left(D_{i_{k}}\right)\right)=0$; then $D_{i_{1}}\cup D_{i_{2}}\cup \cdots =V_{f}.$ Note that, in particular, if $D$ is an open subset of $\mathbb {C} $ such that $D\cap \operatorname {Im} f=\varnothing $ then $\pi \left(f^{-1}(D)\right)=\pi (\varnothing )=0$ so that $D\subseteq V_{f}$ (although there are other ways in which $\pi \left(f^{-1}(D)\right)$ may equal 0). Indeed, $\mathbb {C} \setminus \operatorname {cl} (\operatorname {Im} f)\subseteq V_{f}.$
The essential range of $f$ is defined to be the complement of $V_{f}.$ It is the smallest closed subset of $\mathbb {C} $ that contains $f(x)$ for almost all $x\in X$ (that is, for all $x\in X$ except for those in some set $\omega \in \Omega $ such that $\pi (\omega )=0$).[3] The essential range is a closed subset of $\mathbb {C} $ so that if it is also a bounded subset of $\mathbb {C} $ then it is compact.
The function $f$ is essentially bounded if its essential range is bounded, in which case define its essential supremum, denoted by $\|f\|^{\infty },$ to be the supremum of all $|\lambda |$ as $\lambda $ ranges over the essential range of $f.$[3]
Space of essentially bounded functions
Let ${\mathcal {B}}(X,\Omega )$ be the vector space of all bounded complex-valued $\Omega $-measurable functions $f:X\to \mathbb {C} ,$ which becomes a Banach algebra when normed by $\|f\|_{\infty }:=\sup _{x\in X}|f(x)|.$ The function $\|\,\cdot \,\|^{\infty }$ is a seminorm on ${\mathcal {B}}(X,\Omega ),$ but not necessarily a norm. The kernel of this seminorm, $N^{\infty }:=\left\{f\in {\mathcal {B}}(X,\Omega ):\|f\|^{\infty }=0\right\},$ is a vector subspace of ${\mathcal {B}}(X,\Omega )$ that is a closed two-sided ideal of the Banach algebra $\left({\mathcal {B}}(X,\Omega ),\|\cdot \|_{\infty }\right).$[3] Hence the quotient of ${\mathcal {B}}(X,\Omega )$ by $N^{\infty }$ is also a Banach algebra, denoted by $L^{\infty }(\pi ):={\mathcal {B}}(X,\Omega )/N^{\infty }$ where the norm of any element $f+N^{\infty }\in L^{\infty }(\pi )$ is equal to $\|f\|^{\infty }$ (since if $f+N^{\infty }=g+N^{\infty }$ then $\|f\|^{\infty }=\|g\|^{\infty }$) and this norm makes $L^{\infty }(\pi )$ into a Banach algebra. The spectrum of $f+N^{\infty }$ in $L^{\infty }(\pi )$ is the essential range of $f.$[3] This article will follow the usual practice of writing $f$ rather than $f+N^{\infty }$ to represent elements of $L^{\infty }(\pi ).$
Theorem[3] — Let $\pi :\Omega \to {\mathcal {B}}(H)$ be a resolution of identity on $(X,\Omega ).$ There exists a closed normal subalgebra $A$ of ${\mathcal {B}}(H)$ and an isometric *-isomorphism $\Psi :L^{\infty }(\pi )\to A$ satisfying the following properties:
1. $\langle \Psi (f)x,y\rangle =\int _{X}f\operatorname {d} \pi _{x,y}$ for all $x,y\in H$ and $f\in L^{\infty }(\pi ),$ which justifies the notation $\Psi (f)=\int _{X}f\operatorname {d} \pi $;
2. $\|\Psi (f)x\|^{2}=\int _{X}|f|^{2}\operatorname {d} \pi _{x,x}$ for all $x\in H$ and $f\in L^{\infty }(\pi )$;
3. an operator $R\in {\mathcal {B}}(H)$ commutes with every element of $\operatorname {Im} \pi $ if and only if it commutes with every element of $A=\operatorname {Im} \Psi .$
4. if $f$ is a simple function equal to $f=\sum _{i=1}^{n}\lambda _{i}\mathbb {1} _{\omega _{i}},$ where $\omega _{1},\ldots ,\omega _{n}$ is a partition of $X$ and the $\lambda _{i}$ are complex numbers, then $\Psi (f)=\sum _{i=1}^{n}\lambda _{i}\pi \left(\omega _{i}\right)$ (here $\mathbb {1} $ is the characteristic function);
5. if $f$ is the limit (in the norm of $L^{\infty }(\pi )$) of a sequence of simple functions $s_{1},s_{2},\ldots $ in $L^{\infty }(\pi )$ then $\left(\Psi \left(s_{i}\right)\right)_{i=1}^{\infty }$ converges to $\Psi (f)$ in ${\mathcal {B}}(H)$ and $\|\Psi (f)\|=\|f\|^{\infty }$;
6. $\left(\|f\|^{\infty }\right)^{2}=\sup _{\|h\|\leq 1}\int _{X}|f|^{2}\operatorname {d} \pi _{h,h}$ for every $f\in L^{\infty }(\pi ).$
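In finite dimensions the theorem reduces to the eigendecomposition of a normal matrix. The following NumPy sketch is a hypothetical finite-dimensional stand-in for the general measure-theoretic statement: a Hermitian (hence normal) matrix plays the role of the operator, its eigenprojections play the role of the resolution of identity, and $\Psi (f)=\sum _{i}f(\lambda _{i})P_{i}$ realizes properties 1, 2 and 4.

```python
import numpy as np

# Hypothetical finite-dimensional sketch: for a Hermitian (hence normal)
# matrix T, the eigenprojections P_i form a resolution of identity on the
# finite set {lambda_i}, and Psi(f) = sum_i f(lambda_i) P_i realizes
# properties 1, 2 and 4 of the theorem.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = A + A.conj().T                     # Hermitian
lam, V = np.linalg.eigh(T)             # orthonormal eigenbasis
P = [np.outer(V[:, i], V[:, i].conj()) for i in range(4)]
assert np.allclose(sum(P), np.eye(4))  # pi(X) = Id

def Psi(f):
    # "integration" of f against the projection-valued measure (property 4)
    return sum(f(l) * p for l, p in zip(lam, P))

f = lambda z: np.exp(1j * z)
x, y = rng.standard_normal(4), rng.standard_normal(4)

# property 1: <Psi(f)x, y> = sum_i f(lambda_i) <pi({lambda_i})x, y>
assert np.isclose(np.vdot(y, Psi(f) @ x),
                  sum(f(l) * np.vdot(y, p @ x) for l, p in zip(lam, P)))
# property 2: ||Psi(f)x||^2 = sum_i |f(lambda_i)|^2 <pi({lambda_i})x, x>
assert np.isclose(np.linalg.norm(Psi(f) @ x) ** 2,
                  sum(abs(f(l)) ** 2 * np.vdot(x, p @ x) for l, p in zip(lam, P)).real)
```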
Spectral theorem
The maximal ideal space of a Banach algebra $A$ is the set $\sigma _{A}$ of all complex homomorphisms $A\to \mathbb {C} .$ For every $T$ in $A,$ the Gelfand transform of $T$ is the map $G(T):\sigma _{A}\to \mathbb {C} $ defined by $G(T)(h):=h(T).$ The set $\sigma _{A}$ is given the weakest topology making every $G(T):\sigma _{A}\to \mathbb {C} $ continuous. With this topology, $\sigma _{A}$ is a compact Hausdorff space and, for every $T$ in $A,$ $G(T)$ belongs to $C\left(\sigma _{A}\right),$ the space of continuous complex-valued functions on $\sigma _{A}.$ The range of $G(T)$ is the spectrum $\sigma (T),$ and the spectral radius is equal to $\max \left\{|G(T)(h)|:h\in \sigma _{A}\right\},$ which is $\leq \|T\|.$[4]
Theorem[5] — Suppose $A$ is a closed normal subalgebra of ${\mathcal {B}}(H)$ that contains the identity operator $\operatorname {Id} _{H}$ and let $\sigma =\sigma _{A}$ be the maximal ideal space of $A.$ Let $\Omega $ be the Borel subsets of $\sigma .$ For every $T$ in $A,$ let $G(T):\sigma _{A}\to \mathbb {C} $ denote the Gelfand transform of $T$ so that $G$ is an injective map $G:A\to C\left(\sigma _{A}\right).$ There exists a unique resolution of identity $\pi :\Omega \to A$ that satisfies:
$\langle Tx,y\rangle =\int _{\sigma _{A}}G(T)\operatorname {d} \pi _{x,y}\quad {\text{ for all }}x,y\in H{\text{ and all }}T\in A;$
the notation $T=\int _{\sigma _{A}}G(T)\operatorname {d} \pi $ is used to summarize this situation. Let $I:\operatorname {Im} G\to A$ be the inverse of the Gelfand transform $G:A\to C\left(\sigma _{A}\right)$ where $\operatorname {Im} G$ can be canonically identified as a subspace of $L^{\infty }(\pi ).$ Let $B$ be the closure (in the norm topology of ${\mathcal {B}}(H)$) of the linear span of $\operatorname {Im} \pi .$ Then the following are true:
1. $B$ is a closed subalgebra of ${\mathcal {B}}(H)$ containing $A.$
2. There exists a (linear multiplicative) isometric *-isomorphism $\Phi :L^{\infty }(\pi )\to B$ extending $I:\operatorname {Im} G\to A$ such that $\Phi f=\int _{\sigma _{A}}f\operatorname {d} \pi $ for all $f\in L^{\infty }(\pi ).$
• Recall that the notation $\Phi f=\int _{\sigma _{A}}f\operatorname {d} \pi $ means that $\langle (\Phi f)x,y\rangle =\int _{\sigma _{A}}f\operatorname {d} \pi _{x,y}$ for all $x,y\in H$;
• Note in particular that $T=\int _{\sigma _{A}}G(T)\operatorname {d} \pi =\Phi (G(T))$ for all $T\in A.$
• Explicitly, $\Phi $ satisfies $\Phi \left({\overline {f}}\right)=(\Phi f)^{*}$ and $\|\Phi f\|=\|f\|^{\infty }$ for every $f\in L^{\infty }(\pi )$ (so if $f$ is real valued then $\Phi (f)$ is self-adjoint).
3. If $\omega \subseteq \sigma _{A}$ is open and nonempty (which implies that $\omega \in \Omega $) then $\pi (\omega )\neq 0.$
4. A bounded linear operator $S\in {\mathcal {B}}(H)$ commutes with every element of $A$ if and only if it commutes with every element of $\operatorname {Im} \pi .$
The above result can be specialized to a single normal bounded operator.
See also
• Projection-valued measure – Mathematical operator-value measure of interest in quantum mechanics and functional analysis
• Spectral theory of compact operators
• Spectral theorem – Result about when a matrix can be diagonalized
References
1. Rudin, Walter (1991). Functional Analysis (2nd ed.). New York: McGraw Hill. pp. 292–293. ISBN 0-07-100944-2.
2. Rudin 1991, pp. 316–318.
3. Rudin 1991, pp. 318–321.
4. Rudin 1991, p. 280.
5. Rudin 1991, pp. 321–325.
• Robertson, A. P. (1973). Topological vector spaces. Cambridge England: University Press. ISBN 0-521-29882-2. OCLC 589250.
• Robertson, Alex P.; Robertson, Wendy J. (1980). Topological Vector Spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge England: Cambridge University Press. ISBN 978-0-521-29882-7. OCLC 589250.
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Spectral theory of ordinary differential equations
In mathematics, the spectral theory of ordinary differential equations is the part of spectral theory concerned with the determination of the spectrum and eigenfunction expansion associated with a linear ordinary differential equation. In his dissertation, Hermann Weyl generalized the classical Sturm–Liouville theory on a finite closed interval to second order differential operators with singularities at the endpoints of the interval, possibly semi-infinite or infinite. Unlike the classical case, the spectrum may no longer consist of just a countable set of eigenvalues, but may also contain a continuous part. In this case the eigenfunction expansion involves an integral over the continuous part with respect to a spectral measure, given by the Titchmarsh–Kodaira formula. The theory was put in its final simplified form for singular differential equations of even order by Kodaira and others, using von Neumann's spectral theorem. It has had important applications in quantum mechanics, operator theory and harmonic analysis on semisimple Lie groups.
Introduction
Spectral theory for second order ordinary differential equations on a compact interval was developed by Jacques Charles François Sturm and Joseph Liouville in the nineteenth century and is now known as Sturm–Liouville theory. In modern language, it is an application of the spectral theorem for compact operators due to David Hilbert. In his dissertation, published in 1910, Hermann Weyl extended this theory to second order ordinary differential equations with singularities at the endpoints of the interval, now allowed to be infinite or semi-infinite. He simultaneously developed a spectral theory adapted to these special operators and introduced boundary conditions in terms of his celebrated dichotomy between limit points and limit circles.
In the 1920s, John von Neumann established a general spectral theorem for unbounded self-adjoint operators, which Kunihiko Kodaira used to streamline Weyl's method. Kodaira also generalised Weyl's method to singular ordinary differential equations of even order and obtained a simple formula for the spectral measure. The same formula had also been obtained independently by E. C. Titchmarsh in 1946 (scientific communication between Japan and the United Kingdom had been interrupted by World War II). Titchmarsh had followed the method of the German mathematician Emil Hilb, who derived the eigenfunction expansions using complex function theory instead of operator theory. Other methods avoiding the spectral theorem were later developed independently by Levitan, Levinson and Yoshida, who used the fact that the resolvent of the singular differential operator could be approximated by compact resolvents corresponding to Sturm–Liouville problems for proper subintervals. Another method was found by Mark Grigoryevich Krein; his use of direction functionals was subsequently generalised by Izrail Glazman to arbitrary ordinary differential equations of even order.
Weyl applied his theory to Carl Friedrich Gauss's hypergeometric differential equation, thus obtaining a far-reaching generalisation of the transform formula of Gustav Ferdinand Mehler (1881) for the Legendre differential equation, rediscovered by the Russian physicist Vladimir Fock in 1943, and usually called the Mehler–Fock transform. The corresponding ordinary differential operator is the radial part of the Laplacian operator on 2-dimensional hyperbolic space. More generally, the Plancherel theorem for SL(2,R) of Harish Chandra and Gelfand–Naimark can be deduced from Weyl's theory for the hypergeometric equation, as can the theory of spherical functions for the isometry groups of higher dimensional hyperbolic spaces. Harish Chandra's later development of the Plancherel theorem for general real semisimple Lie groups was strongly influenced by the methods Weyl developed for eigenfunction expansions associated with singular ordinary differential equations. Equally importantly the theory also laid the mathematical foundations for the analysis of the Schrödinger equation and scattering matrix in quantum mechanics.
Solutions of ordinary differential equations
Main article: Ordinary differential equations
Reduction to standard form
Let D be the second order differential operator on (a, b) given by
$Df(x)=-p(x)f''(x)+r(x)f'(x)+q(x)f(x),$
where p is a strictly positive continuously differentiable function and q and r are continuous real-valued functions.
For x0 in (a, b), define the Liouville transformation ψ by
$\psi (x)=\int _{x_{0}}^{x}p(t)^{-1/2}\,dt$
If
$U:L^{2}(a,b)\mapsto L^{2}(\psi (a),\psi (b))$
is the unitary operator defined by
$(Uf)(\psi (x))=f(x)\times \left(\psi '(x)\right)^{-1/2},\ \ \forall x\in (a,b)$
then
$U{\frac {\mathrm {d} }{\mathrm {d} x}}U^{-1}g=g'\psi '+{\frac {1}{2}}g{\frac {\psi ''}{\psi '}}$
and
${\begin{aligned}U{\frac {\mathrm {d} ^{2}}{\mathrm {d} x^{2}}}U^{-1}g&=\left(U{\frac {\mathrm {d} }{\mathrm {d} x}}U^{-1}\right)\times \left(U{\frac {\mathrm {d} }{\mathrm {d} x}}U^{-1}\right)g\\[1ex]&={\frac {\mathrm {d} }{\mathrm {d} \psi }}\left[g'\psi '+{\frac {1}{2}}g{\frac {\psi ''}{\psi '}}\right]\cdot \psi '+{\frac {1}{2}}\left[g'\psi '+{\frac {1}{2}}g{\frac {\psi ''}{\psi '}}\right]\cdot {\frac {\psi ''}{\psi '}}\\[1ex]&=g''\psi '^{2}+2g'\psi ''+{\frac {1}{2}}g\cdot \left[{\frac {\psi '''}{\psi '}}-{\frac {1}{2}}{\frac {\psi ''^{2}}{\psi '^{2}}}\right]\end{aligned}}$
Hence,
$UDU^{-1}g=-g''+Rg'+Qg,$
where
$R={\frac {p'+r}{p^{1/2}}}$
and
$Q=q-{\frac {rp'}{4p}}+{\frac {p''}{4}}-{\frac {5p'^{2}}{16p}}$
The term in g′ can be removed using an Euler integrating factor. If S′/S = −R/2, then h = Sg satisfies
$(SUDU^{-1}S^{-1})h=-h''+Vh,$
where the potential V is given by
$V=Q+{\frac {S''}{S}}$
The differential operator can thus always be reduced to one of the form [1]
$Df=-f''+qf.$
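The reduction can be sanity-checked symbolically. The sketch below uses SymPy with hypothetical concrete data — $p=1+x^{2},$ $r=x,$ $q=e^{x},$ so that $\psi (x)=\operatorname {arcsinh} x$ has a closed form with $\psi '=p^{-1/2},$ and the test function $g=\sin $ — and verifies $UDU^{-1}g=-g''+Rg'+Qg$ with $R$ and $Q$ as displayed above.

```python
import sympy as sp

# Symbolic spot-check of the Liouville reduction with hypothetical data:
# p = 1 + x^2, r = x, q = exp(x), psi = asinh(x) (so psi' = p^(-1/2)),
# and test function g = sin of the new variable.
x, s = sp.symbols('x s', real=True)
p, r, q = 1 + x**2, x, sp.exp(x)
psi = sp.asinh(x)
g = sp.sin

F = g(psi) * sp.sqrt(sp.diff(psi, x))            # U^{-1} g as a function of x
DF = -p * sp.diff(F, x, 2) + r * sp.diff(F, x) + q * F
lhs = DF / sp.sqrt(sp.diff(psi, x))              # (U D U^{-1} g)(psi(x))

R = (sp.diff(p, x) + r) / sp.sqrt(p)
Q = (q - r * sp.diff(p, x) / (4 * p) + sp.diff(p, x, 2) / 4
     - 5 * sp.diff(p, x)**2 / (16 * p))
rhs = (-sp.diff(g(s), s, 2).subs(s, psi)
       + R * sp.diff(g(s), s).subs(s, psi) + Q * g(psi))

for x0 in (0.3, 0.7, 1.5):                       # numeric spot checks; the identity is exact
    assert abs(float((lhs - rhs).subs(x, x0))) < 1e-9
```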
Existence theorem
The following is a version of the classical Picard existence theorem for second order differential equations with values in a Banach space E.[2]
Let α, β be arbitrary elements of E, A a bounded operator on E and q a continuous function on [a, b].
Then, for c = a or c = b, the differential equation
$Df=Af$
has a unique solution f in C2([a,b], E) satisfying the initial conditions
$f(c)=\beta \,,\;f'(c)=\alpha .$
In fact a solution of the differential equation with these initial conditions is equivalent to a solution of the integral equation
$f=h+Tf$
with T the bounded linear map on C([a,b], E) defined by
$Tf(x)=\int _{c}^{x}K(x,y)f(y)\,dy,$
where K is the Volterra kernel
$K(x,t)=(x-t)(q(t)-A)$
and
$h(x)=\alpha (x-c)+\beta .$
Since $\|T^{k}\|$ tends to 0, this integral equation has a unique solution given by the Neumann series
$f=(I-T)^{-1}h=h+Th+T^{2}h+T^{3}h+\cdots $
This iterative scheme is often called Picard iteration after the French mathematician Charles Émile Picard.
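The iteration can be sketched numerically in the scalar case $E=\mathbb {C} .$ The data below are hypothetical choices — $q=0,$ $A=-1,$ $c=0,$ $\alpha =0,$ $\beta =1$ — for which $f''=(q-A)f=f$ and the exact solution is $\cosh x.$

```python
import numpy as np

# Scalar sketch (E = C) with hypothetical data: q = 0, A = -1, c = 0,
# f(0) = 1, f'(0) = 0, so f'' = f and the exact solution is cosh(x).
# Picard iteration f <- h + Tf with (Tf)(x) = int_0^x (x-t)(q(t)-A) f(t) dt,
# evaluated by the trapezoid rule.
n = 801
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
q = np.zeros(n)
A = -1.0
h = np.ones(n)                      # h(x) = alpha*(x - c) + beta with alpha = 0, beta = 1

def T(f):
    out = np.zeros(n)
    for i in range(1, n):
        g = (x[i] - x[:i + 1]) * (q[:i + 1] - A) * f[:i + 1]
        out[i] = dx * (g.sum() - 0.5 * (g[0] + g[-1]))   # trapezoid rule on [0, x_i]
    return out

f = h.copy()
for _ in range(25):                 # partial sums of the Neumann series
    f = h + T(f)

assert np.max(np.abs(f - np.cosh(x))) < 1e-4
```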
Fundamental eigenfunctions
If f is twice continuously differentiable (i.e. C2) on (a, b) satisfying Df = λf, then f is called an eigenfunction of L with eigenvalue λ.
• In the case of a compact interval [a, b] and q continuous on [a, b], the existence theorem implies that for c = a or c = b and every complex number λ there is a unique C2 eigenfunction fλ on [a, b] with fλ(c) and f′λ(c) prescribed. Moreover, for each x in [a, b], fλ(x) and f′λ(x) are holomorphic functions of λ.
• For an arbitrary interval (a, b) and q continuous on (a, b), the existence theorem implies that for c in (a, b) and every complex number λ there is a unique C2 eigenfunction fλ on (a, b) with fλ(c) and f′λ(c) prescribed. Moreover, for each x in (a, b), fλ(x) and f′λ(x) are holomorphic functions of λ.
Green's formula
If f and g are C2 functions on (a, b), the Wronskian W(f, g) is defined by
$W(f,g)(x)=f(x)g'(x)-f'(x)g(x).$
Green's formula - which in this one-dimensional case is a simple integration by parts - states that for x, y in (a, b)
$\int _{x}^{y}(Df)g-f(Dg)\,dt=W(f,g)(y)-W(f,g)(x).$
When q is continuous and f, g are C2 on the compact interval [a, b], this formula also holds for x = a or y = b.
When f and g are eigenfunctions for the same eigenvalue, then
${\frac {d}{dx}}W(f,g)=0,$
so that W(f, g) is independent of x.
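A quick numerical illustration, with the hypothetical choices $q=0$ and $\lambda =4$: both $\sin 2x$ and $\cos 2x$ solve $-u''=4u,$ and their Wronskian is the constant $-2.$

```python
import numpy as np

# Sketch with q = 0 and eigenvalue lambda = 4: f = sin(2x) and g = cos(2x)
# both satisfy -u'' = 4u, and W(f, g) = f g' - f' g equals -2 at every x.
x = np.linspace(0.0, np.pi, 101)
W = np.sin(2 * x) * (-2 * np.sin(2 * x)) - (2 * np.cos(2 * x)) * np.cos(2 * x)
assert np.allclose(W, -2.0)
```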
Classical Sturm–Liouville theory
Main article: Sturm–Liouville theory
Let [a, b] be a finite closed interval, q a real-valued continuous function on [a, b] and let H0 be the space of C2 functions f on [a, b] satisfying the Robin boundary conditions
${\begin{cases}\cos \alpha \,f(a)-\sin \alpha \,f'(a)=0,\\[0.5ex]\cos \beta \,f(b)-\sin \beta \,f'(b)=0,\end{cases}}$
with inner product
$(f,g)=\int _{a}^{b}f(x){\overline {g(x)}}\,dx.$
In practice usually one of the two standard boundary conditions:
• Dirichlet boundary condition f(c) = 0
• Neumann boundary condition f′(c) = 0
is imposed at each endpoint c = a, b.
The differential operator D given by
$Df=-f''+qf$
acts on H0. A function f in H0 is called an eigenfunction of D (for the above choice of boundary values) if Df = λ f for some complex number λ, the corresponding eigenvalue. By Green's formula, D is formally self-adjoint on H0, since the Wronskian W(f, g) vanishes if both f, g satisfy the boundary conditions:
$(Df,g)=(f,Dg),\quad {\text{ for }}f,g\in H_{0}.$
As a consequence, exactly as for a self-adjoint matrix in finite dimensions,
• the eigenvalues of D are real;
• the eigenspaces for distinct eigenvalues are orthogonal.
It turns out that the eigenvalues can be described by the maximum-minimum principle of Rayleigh–Ritz[3] (see below). In fact it is easy to see a priori that the eigenvalues are bounded below because the operator D is itself bounded below on H0:
$(Df,f)\geq M(f,f)$ for some finite (possibly negative) constant $M$.
In fact, integrating by parts,
$(Df,f)=\left[-f'{\overline {f}}\right]_{a}^{b}+\int |f'|^{2}+\int q|f|^{2}.$
For Dirichlet or Neumann boundary conditions, the first term vanishes and the inequality holds with M = inf q.
For general Robin boundary conditions the first term can be estimated using an elementary Peter-Paul version of Sobolev's inequality:
"Given ε > 0, there is constant R > 0 such that |f(x)|2 ≤ ε (f′, f′) + R (f, f) for all f in C1[a, b]."
In fact, since
$|f(b)-f(x)|\leq (b-a)^{1/2}\cdot \|f'\|_{2},$
only an estimate for f(b) is needed and this follows by replacing f(x) in the above inequality by $(x-a)^{n}(b-a)^{-n}f(x)$ for n sufficiently large.
Green's function (regular case)
From the theory of ordinary differential equations, there are unique fundamental eigenfunctions φλ(x), χλ(x) such that
• D φλ = λ φλ, φλ(a) = sin α, φλ'(a) = cos α
• D χλ = λ χλ, χλ(b) = sin β, χλ'(b) = cos β
which at each point, together with their first derivatives, depend holomorphically on λ. Let
$\omega (\lambda )=W(\phi _{\lambda },\chi _{\lambda }),$
which is an entire holomorphic function of λ.
This function ω(λ) plays the role of the characteristic polynomial of D. Indeed, the uniqueness of the fundamental eigenfunctions implies that its zeros are precisely the eigenvalues of D and that each non-zero eigenspace is one-dimensional. In particular there are at most countably many eigenvalues of D and, if there are infinitely many, they must tend to infinity. It turns out that the zeros of ω(λ) also have multiplicity one (see below).
If λ is not an eigenvalue of D on H0, define the Green's function by
$G_{\lambda }(x,y)={\begin{cases}\phi _{\lambda }(x)\chi _{\lambda }(y)/\omega (\lambda )&{\text{ for }}x\leq y\\[1ex]\chi _{\lambda }(x)\phi _{\lambda }(y)/\omega (\lambda )&{\text{ for }}y\leq x.\end{cases}}$
This kernel defines an operator on the inner product space C[a,b] via
$(G_{\lambda }f)(x)=\int _{a}^{b}G_{\lambda }(x,y)f(y)\,dy.$
Since Gλ(x,y) is continuous on [a, b] × [a, b], it defines a Hilbert–Schmidt operator on the Hilbert space completion H of C[a, b] = H1 (or equivalently of the dense subspace H0), taking values in H1. This operator carries H1 into H0. When λ is real, Gλ(x,y) = Gλ(y,x) is also real, so defines a self-adjoint operator on H. Moreover,
• Gλ (D − λ) = I on H0
• Gλ carries H1 into H0, and (D − λ) Gλ = I on H1.
Thus the operator Gλ can be identified with the resolvent (D − λ)−1.
Spectral theorem
Theorem — The eigenvalues of D are real of multiplicity one and form an increasing sequence λ1 < λ2 < ⋯ tending to infinity.
The corresponding normalised eigenfunctions form an orthonormal basis of H0.
The k-th eigenvalue of D is given by the minimax principle
$\lambda _{k}=\max _{\dim G=k-1}\,\min _{f\perp G}{(Df,f) \over (f,f)}.$
In particular if q1 ≤ q2, then
$\lambda _{k}(D_{1})\leq \lambda _{k}(D_{2}).$
In fact let T = Gλ for λ large and negative. Then T defines a compact self-adjoint operator on the Hilbert space H. By the spectral theorem for compact self-adjoint operators, H has an orthonormal basis consisting of eigenvectors ψn of T with Tψn = μn ψn, where μn tends to zero. The range of T contains H0 so is dense. Hence 0 is not an eigenvalue of T. The resolvent properties of T imply that ψn lies in H0 and that
$D\psi _{n}=\left(\lambda +{\frac {1}{\mu _{n}}}\right)\psi _{n}$
The minimax principle follows because if
$\lambda (G)=\min _{f\perp G}{\frac {(Df,f)}{(f,f)}},$
then λ(G) = λk for the linear span of the first k − 1 eigenfunctions. For any other (k − 1)-dimensional subspace G, some f in the linear span of the first k eigenvectors must be orthogonal to G. Hence λ(G) ≤ (Df,f)/(f,f) ≤ λk.
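Two consequences of the theorem — the bracketing of $\lambda _{n}$ between $n^{2}+\inf q$ and $n^{2}+\sup q$ (for $[a,b]=[0,\pi ]$ with Dirichlet conditions) and the monotonicity of eigenvalues in $q$ — can be checked with a finite-difference discretization; the potentials below are hypothetical choices.

```python
import numpy as np

# Finite-difference sketch on [0, pi] with Dirichlet conditions: discretize
# D = -d^2/dx^2 + q as a tridiagonal matrix and check (i) lambda_n stays in
# [n^2 + inf q, n^2 + sup q] up to discretization error and (ii) the minimax
# monotonicity lambda_k(D1) <= lambda_k(D2) when q1 <= q2 pointwise.
N = 800
h = np.pi / (N + 1)
x = h * np.arange(1, N + 1)

def eigenvalues(q):
    D = (np.diag(2.0 / h**2 + q)
         + np.diag(-np.ones(N - 1) / h**2, 1)
         + np.diag(-np.ones(N - 1) / h**2, -1))
    return np.linalg.eigvalsh(D)

q1 = np.sin(x)              # 0 <= q1 <= 1 on [0, pi]
q2 = q1 + 1.0               # q1 <= q2 pointwise
lam1, lam2 = eigenvalues(q1), eigenvalues(q2)

n = np.arange(1, 6)
assert np.all(lam1[:5] >= n**2 + 0.0 - 0.01)   # lower bound n^2 + inf q
assert np.all(lam1[:5] <= n**2 + 1.0 + 0.01)   # upper bound n^2 + sup q
assert np.all(lam1 <= lam2 + 1e-9)             # monotonicity in q
```

In the discrete picture the monotonicity is exactly Weyl's inequality for Hermitian matrices, since $D_{2}-D_{1}$ is a nonnegative diagonal matrix.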
Wronskian as a Fredholm determinant
For simplicity, suppose that m ≤ q(x) ≤ M on [0, π] with Dirichlet boundary conditions. The minimax principle shows that
$n^{2}+m\leq \lambda _{n}(D)\leq n^{2}+M.$
It follows that the resolvent $(D-\lambda )^{-1}$ is a trace-class operator whenever λ is not an eigenvalue of D, and hence that the Fredholm determinant $\det \left(I-\mu (D-\lambda )^{-1}\right)$ is defined.
The Dirichlet boundary conditions imply that
$\omega (\lambda )=\phi _{\lambda }(b).$
Using Picard iteration, Titchmarsh showed that φλ(b), and hence ω(λ), is an entire function of finite order 1/2:
$\omega (\lambda )={\mathcal {O}}\left(e^{\sqrt {|\lambda |}}\right)$
At a zero μ of ω(λ), φμ(b) = 0. Moreover,
$\psi (x)=\partial _{\lambda }\varphi _{\lambda }(x)|_{\lambda =\mu }$
satisfies (D − μ)ψ = φμ. Thus
$\omega (\lambda )=(\lambda -\mu )\psi (b)+{\mathcal {O}}((\lambda -\mu )^{2})$
This implies that[4]
μ is a simple zero of ω(λ).
For otherwise ψ(b) = 0, so that ψ would have to lie in H0. But then
$(\phi _{\mu },\phi _{\mu })=((D-\mu )\psi ,\phi _{\mu })=(\psi ,(D-\mu )\phi _{\mu })=0,$
a contradiction.
On the other hand, the distribution of the zeros of the entire function ω(λ) is already known from the minimax principle.
By the Hadamard factorization theorem, it follows that[5]
$\omega (\lambda )=C\prod (1-\lambda /\lambda _{n}),$
for some non-zero constant C.
Hence
$\det(I-\mu (D-\lambda )^{-1})=\prod \left(1-{\mu \over \lambda _{n}-\lambda }\right)=\prod {1-(\lambda +\mu )/\lambda _{n} \over 1-\lambda /\lambda _{n}}={\omega (\lambda +\mu ) \over \omega (\lambda )}.$
In particular if 0 is not an eigenvalue of D
$\omega (\mu )=\omega (0)\cdot \det(I-\mu D^{-1}).$
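The product formula can be made concrete for the hypothetical case $q=0$ on $[0,\pi ]$ with Dirichlet conditions: there $\varphi _{\lambda }(x)=\sin({\sqrt {\lambda }}x)/{\sqrt {\lambda }},$ so $\omega (\lambda )=\sin(\pi {\sqrt {\lambda }})/{\sqrt {\lambda }},$ the eigenvalues are $\lambda _{n}=n^{2},$ and the Hadamard product reduces to the classical factorization of the sine.

```python
import math

# Sketch for q = 0 on [0, pi], Dirichlet conditions: omega(lambda) =
# sin(pi sqrt(lambda))/sqrt(lambda), lambda_n = n^2, and the Hadamard
# factorization reads omega(lambda) = omega(0) * prod_n (1 - lambda/n^2)
# with omega(0) = pi.
lam = 0.3
omega = math.sin(math.pi * math.sqrt(lam)) / math.sqrt(lam)
prod = math.pi                      # omega(0) = lim sin(pi sqrt(l))/sqrt(l) = pi
for k in range(1, 200_000):         # truncated infinite product
    prod *= 1.0 - lam / k**2
assert abs(omega - prod) < 1e-4
```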
Tools from abstract spectral theory
Functions of bounded variation
See also: Bounded variation, Lebesgue–Stieltjes integration, and Riesz representation theorem
A function ρ(x) of bounded variation[6] on a closed interval [a, b] is a complex-valued function such that its total variation V(ρ), the supremum of the variations
$\sum _{r=0}^{k-1}|\rho (x_{r+1})-\rho (x_{r})|$
over all dissections
$a=x_{0}<x_{1}<\dots <x_{k}=b$
is finite. The real and imaginary parts of ρ are real-valued functions of bounded variation. If ρ is real-valued and normalised so that ρ(a) = 0, it has a canonical decomposition as the difference of two bounded non-decreasing functions:
$\rho (x)=\rho _{+}(x)-\rho _{-}(x),$
where ρ+(x) and ρ–(x) are the total positive and negative variation of ρ over [a, x].
If f is a continuous function on [a, b] its Riemann–Stieltjes integral with respect to ρ
$\int _{a}^{b}f(x)\,d\rho (x)$
is defined to be the limit of approximating sums
$\sum _{r=0}^{k-1}f(x_{r})(\rho (x_{r+1})-\rho (x_{r}))$
as the mesh of the dissection, given by sup |xr+1 − xr|, tends to zero.
This integral satisfies
$\left|\int _{a}^{b}f(x)\,d\rho (x)\right|\leq V(\rho )\cdot \|f\|_{\infty }$
and thus defines a bounded linear functional dρ on C[a, b] with norm ‖dρ‖ = V(ρ).
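A minimal sketch, with the hypothetical jump function $\rho =0.7\cdot \chi _{[1/2,1]}$ on $[0,1]$ (so $V(\rho )=0.7$): the approximating sums converge to $0.7\,f(1/2)$ for continuous $f,$ and the bound above holds.

```python
import numpy as np

# Sketch: Riemann-Stieltjes sums against the jump function
# rho(t) = 0.7 * 1_{t >= 1/2} converge to 0.7 * f(1/2) for continuous f,
# and |int f drho| <= V(rho) * ||f||_inf with V(rho) = 0.7.
f = np.cos
rho = lambda t: 0.7 * (t >= 0.5)

def rs_sum(k):
    # Riemann-Stieltjes approximating sum over a uniform dissection into k parts
    t = np.linspace(0.0, 1.0, k + 1)
    return float(np.sum(f(t[:-1]) * np.diff(rho(t))))

approx = rs_sum(100_001)
assert abs(approx - 0.7 * np.cos(0.5)) < 1e-4
assert abs(approx) <= 0.7 * 1.0     # V(rho) * sup|f| on [0, 1]
```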
Every bounded linear functional μ on C[a, b] has an absolute value |μ| defined for non-negative f by[7]
$|\mu |(f)=\sup _{0\leq |g|\leq f}|\mu (g)|.$
The form |μ| extends linearly to a bounded linear form on C[a, b] with norm ‖μ‖ and satisfies the characterizing inequality
$|\mu (f)|\leq |\mu |(|f|)$
for f in C[a, b]. If μ is real, i.e. is real-valued on real-valued functions, then
$\mu =|\mu |-(|\mu |-\mu )\equiv \mu _{+}-\mu _{-}$
gives a canonical decomposition as a difference of positive forms, i.e. forms that are non-negative on non-negative functions.
Every positive form μ extends uniquely to the linear span of non-negative bounded lower semicontinuous functions g by the formula[8]
$\mu (g)=\lim \mu (f_{n}),$
where the non-negative continuous functions fn increase pointwise to g.
The same therefore applies to an arbitrary bounded linear form μ, so that a function ρ of bounded variation may be defined by[9]
$\rho (x)=\mu (\chi _{[a,x]}),$
where χA denotes the characteristic function of a subset A of [a, b]. Thus μ = dρ and ‖μ‖ = ‖dρ‖. Moreover μ+ = dρ+ and μ– = dρ–.
This correspondence between functions of bounded variation and bounded linear forms is a special case of the Riesz representation theorem.
The support of μ = dρ is the complement of all points x in [a, b] where ρ is constant on some neighborhood of x; by definition it is a closed subset A of [a, b]. Moreover, μ((1 − χA)f) = 0, so that μ(f) = 0 if f vanishes on A.
Spectral measure
See also: Spectral theorem and Projection-valued measure
Let H be a Hilbert space and $T$ a self-adjoint bounded operator on H with $0\leq T\leq I$, so that the spectrum $\sigma (T)$ of $T$ is contained in $[0,1]$. If $p(t)$ is a complex polynomial, then by the spectral mapping theorem
$\sigma (p(T))=p(\sigma (T))$
and hence
$\|p(T)\|\leq \|p\|_{\infty }$
where $\|\cdot \|_{\infty }$ denotes the uniform norm on C[0, 1]. By the Weierstrass approximation theorem, polynomials are uniformly dense in C[0, 1]. It follows that $f(T)$ can be defined for every $f\in C[0,1]$, with
$\sigma (f(T))=f(\sigma (T))$
and
$\|f(T)\|\leq \|f\|_{\infty }.$
If $0\leq g\leq 1$ is a lower semicontinuous function on [0, 1], for example the characteristic function $\chi _{[0,\alpha ]}$ of a subinterval of [0, 1], then $g$ is a pointwise increasing limit of non-negative $f_{n}\in C[0,1]$.
According to Szőkefalvi-Nagy,[10] if $\xi $ is a vector in H, then the vectors
$\eta _{n}=f_{n}(T)\xi $
form a Cauchy sequence in H, since, for $n\geq m$,
$\|\eta _{n}-\eta _{m}\|^{2}\leq (\eta _{n},\xi )-(\eta _{m},\xi ),$
and $(\eta _{n},\xi )=(f_{n}(T)\xi ,\xi )$ is bounded and increasing, so has a limit.
It follows that $g(T)$ can be defined by[lower-alpha 1]
$g(T)\xi =\lim f_{n}(T)\xi .$
If $\xi $ and $\eta $ are vectors in H, then
$\mu _{\xi ,\eta }(f)=(f(T)\xi ,\eta )$
defines a bounded linear form $\mu _{\xi ,\eta }$ on H. By the Riesz representation theorem
$\mu _{\xi ,\eta }=d\rho _{\xi ,\eta }$
for a unique normalised function $\rho _{\xi ,\eta }$ of bounded variation on [0, 1].
$d\rho _{\xi ,\eta }$ (or sometimes slightly incorrectly $\rho _{\xi ,\eta }$ itself) is called the spectral measure determined by $\xi $ and $\eta .$
The operator $g(T)$ is accordingly uniquely characterised by the equation
$(g(T)\xi ,\eta )=\mu _{\xi ,\eta }(g)=\int _{0}^{1}g(\lambda )\,d\rho _{\xi ,\eta }(\lambda ).$
The spectral projection $E(\lambda )$ is defined by
$E(\lambda )=\chi _{[0,\lambda ]}(T),$
so that
$\rho _{\xi ,\eta }(\lambda )=(E(\lambda )\xi ,\eta ).$
It follows that
$g(T)=\int _{0}^{1}g(\lambda )\,dE(\lambda ),$
which is understood in the sense that for any vectors $\xi $ and $\eta $,
$(g(T)\xi ,\eta )=\int _{0}^{1}g(\lambda )\,d(E(\lambda )\xi ,\eta )=\int _{0}^{1}g(\lambda )\,d\rho _{\xi ,\eta }(\lambda ).$
For a single vector $\xi ,\,\mu _{\xi }=\mu _{\xi ,\xi }$ is a positive form on [0, 1] (in other words proportional to a probability measure on [0, 1]) and $\rho _{\xi }=\rho _{\xi ,\xi }$ is non-negative and non-decreasing. Polarisation shows that all the forms $\mu _{\xi ,\eta }$ can naturally be expressed in terms of such positive forms, since
$\mu _{\xi ,\eta }={\frac {1}{4}}\left(\mu _{\xi +\eta }+i\mu _{\xi +i\eta }-\mu _{\xi -\eta }-i\mu _{\xi -i\eta }\right)$
If the vector $\xi $ is such that the linear span of the vectors $(T^{n}\xi )$ is dense in H, i.e. $\xi $ is a cyclic vector for $T$, then the map $U$ defined by
$U(f)=f(T)\xi ,\,C[0,1]\rightarrow H$
satisfies
$(Uf_{1},Uf_{2})=\int _{0}^{1}f_{1}(\lambda ){\overline {f_{2}(\lambda )}}\,d\rho _{\xi }(\lambda ).$
Let $L_{2}([0,1],d\rho _{\xi })$ denote the Hilbert space completion of $C[0,1]$ associated with the possibly degenerate inner product on the right hand side.[lower-alpha 2] Thus $U$ extends to a unitary transformation of $L_{2}([0,1],\rho _{\xi })$ onto H. $UTU^{\ast }$ is then just multiplication by $\lambda $ on $L_{2}([0,1],d\rho _{\xi })$; and more generally $Uf(T)U^{\ast }$ is multiplication by $f(\lambda )$. In this case, the support of $d\rho _{\xi }$ is exactly $\sigma (T)$, so that
the self-adjoint operator becomes a multiplication operator on the space of functions on its spectrum with inner product given by the spectral measure.
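In finite dimensions this is transparent: for a hypothetical self-adjoint matrix $T$ with spectrum in $[0,1]$ and a cyclic vector $\xi ,$ the measure $d\rho _{\xi }$ is the sum of point masses $w_{i}=|\langle v_{i},\xi \rangle |^{2}$ at the eigenvalues, and $U$ carries $f$ to $f(T)\xi $ isometrically.

```python
import numpy as np

# Finite-dimensional sketch of the concluding statement: for self-adjoint T
# with spectrum in (0, 1) and a (generically cyclic) vector xi, d rho_xi is
# the sum of point masses w_i = <v_i, xi>^2 at the eigenvalues, and
# (f(T) xi, g(T) xi) = int f g d rho_xi, i.e. T acts as multiplication
# by lambda on L^2(d rho_xi).
rng = np.random.default_rng(1)
lam = np.sort(rng.uniform(0.05, 0.95, 5))         # distinct eigenvalues in (0, 1)
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # orthonormal eigenvectors
T = V @ np.diag(lam) @ V.T
xi = rng.standard_normal(5)
w = (V.T @ xi) ** 2                               # point masses of d rho_xi

f = lambda t: t ** 3 - 2 * t                      # a continuous test function
fT = V @ np.diag(f(lam)) @ V.T                    # f(T) via functional calculus
assert np.isclose(xi @ (fT @ xi), np.sum(f(lam) * w))

# U sends f to f(T) xi; check the isometry (U f1, U f2) = int f1 f2 d rho_xi
g = lambda t: np.cos(t)
gT = V @ np.diag(g(lam)) @ V.T
assert np.isclose((fT @ xi) @ (gT @ xi), np.sum(f(lam) * g(lam) * w))
```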
Weyl–Titchmarsh–Kodaira theory
The eigenfunction expansion associated with singular differential operators of the form
$Df=-(pf')'+qf$
on an open interval (a, b) requires an initial analysis of the behaviour of the fundamental eigenfunctions near the endpoints a and b to determine possible boundary conditions there. Unlike the regular Sturm–Liouville case, in some circumstances spectral values of D can have multiplicity 2. In the development outlined below standard assumptions will be imposed on p and q that guarantee that the spectrum of D has multiplicity one everywhere and is bounded below. This includes almost all important applications; modifications required for the more general case will be discussed later.
Having chosen the boundary conditions, as in the classical theory the resolvent of D, (D + R)−1 for R large and positive, is given by an operator T corresponding to a Green's function constructed from two fundamental eigenfunctions. In the classical case T was a compact self-adjoint operator; in this case T is just a self-adjoint bounded operator with 0 ≤ T ≤ I. The abstract theory of spectral measure can therefore be applied to T to give the eigenfunction expansion for D.
The central idea in the proof of Weyl and Kodaira can be explained informally as follows. Assume that the spectrum of D lies in [1, ∞), set $T=D^{-1}$, and let
$E(\lambda )=\chi _{[\lambda ^{-1},1]}(T)$
be the spectral projection of D corresponding to the interval [1, λ]. For an arbitrary function f define
$f(x,\lambda )=(E(\lambda )f)(x).$
f(x, λ) may be regarded as a differentiable map into the space of functions of bounded variation ρ; or equivalently as a differentiable map
$x\mapsto (d_{\lambda }f)(x)$
into the Banach space E of bounded linear functionals dρ on C[α,β] whenever [α, β] is a compact subinterval of [1, ∞).
Weyl's fundamental observation was that dλ f satisfies a second order ordinary differential equation taking values in E:
$D(d_{\lambda }f)=\lambda \cdot d_{\lambda }f.$
After imposing initial conditions on the first two derivatives at a fixed point c, this equation can be solved explicitly in terms of the two fundamental eigenfunctions and the "initial value" functionals
$(d_{\lambda }f)(c)=d_{\lambda }f(c,\cdot ),\quad (d_{\lambda }f)^{\prime }(c)=d_{\lambda }f_{x}(c,\cdot ).$
This point of view may now be turned on its head: f(c, λ) and fx(c, λ) may be written as
$f(c,\lambda )=(f,\xi _{1}(\lambda )),\quad f_{x}(c,\lambda )=(f,\xi _{2}(\lambda )),$
where ξ1(λ) and ξ2(λ) are given purely in terms of the fundamental eigenfunctions. The functions of bounded variation
$\sigma _{ij}(\lambda )=(\xi _{i}(\lambda ),\xi _{j}(\lambda ))$
determine a spectral measure on the spectrum of D and can be computed explicitly from the behaviour of the fundamental eigenfunctions (the Titchmarsh–Kodaira formula).
Limit circle and limit point for singular equations
Let q(x) be a continuous real-valued function on (0, ∞) and let D be the second order differential operator
$Df=-f''+qf$
on (0, ∞). Fix a point c in (0, ∞) and, for complex λ, let $\varphi _{\lambda },\theta _{\lambda }$ be the unique fundamental eigenfunctions of D on (0, ∞) satisfying
$(D-\lambda )\varphi _{\lambda }=0,\quad (D-\lambda )\theta _{\lambda }=0$
together with the initial conditions at c
$\varphi _{\lambda }(c)=1,\,\varphi _{\lambda }'(c)=0,\,\theta _{\lambda }(c)=0,\,\theta _{\lambda }'(c)=1.$
Then their Wronskian satisfies
$W(\varphi _{\lambda },\theta _{\lambda })=\varphi _{\lambda }\theta _{\lambda }'-\theta _{\lambda }\varphi _{\lambda }'\equiv 1,$
since it is constant and equal to 1 at c.
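This constancy of the Wronskian can be checked numerically. In the sketch below, the potential q(x) = x, the value λ = 1/2 and the base point c = 1 are illustrative assumptions; the fundamental eigenfunctions are integrated from their initial conditions at c and the Wronskian is evaluated along the way.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative choices (not from the text): q(x) = x, lam = 0.5, c = 1.
q = lambda x: x
lam = 0.5
c = 1.0

def rhs(x, y):
    # y = [phi, phi', theta, theta']; the equation Df = lam*f gives f'' = (q - lam) f
    return [y[1], (q(x) - lam) * y[0], y[3], (q(x) - lam) * y[2]]

# Initial conditions at c: phi(c) = 1, phi'(c) = 0, theta(c) = 0, theta'(c) = 1.
sol = solve_ivp(rhs, (c, 5.0), [1.0, 0.0, 0.0, 1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

xs = np.linspace(c, 5.0, 9)
phi, dphi, theta, dtheta = sol.sol(xs)
wronskian = phi * dtheta - theta * dphi   # W(phi, theta), constant = 1
print(np.max(np.abs(wronskian - 1.0)))    # close to 0
```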
Let λ be non-real and 0 < x < ∞. If the complex number $\mu $ is such that $f=\varphi +\mu \theta $ satisfies the boundary condition $\cos \beta \,f(x)-\sin \beta \,f'(x)=0$ for some $\beta $ (or, equivalently, $f'(x)/f(x)$ is real) then, using integration by parts, one obtains
$\operatorname {Im} (\lambda )\int _{c}^{x}|\varphi +\mu \theta |^{2}=\operatorname {Im} (\mu ).$
Therefore, the set of μ satisfying this equation is not empty. This set is a circle in the complex μ-plane. Points μ in its interior are characterized by
$\int _{c}^{x}|\varphi +\mu \theta |^{2}<{\operatorname {Im} (\mu ) \over \operatorname {Im} (\lambda )}$
if x > c and by
$\int _{x}^{c}|\varphi +\mu \theta |^{2}<{\operatorname {Im} (\mu ) \over \operatorname {Im} (\lambda )}$
if x < c.
Let Dx be the closed disc enclosed by the circle. By definition these closed discs are nested and decrease as x approaches 0 or ∞. So in the limit, the circles tend either to a limit circle or a limit point at each end. If $\mu $ is a limit point or a point on the limit circle at 0 or ∞, then $f=\varphi +\mu \theta $ is square integrable (L2) near 0 or ∞, since $\mu $ lies in Dx for all x > c (in the ∞ case) and so $\int _{c}^{x}|\varphi +\mu \theta |^{2}<{\operatorname {Im} (\mu ) \over \operatorname {Im} (\lambda )}$ is bounded independent of x. In particular:[11]
• there are always non-zero solutions of Df = λf which are square integrable near 0 resp. ∞;
• in the limit circle case all solutions of Df = λf are square integrable near 0 resp. ∞.
The radius of the disc Dx can be calculated to be
$\left|{1 \over {2\operatorname {Im} (\lambda )\int _{c}^{x}|\theta |^{2}}}\right|$
and this implies that in the limit point case $\theta $ cannot be square integrable near 0 resp. ∞. Therefore, we have a converse to the second statement above:
• in the limit point case there is exactly one non-zero solution (up to scalar multiples) of Df = λf which is square integrable near 0 resp. ∞.
On the other hand, if Dg = λ′ g for another value λ′, then
$h(x)=g(x)-(\lambda ^{\prime }-\lambda )\int _{c}^{x}(\varphi _{\lambda }(x)\theta _{\lambda }(y)-\theta _{\lambda }(x)\varphi _{\lambda }(y))g(y)\,dy$
satisfies Dh = λh, so that
$g(x)=c_{1}\varphi _{\lambda }+c_{2}\theta _{\lambda }+(\lambda ^{\prime }-\lambda )\int _{c}^{x}(\varphi _{\lambda }(x)\theta _{\lambda }(y)-\theta _{\lambda }(x)\varphi _{\lambda }(y))g(y)\,dy.$
This formula may also be obtained directly by the variation of constant method from (D − λ)g = (λ′ − λ)g. Using this to estimate g, it follows that[11]
• the limit point/limit circle behaviour at 0 or ∞ is independent of the choice of λ.
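The variation-of-constants formula above can be verified numerically in a simple case. In the sketch below, q = 0 and c = 0 are illustrative assumptions, so that φλ = cosh x, θλ = sinh x for λ = −1, while g = cosh 2x solves Dg = λ′g with λ′ = −4, g(0) = 1, g′(0) = 0.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative case: q = 0, c = 0, so D f = -f''.
# phi(x) = cosh x, theta(x) = sinh x solve Df = lam*f with lam = -1,
# and g(x) = cosh 2x solves Dg = lam'*g with lam' = -4.
lam, lamp = -1.0, -4.0
phi, theta = np.cosh, np.sinh
g = lambda x: np.cosh(2 * x)

def rhs_formula(x):
    c1, c2 = 1.0, 0.0            # c1 = g(0), c2 = g'(0)
    kern = lambda y: (phi(x) * theta(y) - theta(x) * phi(y)) * g(y)
    integral, _ = quad(kern, 0.0, x)
    return c1 * phi(x) + c2 * theta(x) + (lamp - lam) * integral

for x in [0.5, 1.0, 2.0]:
    assert abs(rhs_formula(x) - g(x)) < 1e-6
print("variation-of-constants formula verified")
```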
More generally if Dg = (λ – r) g for some function r(x), then[12]
$g(x)=c_{1}\varphi _{\lambda }+c_{2}\theta _{\lambda }-\int _{c}^{x}(\varphi _{\lambda }(x)\theta _{\lambda }(y)-\theta _{\lambda }(x)\varphi _{\lambda }(y))r(y)g(y)\,dy.$
From this it follows that[12]
• if r is continuous at 0, then D + r is limit point or limit circle at 0 precisely when D is,
so that in particular[13]
• if q(x) − a/x2 is continuous at 0, then D is limit point at 0 if and only if a ≥ 3/4.
Similarly
• if r has a finite limit at ∞, then D + r is limit point or limit circle at ∞ precisely when D is,
so that in particular[14]
• if q has a finite limit at ∞, then D is limit point at ∞.
Many more elaborate criteria to be limit point or limit circle can be found in the mathematical literature.
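For the borderline potential a/x² in the criterion above, the classification can be read off from the indicial exponents: near 0 the solutions of −f″ + (a/x²)f = 0 behave like xˢ with s(s − 1) = a, and xˢ is square integrable near 0 exactly when 2s > −1. The sketch below (plain enumeration, under these standard asymptotics) recovers the threshold a = 3/4.

```python
import math

# For Df = -f'' + (a/x^2) f, solutions of Df = 0 near 0 behave like x^s
# with s(s-1) = a (indicial equation).  x^s is in L^2 near 0 iff 2s > -1.
# Limit circle at 0 means both exponents give L^2 solutions.
def is_limit_point_at_zero(a):
    s_minus = (1 - math.sqrt(1 + 4 * a)) / 2   # the smaller exponent (a >= -1/4)
    # the larger exponent always yields an L^2 solution near 0;
    # limit point means the smaller one does not
    return not (2 * s_minus > -1)

assert not is_limit_point_at_zero(0.5)   # a < 3/4: limit circle
assert is_limit_point_at_zero(0.75)      # a >= 3/4: limit point
assert is_limit_point_at_zero(2.0)
print("limit point at 0 exactly when a >= 3/4")
```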
Green's function (singular case)
Consider the differential operator
$D_{0}f=-(p_{0}f')'+q_{0}f$
on (0, ∞) with q0 positive and continuous on (0, ∞) and p0 continuously differentiable in [0, ∞), positive in (0, ∞) and p0(0) = 0.
Moreover, assume that after reduction to standard form D0 becomes the equivalent operator
$Df=-f''+qf$
on (0, ∞) where q has a finite limit at ∞. Thus
• D is limit point at ∞.
At 0, D may be either limit circle or limit point. In either case there is an eigenfunction Φ0 with DΦ0 = 0 and Φ0 square integrable near 0. In the limit circle case, Φ0 determines a boundary condition at 0:
$W(f,\Phi _{0})(0)=0.$
For complex λ, let Φλ and Χλ satisfy
• (D – λ)Φλ = 0, (D – λ)Χλ = 0
• Χλ square integrable near infinity
• Φλ square integrable at 0 if 0 is limit point
• Φλ satisfies the boundary condition above if 0 is limit circle.
Let
$\omega (\lambda )=W(\Phi _{\lambda },\mathrm {X} _{\lambda }),$
a constant which vanishes precisely when Φλ and Χλ are proportional, i.e. λ is an eigenvalue of D for these boundary conditions.
On the other hand, this cannot occur if Im λ ≠ 0 or if λ is negative.[11]
Indeed, if D f = λf with q0 – λ ≥ δ > 0, then by Green's formula (Df,f) = (f,Df), since W(f,f*) is constant. So λ must be real. If f is taken to be real-valued in the D0 realization, then for 0 < x < y
$[p_{0}ff']_{x}^{y}=\int _{x}^{y}(q_{0}-\lambda )|f|^{2}+p_{0}(f')^{2}.$
Since p0(0) = 0 and f is integrable near 0, p0f f′ must vanish at 0. Setting x = 0, it follows that f(y) f′(y) > 0, so that f2 is increasing, contradicting the square integrability of f near ∞.
Thus, adding a positive scalar to q, it may be assumed that
$\omega (\lambda )\neq 0~~{\text{ if }}\lambda \notin [1,\infty ).$
If ω(λ) ≠ 0, the Green's function Gλ(x,y) at λ is defined by
$G_{\lambda }(x,y)={\begin{cases}\Phi _{\lambda }(x)\mathrm {X} _{\lambda }(y)/\omega (\lambda )&(x\leq y),\\[1ex]\mathrm {X} _{\lambda }(x)\Phi _{\lambda }(y)/\omega (\lambda )&(x\geq y).\end{cases}}$
and is independent of the choice of Φλ and Χλ.
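As a concrete sketch, take D = −f″ + f on (0, ∞) (so q ≡ 1, limit circle at 0 and limit point at ∞) with the boundary condition f(0) = 0 and λ = 0; then Φ(x) = sinh x and Χ(x) = e^(−x). These choices are illustrative assumptions, and the Wronskian normalisation below is fixed so that u = ∫ G(·, y) f(y) dy satisfies −u″ + u = f (sign conventions for ω differ between sources).

```python
import numpy as np

# Illustrative data: D = -f'' + f on (0, inf), lambda = 0, f(0) = 0,
# Phi(x) = sinh x (boundary condition at 0), X(x) = exp(-x) (L^2 near inf).
Phi = np.sinh
X = lambda x: np.exp(-x)
omega = 1.0   # Wronskian normalised so that D(Tf) = f

def G(x, y):
    lo, hi = np.minimum(x, y), np.maximum(x, y)
    return Phi(lo) * X(hi) / omega

x = np.linspace(0.0, 8.0, 1601)
h = x[1] - x[0]
f = np.exp(-4.0 * (x - 2.0) ** 2)        # smooth, effectively supported in (0, 8)

# trapezoidal weights for the quadrature u(x_i) = int G(x_i, y) f(y) dy
w = np.full_like(x, h)
w[0] = w[-1] = h / 2
u = (G(x[:, None], x[None, :]) * f[None, :]) @ w

# check D u = f in the interior by second differences
upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h ** 2
residual = np.max(np.abs(-upp + u[1:-1] - f[1:-1]))
print(residual)   # small
```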
In the examples there will be a third "bad" eigenfunction Ψλ defined and holomorphic for λ not in [1, ∞) such that Ψλ satisfies the boundary conditions at neither 0 nor ∞. This means that for λ not in [1, ∞)
• W(Φλ,Ψλ) is nowhere vanishing;
• W(Χλ,Ψλ) is nowhere vanishing.
In this case Χλ is proportional to Φλ + m(λ) Ψλ, where
$m(\lambda )=-W(\Phi _{\lambda },\mathrm {X} _{\lambda })/W(\Psi _{\lambda },\mathrm {X} _{\lambda }).$
Let H1 be the space of square integrable continuous functions on (0, ∞) and let H0 be
• the space of C2 functions f on (0, ∞) of compact support if D is limit point at 0
• the space of C2 functions f on (0, ∞) with W(f, Φ0) = 0 at 0 and with f = 0 near ∞ if D is limit circle at 0.
Define T = G0 by
$(Tf)(x)=\int _{0}^{\infty }G_{0}(x,y)f(y)\,dy.$
Then T D = I on H0, D T = I on H1 and the operator D is bounded below on H0:
$(Df,f)\geq (f,f).$
Thus T is a self-adjoint bounded operator with 0 ≤ T ≤ I.
Formally T = D−1. The corresponding operators Gλ defined for λ not in [1, ∞) can be formally identified with
$(D-\lambda )^{-1}=T(I-\lambda T)^{-1}$
and satisfy Gλ (D – λ) = I on H0, (D – λ)Gλ = I on H1.
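The resolvent identity has a transparent finite-dimensional analogue that can be checked directly with matrices (a toy illustration; the 5×5 positive definite matrix is an arbitrary assumption standing in for D with spectrum in [1, ∞)).

```python
import numpy as np

# Toy analogue: a symmetric matrix D with spectrum in [1, inf), T = D^{-1}.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
D = A @ A.T + np.eye(5)                  # symmetric, D >= I
T = np.linalg.inv(D)
I = np.eye(5)

# 0 <= T <= I, as in the text
evals = np.linalg.eigvalsh(T)
assert evals.min() > 0 and evals.max() <= 1 + 1e-12

lam = -2.0                               # any lam outside [1, inf)
lhs = np.linalg.inv(D - lam * I)
rhs = T @ np.linalg.inv(I - lam * T)     # (D - lam)^{-1} = T (I - lam T)^{-1}
assert np.allclose(lhs, rhs)
print("resolvent identity verified")
```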
Spectral theorem and Titchmarsh–Kodaira formula
Theorem.[11][15][16] — For every real number λ let ρ(λ) be defined by the Titchmarsh–Kodaira formula:
$\rho (\lambda )=\lim _{\delta \downarrow 0}\lim _{\varepsilon \downarrow 0}{\frac {1}{\pi }}\int _{\delta }^{\lambda +\delta }\operatorname {Im} m(t+i\varepsilon )\,dt.$
Then ρ(λ) is a lower semicontinuous non-decreasing function of λ and if
$(Uf)(\lambda )=\int _{0}^{\infty }f(x)\Phi (x,\lambda )\,dx,$
then U defines a unitary transformation of L2(0, ∞) onto L2([1,∞), dρ) such that UDU−1 corresponds to multiplication by λ.
The inverse transformation U−1 is given by
$(U^{-1}g)(x)=\int _{1}^{\infty }g(\lambda )\Phi (x,\lambda )\,d\rho (\lambda ).$
The spectrum of D equals the support of dρ.
Kodaira gave a streamlined version[17][18] of Weyl's original proof.[11] (M.H. Stone had previously shown[19] how part of Weyl's work could be simplified using von Neumann's spectral theorem.)
In fact for T =D−1 with 0 ≤ T ≤ I, the spectral projection E(λ) of T is defined by
$E(\lambda )=\chi _{[\lambda ^{-1},1]}(T)$
It is also the spectral projection of D corresponding to the interval [1, λ].
For f in H1 define
$f(x,\lambda )=(E(\lambda )f)(x).$
f(x, λ) may be regarded as a differentiable map into the space of functions ρ of bounded variation; or equivalently as a differentiable map
$x\mapsto (d_{\lambda }f)(x)$
into the Banach space E of bounded linear functionals dρ on C[α, β] for any compact subinterval [α, β] of [1, ∞).
The functionals (or measures) dλ f(x) satisfy the following E-valued second order ordinary differential equation:
$D(d_{\lambda }f)=\lambda \cdot d_{\lambda }f,$
with initial conditions at c in (0, ∞)
$(d_{\lambda }f)(c)=d_{\lambda }f(c,\cdot )=\mu ^{(0)},\quad (d_{\lambda }f)^{\prime }(c)=d_{\lambda }f_{x}(c,\cdot )=\mu ^{(1)}.$
If φλ and χλ are the special eigenfunctions adapted to c, then
$d_{\lambda }f(x)=\varphi _{\lambda }(x)\mu ^{(0)}+\chi _{\lambda }(x)\mu ^{(1)}.$
Moreover,
$\mu ^{(k)}=d_{\lambda }(f,\xi _{\lambda }^{(k)}),$
where
$\xi _{\lambda }^{(k)}=DE(\lambda )\eta ^{(k)},$
with
$\eta _{z}^{(0)}(y)=G_{z}(c,y),\,\,\,\,\eta _{z}^{(1)}(y)=\partial _{x}G_{z}(c,y),\,\,\,\,(z\notin [1,\infty )).$
(As the notation suggests, ξλ(0) and ξλ(1) do not depend on the choice of z.)
Setting
$\sigma _{ij}(\lambda )=(\xi _{\lambda }^{(i)},\xi _{\lambda }^{(j)}),$
it follows that
$d_{\lambda }(E(\lambda )\eta _{z}^{(i)},\eta _{z}^{(j)})=|\lambda -z|^{-2}\cdot d_{\lambda }\sigma _{ij}(\lambda ).$
On the other hand, there are holomorphic functions a(λ), b(λ) such that
• φλ + a(λ) χλ is proportional to Φλ;
• φλ + b(λ) χλ is proportional to Χλ.
Since W(φλ, χλ) = 1, the Green's function is given by
$G_{\lambda }(x,y)={\begin{cases}{\dfrac {(\varphi _{\lambda }(x)+a(\lambda )\chi _{\lambda }(x))(\varphi _{\lambda }(y)+b(\lambda )\chi _{\lambda }(y))}{b(\lambda )-a(\lambda )}}&(x\leq y),\\[1ex]{\dfrac {(\varphi _{\lambda }(x)+b(\lambda )\chi _{\lambda }(x))(\varphi _{\lambda }(y)+a(\lambda )\chi _{\lambda }(y))}{b(\lambda )-a(\lambda )}}&(y\leq x).\end{cases}}$
Direct calculation[20] shows that
$(\eta _{z}^{(i)},\eta _{z}^{(j)})=\operatorname {Im} M_{ij}(z)/\operatorname {Im} z,$
where the so-called characteristic matrix Mij(z) is given by
$M_{00}(z)={\frac {a(z)b(z)}{a(z)-b(z)}},\,\,M_{01}(z)=M_{10}(z)={\frac {a(z)+b(z)}{2(a(z)-b(z))}},\,\,M_{11}(z)={\frac {1}{a(z)-b(z)}}.$
Hence
$\int _{-\infty }^{\infty }(\operatorname {Im} z)\cdot |\lambda -z|^{-2}\,d\sigma _{ij}(\lambda )=\operatorname {Im} M_{ij}(z),$
which immediately implies
$\sigma _{ij}(\lambda )=\lim _{\delta \downarrow 0}\lim _{\varepsilon \downarrow 0}{\frac {1}{\pi }}\int _{\delta }^{\lambda +\delta }\operatorname {Im} M_{ij}(t+i\varepsilon )\,dt.$
(This is a special case of the "Stieltjes inversion formula".)
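The Stieltjes inversion formula can be illustrated numerically for the simplest possible measure, a unit point mass at λ₀ = 2, whose transform is M(z) = 1/(λ₀ − z); both choices are illustrative assumptions. The imaginary part of M on a horizontal line Im z = ε is a Poisson kernel, and (1/π) times its integral over an interval containing λ₀ tends to the mass of that interval.

```python
import numpy as np
from scipy.integrate import quad

# Unit point mass at lam0 = 2: M(z) = 1/(lam0 - z), so
# Im M(t + i*eps) = eps / ((lam0 - t)^2 + eps^2)  (a Poisson kernel).
lam0 = 2.0
im_M = lambda t, eps: eps / ((lam0 - t) ** 2 + eps ** 2)

for eps in [1e-2, 1e-3, 1e-4]:
    # 'points' tells quad where the sharp peak sits
    val, _ = quad(im_M, 1.0, 3.0, args=(eps,), points=[lam0])
    print(eps, val / np.pi)   # -> 1 (the mass of (1, 3)) as eps -> 0
```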
Setting ψλ(0) = φλ and ψλ(1) = χλ, it follows that
$(E(\mu )f)(x)=\sum _{i,j}\int _{0}^{\mu }\int _{0}^{\infty }\psi _{\lambda }^{(i)}(x)\psi _{\lambda }^{(j)}(y)f(y)\,dy\,d\sigma _{ij}(\lambda )=\int _{0}^{\mu }\int _{0}^{\infty }\Phi _{\lambda }(x)\Phi _{\lambda }(y)f(y)\,dy\,d\rho (\lambda ).$
This identity is equivalent to the spectral theorem and Titchmarsh–Kodaira formula.
Application to the hypergeometric equation
See also: Legendre equation and Hypergeometric equation
The Mehler–Fock transform[21][22][23] concerns the eigenfunction expansion associated with the Legendre differential operator D
$Df=-((x^{2}-1)f')'=-(x^{2}-1)f''-2xf'$
on (1, ∞). The eigenfunctions are the Legendre functions[24]
$P_{-1/2+i{\sqrt {\lambda }}}(\cosh r)={1 \over 2\pi }\int _{0}^{2\pi }\left({\sin \theta +ie^{-r}\cos \theta \over \cos \theta -ie^{-r}\sin \theta }\right)^{{1 \over 2}+i{\sqrt {\lambda }}}\,d\theta $
with eigenvalue λ ≥ 0. The two Mehler–Fock transformations are[25]
$Uf(\lambda )=\int _{1}^{\infty }f(x)\,P_{-1/2+i{\sqrt {\lambda }}}(x)\,dx$
and
$U^{-1}g(x)=\int _{0}^{\infty }g(\lambda )\,P_{-1/2+i{\sqrt {\lambda }}}(x)\,{1 \over 2}\tanh \pi {\sqrt {\lambda }}\,d\lambda .$
(Often this is written in terms of the variable τ = √λ.)
Mehler and Fock studied this differential operator because it arose as the radial component of the Laplacian on 2-dimensional hyperbolic space. More generally,[26] consider the group G = SU(1,1) consisting of complex matrices of the form
${\begin{bmatrix}\alpha &\beta \\{\overline {\beta }}&{\overline {\alpha }}\end{bmatrix}}$
with determinant |α|2 − |β|2 = 1.
Generalisations and alternative approaches
A Weyl function can be defined at a singular endpoint a, giving rise to a singular version of Weyl–Titchmarsh–Kodaira theory.[27] This applies, for example, to radial Schrödinger operators
$Df=-f''+{\frac {\ell (\ell +1)}{x^{2}}}f+V(x)f,\qquad x\in (0,\infty )$
The whole theory can also be extended to the case where the coefficients are allowed to be measures.[28]
Notes
1. This is a limit in the strong operator topology.
2. A bona fide inner product is defined on the quotient by the subspace of null functions $f$, i.e. those with $\mu _{\xi }(|f|^{2})=0$. Alternatively in this case the support of the measure is $\sigma (T)$, so the right hand side defines a (non-degenerate) inner product on $C(\sigma (T))$.
References
Citations
1. Titchmarsh 1962, p. 22
2. Dieudonné 1969
3. Courant & Hilbert 1989
4. Titchmarsh 1962
5. Titchmarsh 1939
6. Burkill 1951, pp. 50–52
7. Loomis 1953, p. 40
8. Loomis 1953, pp. 30–31
9. Kolmogorov & Fomin 1975, pp. 374–376
10. Riesz & Szőkefalvi-Nagy 1990, p. 263
11. Weyl 1910a.
12. Bellman 1969, p. 116
13. Reed & Simon 1975, p. 159
14. Reed & Simon 1975, p. 154
15. Titchmarsh 1946
16. Kodaira 1949, pp. 935–936
17. Kodaira 1949, pp. 929–932; for omitted details, see Kodaira 1950, pp. 529–536
18. Dieudonné 1988
19. Stone 1932
20. Kodaira 1950, pp. 534–535
21. Mehler 1881.
22. Fock 1943, pp. 253–256
23. Vilenkin 1968
24. Terras 1984, pp. 261–276
25. Lebedev 1972
26. Vilenkin 1968
27. Kostenko, Sakhnovich & Teschl 2012, pp. 1699–1747
28. Eckhardt & Teschl 2013, pp. 151–224
Bibliography
• Akhiezer, Naum Ilich; Glazman, Izrael Markovich (1993), Theory of Linear Operators in Hilbert Space, Dover, ISBN 978-0-486-67748-4
• Bellman, Richard (1969), Stability Theory of Differential Equations, Dover, ISBN 978-0-486-62210-1
• Burkill, J.C. (1951), The Lebesgue Integral, Cambridge Tracts in Mathematics and Mathematical Physics, vol. 40, Cambridge University Press, ISBN 978-0-521-04382-3
• Coddington, Earl A.; Levinson, Norman (1955), Theory of Ordinary Differential equations, McGraw-Hill, ISBN 978-0-07-011542-2
• Courant, Richard; Hilbert, David (1989), Method of Mathematical Physics, Vol. I, Wiley-Interscience, ISBN 978-0-471-50447-4
• Dieudonné, Jean (1969), Treatise on Analysis, Vol. I [Foundations of Modern Analysis], Academic Press, ISBN 978-1-4067-2791-3
• Dieudonné, Jean (1988), Treatise on Analysis, Vol. VIII, Academic Press, ISBN 978-0-12-215507-9
• Dunford, Nelson; Schwartz, Jacob T. (1963), Linear Operators, Part II Spectral Theory. Self Adjoint Operators in Hilbert space, Wiley Interscience, ISBN 978-0-471-60847-9
• Fock, V.A. (1943), "On the representation of an arbitrary function by an integral involving Legendre's functions with a complex index", C. R. Acad. Sci. URSS, 39
• Hille, Einar (1969), Lectures on Ordinary Differential Equations, Addison-Wesley, ISBN 978-0-201-53083-4
• Kodaira, Kunihiko (1949), "The eigenvalue problem for ordinary differential equations of the second order and Heisenberg's theory of S-matrices", American Journal of Mathematics, 71 (4): 921–945, doi:10.2307/2372377, JSTOR 2372377
• Kodaira, Kunihiko (1950), "On ordinary differential equations of any even order and the corresponding eigenfunction expansions", American Journal of Mathematics, 72 (3): 502–544, doi:10.2307/2372051, JSTOR 2372051
• Kolmogorov, A.N.; Fomin, S.V. (1975). Introductory Real Analysis. Dover. ISBN 978-0-486-61226-3.
• Kostenko, Aleksey; Sakhnovich, Alexander; Teschl, Gerald (2012), "Weyl–Titchmarsh Theory for Schrödinger Operators with Strongly Singular Potentials", Int Math Res Notices, 2012, arXiv:1007.0136, doi:10.1093/imrn/rnr065
• Lebedev, N.N. (1972), Special Functions and Their Applications, Dover, ISBN 978-0-486-60624-8
• Loomis, Lynn H. (1953), An Introduction to Abstract Harmonic Analysis, van Nostrand
• Mehler, F.G. (1881), "Ueber mit der Kugel- und Cylinderfunctionen verwandte Function und ihre Anwendung in der Theorie der Elektricitätsverteilung", Mathematische Annalen, 18 (2): 161–194, doi:10.1007/BF01445847, S2CID 122590188
• Reed, Michael; Simon, Barry (1975), Methods of Modern Mathematical Physics II, Fourier Analysis, Self-Adjointness, Academic Press, ISBN 978-0-12-585002-5
• Riesz, Frigyes; Szőkefalvi-Nagy, Béla (1990). Functional Analysis. Dover Publications. ISBN 0-486-66289-6.
• Stone, Marshall Harvey (1932), Linear transformations in Hilbert space and Their Applications to Analysis, AMS Colloquium Publications, vol. 16, ISBN 978-0-8218-1015-6
• Terras, Audrey (1984), "Non-Euclidean harmonic analysis, the central limit theorem, and long transmission lines with random inhomogeneities", J. Multivariate Anal., 15 (2), doi:10.1016/0047-259X(84)90031-9
• Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. AMS Graduate Studies in Mathematics. Vol. 99. ISBN 978-0-8218-4660-5.
• Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. AMS Graduate Studies in Mathematics. Vol. 140. ISBN 978-0-8218-8328-0.
• Titchmarsh, Edward Charles (1939), Theory of Functions, Oxford University Press
• Titchmarsh, Edward Charles (1946), Eigenfunction expansions associated with second order differential equations, Vol. I (1st ed.), Oxford University Press
• Titchmarsh, Edward Charles (1962), Eigenfunction expansions associated with second order differential equations, Vol. I (2nd ed.), Oxford University Press, ISBN 978-0-608-08254-7
• Eckhardt, Jonathan; Teschl, Gerald (2013), "Sturm–Liouville operators with measure-valued coefficients", Journal d'Analyse Mathématique, 120, arXiv:1105.3755, doi:10.1007/s11854-013-0018-x
• Vilenkin, Naoum Iakovlevitch (1968). Special Functions and the Theory of Group Representations. Translations of Mathematical Monographs. Vol. 22. American Mathematical Society. ISBN 978-0-8218-1572-4.
• Weidmann, Joachim (1987). Spectral Theory of Ordinary Differential Operators. Lecture Notes in Mathematics. Vol. 1258. Springer-Verlag. ISBN 978-0-387-17902-5.
• Weyl, Hermann (1910a), "Über gewöhnliche Differentialgleichungen mit Singularitäten und die zugehörigen Entwicklungen willkürlicher Functionen", Mathematische Annalen, 68 (2): 220–269, doi:10.1007/BF01474161, S2CID 119727984
• Weyl, Hermann (1910b), "Über gewöhnliche Differentialgleichungen mit Singulären Stellen und ihre Eigenfunktionen", Nachr. Akad. Wiss. Göttingen. Math.-Phys.: 442–446
• Weyl, Hermann (1935), "Über das Pick-Nevanlinnasche Interpolationsproblem und sein infinitesimales Analogen", Annals of Mathematics, 36 (1): 230–254, doi:10.2307/1968677, JSTOR 1968677
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Spectral theory and *-algebras
Basic concepts
• Involution/*-algebra
• Banach algebra
• B*-algebra
• C*-algebra
• Noncommutative topology
• Projection-valued measure
• Spectrum
• Spectrum of a C*-algebra
• Spectral radius
• Operator space
Main results
• Gelfand–Mazur theorem
• Gelfand–Naimark theorem
• Gelfand representation
• Polar decomposition
• Singular value decomposition
• Spectral theorem
• Spectral theory of normal C*-algebras
Special Elements/Operators
• Isospectral
• Normal operator
• Hermitian/Self-adjoint operator
• Unitary operator
• Unit
Spectrum
• Krein–Rutman theorem
• Normal eigenvalue
• Spectrum of a C*-algebra
• Spectral radius
• Spectral asymmetry
• Spectral gap
Decomposition
• Decomposition of a spectrum
• Continuous
• Point
• Residual
• Approximate point
• Compression
• Direct integral
• Discrete
• Spectral abscissa
Spectral Theorem
• Borel functional calculus
• Min-max theorem
• Positive operator-valued measure
• Projection-valued measure
• Riesz projector
• Rigged Hilbert space
• Spectral theorem
• Spectral theory of compact operators
• Spectral theory of normal C*-algebras
Special algebras
• Amenable Banach algebra
• With an Approximate identity
• Banach function algebra
• Disk algebra
• Nuclear C*-algebra
• Uniform algebra
• Von Neumann algebra
• Tomita–Takesaki theory
Finite-Dimensional
• Alon–Boppana bound
• Bauer–Fike theorem
• Numerical range
• Schur–Horn theorem
Generalizations
• Dirac spectrum
• Essential spectrum
• Pseudospectrum
• Structure space (Shilov boundary)
Miscellaneous
• Abstract index group
• Banach algebra cohomology
• Cohen–Hewitt factorization theorem
• Extensions of symmetric operators
• Fredholm theory
• Limiting absorption principle
• Schröder–Bernstein theorems for operator algebras
• Sherman–Takeda theorem
• Unbounded operator
Examples
• Wiener algebra
Applications
• Almost Mathieu operator
• Corona theorem
• Hearing the shape of a drum (Dirichlet eigenvalue)
• Heat kernel
• Kuznetsov trace formula
• Lax pair
• Proto-value function
• Ramanujan graph
• Rayleigh–Faber–Krahn inequality
• Spectral geometry
• Spectral method
• Spectral theory of ordinary differential equations
• Sturm–Liouville theory
• Superstrong approximation
• Transfer operator
• Transform theory
• Weyl law
• Wiener–Khinchin theorem
Spectrum of a matrix
In mathematics, the spectrum of a matrix is the set of its eigenvalues.[1][2][3] More generally, if $T\colon V\to V$ is a linear operator on any finite-dimensional vector space, its spectrum is the set of scalars $\lambda $ such that $T-\lambda I$ is not invertible. The determinant of the matrix equals the product of its eigenvalues. Similarly, the trace of the matrix equals the sum of its eigenvalues.[4][5][6] From this point of view, we can define the pseudo-determinant for a singular matrix to be the product of its nonzero eigenvalues (the density of multivariate normal distribution will need this quantity).
In many applications, such as PageRank, one is interested in the dominant eigenvalue, i.e. that which is largest in absolute value. In other applications, the smallest eigenvalue is important, but in general, the whole spectrum provides valuable information about a matrix.
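The relations just described can be checked directly with a numerical linear algebra library; the matrices below are arbitrary illustrative choices.

```python
import numpy as np

# The spectrum of a matrix is its multiset of eigenvalues; the determinant
# and trace are the product and sum of the spectrum.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
spectrum = np.linalg.eigvals(M)

assert np.isclose(np.prod(spectrum), np.linalg.det(M))
assert np.isclose(np.sum(spectrum), np.trace(M))

# pseudo-determinant of a singular matrix: product of its nonzero eigenvalues
S = np.array([[1.0, 1.0], [1.0, 1.0]])   # rank 1, eigenvalues {0, 2}
nonzero = [lam for lam in np.linalg.eigvals(S) if abs(lam) > 1e-12]
print(np.prod(nonzero))                  # approximately 2
```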
Definition
Let V be a finite-dimensional vector space over some field K and suppose T : V → V is a linear map. The spectrum of T, denoted σT, is the multiset of roots of the characteristic polynomial of T. Thus the elements of the spectrum are precisely the eigenvalues of T, and the multiplicity of an eigenvalue λ in the spectrum equals the dimension of the generalized eigenspace of T for λ (also called the algebraic multiplicity of λ).
Now, fix a basis B of V over K and suppose M ∈ MatK (V) is a matrix. Define the linear map T : V → V pointwise by Tx = Mx, where on the right-hand side x is interpreted as a column vector and M acts on x by matrix multiplication. We now say that x ∈ V is an eigenvector of M if x is an eigenvector of T. Similarly, λ ∈ K is an eigenvalue of M if it is an eigenvalue of T, and with the same multiplicity, and the spectrum of M, written σM, is the multiset of all such eigenvalues.
Related notions
The eigendecomposition (or spectral decomposition) of a diagonalizable matrix is a decomposition of a diagonalizable matrix into a specific canonical form whereby the matrix is represented in terms of its eigenvalues and eigenvectors.
The spectral radius of a square matrix is the largest absolute value of its eigenvalues. In spectral theory, the spectral radius of a bounded linear operator is the supremum of the absolute values of the elements in the spectrum of that operator.
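The spectral radius can likewise be computed directly, and compared with Gelfand's formula ρ(A) = lim ‖Aᵏ‖^(1/k) (a standard fact not stated above); the 2×2 matrix below is an arbitrary illustrative choice.

```python
import numpy as np

# Spectral radius as the largest |eigenvalue|.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # eigenvalues -1 and -2
rho = max(abs(np.linalg.eigvals(A)))
assert np.isclose(rho, 2.0)

# Gelfand's formula: ||A^k||^(1/k) -> rho(A) for any matrix norm.
approx = np.linalg.norm(np.linalg.matrix_power(A, 50), 2) ** (1 / 50)
print(rho, approx)   # the second value approaches the first
```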
Notes
1. Golub & Van Loan (1996, p. 310)
2. Kreyszig (1972, p. 273)
3. Nering (1970, p. 270)
4. Golub & Van Loan (1996, p. 310)
5. Herstein (1964, pp. 271–272)
6. Nering (1970, pp. 115–116)
References
• Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8
• Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016
• Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8
• Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646
Spectrum of a sentence
In mathematical logic, the spectrum of a sentence is the set of natural numbers occurring as the size of a finite model in which a given sentence is true. By a result in descriptive complexity, a set of natural numbers is a spectrum if and only if it can be recognized in non-deterministic exponential time.
Definition
Let ψ be a sentence in first-order logic. The spectrum of ψ is the set of natural numbers n such that there is a finite model for ψ with n elements.
If the vocabulary for ψ consists only of relational symbols, then ψ can be regarded as a sentence in existential second-order logic (ESOL) quantified over the relations, over the empty vocabulary. A generalised spectrum is the set of models of a general ESOL sentence.
Examples
• The spectrum of the first-order formula
$\exists z,o~\forall a,b,c~\exists d,e$
$a+z=a=z+a~\land ~a\cdot z=z=z\cdot a~\land ~a+d=z$
$\land ~a+b=b+a~\land ~a\cdot (b+c)=a\cdot b+a\cdot c~\land ~(a+b)+c=a+(b+c)$
$\land ~a\cdot o=a=o\cdot a~\land ~a\cdot e=o~\land ~(a\cdot b)\cdot c=a\cdot (b\cdot c)$
is $\{p^{n}\mid p{\text{ prime}},n\in \mathbb {N} \}$, the set of powers of a prime number. Indeed, with $z$ for $0$ and $o$ for $1$, this sentence describes the set of fields; the cardinality of a finite field is the power of a prime number.
• The spectrum of the monadic second-order logic formula $\exists S,T~\forall x~\left\{x\in S\iff x\not \in T\land ~f(f(x))=x\land ~x\in S\iff f(x)\in T\right\}$ is the set of even numbers. Indeed, $f$ is a bijection between $S$ and $T$, and $S$ and $T$ are a partition of the universe. Hence the cardinality of the universe is even.
• The set of finite and co-finite sets is the set of spectra of first-order logic with the successor relation.
• The set of ultimately periodic sets is the set of spectra of monadic second-order logic with a unary function. It is also the set of spectra of monadic second-order logic with the successor function.
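The second example above can be checked by brute force: a finite structure models that sentence exactly when it admits a fixed-point-free involution f (the sets S and T then collect one element from each 2-cycle), so its spectrum is the set of even numbers. A sketch over small universes:

```python
from itertools import permutations

# A universe of size n models the sentence iff it carries a fixed-point-free
# involution f; enumerate all permutations to decide this for small n.
def in_spectrum(n):
    universe = range(n)
    return any(all(f[f[x]] == x and f[x] != x for x in universe)
               for f in permutations(universe))

print([n for n in range(1, 9) if in_spectrum(n)])   # the even numbers up to 8
```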
Descriptive complexity
Fagin's theorem is a result in descriptive complexity theory that states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It is remarkable since it is a characterization of the class NP that does not invoke a model of computation such as a Turing machine. The theorem was proven by Ronald Fagin in 1974 (strictly, in 1973 in his doctoral thesis).
As a corollary, Jones and Selman showed that a set is a spectrum if and only if it is in the complexity class NEXP.[1]
One direction of the proof is to show that, for every first-order formula $\varphi $, the problem of determining whether there is a model of the formula of cardinality n is equivalent to the problem of satisfying a formula of size polynomial in n, which is in NP as a function of n and thus in NEXP as a function of the input to the problem (the number n in binary form, which is a string of size log(n)).
This is done by replacing every existential quantifier in $\varphi $ with disjunction over all the elements in the model and replacing every universal quantifier with conjunction over all the elements in the model. Now every predicate is on elements in the model, and finally every appearance of a predicate on specific elements is replaced by a new propositional variable. Equalities are replaced by their truth values according to their assignments.
For example:
$\forall {x}\forall {y}\left(P(x)\wedge P(y)\right)\rightarrow (x=y)$
For a model of cardinality 2 (i.e. n = 2), this is replaced by
${\big (}\left(P(a_{1})\wedge P(a_{1})\right)\rightarrow (a_{1}=a_{1}){\big )}\wedge {\big (}\left(P(a_{1})\wedge P(a_{2})\right)\rightarrow (a_{1}=a_{2}){\big )}\wedge {\big (}\left(P(a_{2})\wedge P(a_{1})\right)\rightarrow (a_{2}=a_{1}){\big )}\wedge {\big (}\left(P(a_{2})\wedge P(a_{2})\right)\rightarrow (a_{2}=a_{2}){\big )}$
Which is then replaced by ${\big (}\left(p_{1}\wedge p_{1}\right)\rightarrow \top {\big )}\wedge {\big (}\left(p_{1}\wedge p_{2}\right)\rightarrow \bot {\big )}\wedge {\big (}\left(p_{2}\wedge p_{1}\right)\rightarrow \bot {\big )}\wedge {\big (}\left(p_{2}\wedge p_{2}\right)\rightarrow \top {\big )}$
where $\top $ is truth, $\bot $ is falsity, and $p_{1}$, $p_{2}$ are propositional variables. In this particular case, the last formula is equivalent to $\neg (p_{1}\wedge p_{2})$, which is satisfiable.
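This reduction can be replayed mechanically. The sketch below evaluates the expanded n = 2 formula over all assignments of the propositional variables and confirms both the stated equivalence with ¬(p₁ ∧ p₂) and its satisfiability.

```python
from itertools import product

# p[i] is the truth value of P(a_{i+1}); the equalities a_i = a_j are
# pre-evaluated to (i == j), as in the expansion above.
def expanded(p):
    return all((not (p[i] and p[j])) or (i == j)
               for i in (0, 1) for j in (0, 1))

assignments = list(product([False, True], repeat=2))
assert all(expanded(p) == (not (p[0] and p[1])) for p in assignments)
assert any(expanded(p) for p in assignments)      # satisfiable
print("equivalent to not(p1 and p2); satisfiable")
```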
The other direction of the proof is to show that, for every set of binary strings accepted by a non-deterministic Turing machine running in exponential time ($2^{cx}$ for input length x), there is a first-order formula $\varphi $ such that the set of numbers represented by these binary strings are the spectrum of $\varphi $.
Jones and Selman mention that the spectrum of first-order formulas without equality is just the set of all natural numbers not smaller than some minimal cardinality.
Other properties
The set of spectra of a theory is closed under union, intersection, addition, and multiplication. In full generality, it is not known if the set of spectra of a theory is closed by complementation; this is the so-called Asser's problem. By the result of Jones and Selman, it is equivalent to the problem of whether NEXPTIME = co-NEXPTIME; that is, whether NEXPTIME is closed under complementation.[2]
See also
• Spectrum of a theory
References
1. Jones, Neil D.; Selman, Alan L. (1974). "Turing machines and the spectra of first-order formulas". J. Symb. Log. 39 (1): 139–150. doi:10.2307/2272354. JSTOR 2272354. Zbl 0288.02021.
2. Szwast, Wiesław (1990). "On the generator problem". Zeitschrift für Mathematische Logik und Grundlagen der Mathematik. 36 (1): 23–27. doi:10.1002/malq.19900360105. MR 1030536.
• Fagin, Ronald (1974). "Generalized First-Order Spectra and Polynomial-Time Recognizable Sets" (PDF). In Karp, Richard M. (ed.). Complexity of Computation. Proc. Syp. App. Math. SIAM-AMS Proceedings. Vol. 7. pp. 27–41. Zbl 0303.68035.
• Grädel, Erich; Kolaitis, Phokion G.; Libkin, Leonid; Maarten, Marx; Spencer, Joel; Vardi, Moshe Y.; Venema, Yde; Weinstein, Scott (2007). Finite model theory and its applications. Texts in Theoretical Computer Science. An EATCS Series. Berlin: Springer-Verlag. doi:10.1007/3-540-68804-8. ISBN 978-3-540-00428-8. Zbl 1133.03001.
• Immerman, Neil (1999). Descriptive Complexity. Graduate Texts in Computer Science. New York: Springer-Verlag. pp. 113–119. ISBN 0-387-98600-6. Zbl 0918.68031.
• Durand, Arnaud; Jones, Neil; Markowsky, Johann; More, Malika (2012). "Fifty Years of the Spectrum Problem: Survey and New Results". Bulletin of Symbolic Logic. 18 (4): 505–553. arXiv:0907.5495. Bibcode:2009arXiv0907.5495D. doi:10.2178/bsl.1804020. S2CID 9507429.
Mental calculation
Mental calculation consists of arithmetical calculations using only the human brain, with no help from any supplies (such as pencil and paper) or devices such as a calculator. People may use mental calculation when computing tools are not available, when it is faster than other means of calculation (such as conventional educational institution methods), or even in a competitive context. Mental calculation often involves the use of specific techniques devised for specific types of problems. People with unusually high ability to perform mental calculations are called mental calculators or lightning calculators.
Many of these techniques take advantage of or rely on the decimal numeral system. Usually, the choice of radix is what determines which method or methods to use.
Methods and techniques
Casting out nines
Main article: Casting out nines
After applying an arithmetic operation to two operands and getting a result, the following procedure can be used to improve confidence in the correctness of the result:
1. Sum the digits of the first operand; any 9s (or sets of digits that add to 9) can be counted as 0.
2. If the resulting sum has two or more digits, sum those digits as in step one; repeat this step until the resulting sum has only one digit.
3. Repeat steps one and two with the second operand. At this point there are two single-digit numbers, the first derived from the first operand and the second derived from the second operand.
4. Apply the originally specified operation to the two condensed operands, and then apply the summing-of-digits procedure to the result of the operation.
5. Sum the digits of the result that were originally obtained for the original calculation.
6. If the result of step 4 does not equal the result of step 5, then the original answer is wrong. If the two results match, then the original answer may be right, though it is not guaranteed to be.
Example
• Assume the calculation 6,338 × 79, manually done, yielded a result of 500,702:
1. Sum the digits of 6,338: (6 + 3 = 9, so count that as 0) + 3 + 8 = 11
2. Iterate as needed: 1 + 1 = 2
3. Sum the digits of 79: 7 + (9 counted as 0) = 7
4. Perform the original operation on the condensed operands, and sum digits: 2 × 7 = 14; 1 + 4 = 5
5. Sum the digits of 500702: 5 + 0 + 0 + (7 + 0 + 2 = 9, which counts as 0) = 5
6. 5 = 5, so there is a good chance that the prediction that 6,338 × 79 equals 500,702 is right.
The same procedure can be used with multiple operations, repeating steps 1 and 2 for each operation.
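The digit-condensing steps above amount to arithmetic modulo 9, which a short sketch can make explicit (function names are illustrative):

```python
def digit_sum_mod9(n):
    # Condense a number to a single digit by repeated digit summing;
    # 9 behaves like 0 (the result equals n mod 9).
    s = n
    while s > 9:
        s = sum(int(d) for d in str(s))
    return 0 if s == 9 else s

def check_by_nines(a, b, claimed_product):
    # Returns False if the claimed product is certainly wrong;
    # True means it *may* be right (no guarantee).
    lhs = digit_sum_mod9(digit_sum_mod9(a) * digit_sum_mod9(b))
    return lhs == digit_sum_mod9(claimed_product)
```

For the worked example, `check_by_nines(6338, 79, 500702)` passes, while a result with a wrong final digit would fail.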
Factors
When multiplying, a useful thing to remember is that the factors of the operands still remain. For example, to say that 14 × 15 was 201 would be unreasonable. Since 15 is a multiple of 5, the product should be as well. Likewise, 14 is a multiple of 2, so the product should be even. Furthermore, any number which is a multiple of both 5 and 2 is necessarily a multiple of 10, and in the decimal system would end with a 0. The correct answer is 210. It is a multiple of 10, 7 (the other prime factor of 14) and 3 (the other prime factor of 15).
Direct calculation
When the digits of b are all smaller than the corresponding digits of a, the calculation can be done digit by digit. For example, evaluate 872 − 41 simply by subtracting 1 from 2 in the units place, and 4 from 7 in the tens place: 831.
Indirect calculation
When the above situation does not apply, there is another method known as indirect calculation.
Look-ahead borrow method
This method can be used to subtract numbers left to right, and if all that is required is to read the result aloud, it requires little of the user's memory even to subtract numbers of arbitrary size.
One place at a time is handled, left to right.
Example:
4075
− 1844
------
Thousands: 4 − 1 = 3, look to right, 075 < 844, need to borrow.
3 − 1 = 2, say "Two thousand".
One is performing 3 - 1 rather than 4 - 1 because the column to the right is
going to borrow from the thousands place.
Hundreds: 0 − 8 = negative numbers not allowed here.
One is going to increase this place by using the number one borrowed from the
column to the left. Therefore:
10 − 8 = 2. It's 10 rather than 0, because one borrowed from the Thousands
place. 75 > 44 so no need to borrow,
say "two hundred"
Tens: 7 − 4 = 3, 5 > 4, so 5 - 4 = 1
Hence, the result is 2231.
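The look-ahead borrow method can be sketched as follows; the suffix comparison decides whether a place must lend 1 to its right-hand neighbour (a hypothetical helper, assuming a ≥ b):

```python
def subtract_left_to_right(a, b):
    # Left-to-right subtraction with look-ahead borrowing (requires a >= b).
    da = [int(d) for d in str(a)]
    db = [int(d) for d in str(b).zfill(len(da))]
    digits = []
    for i in range(len(da)):
        # Borrow is needed here iff the rest of a is smaller than the rest of b.
        borrow = 1 if da[i + 1:] < db[i + 1:] else 0
        digits.append((da[i] - db[i] - borrow) % 10)
    return int("".join(map(str, digits)))
```

Running it on the example above, `subtract_left_to_right(4075, 1844)` gives 2231.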
Calculating products: a × b
Many of these methods work because of the distributive property.
Multiplying any two numbers by attaching, subtracting, and routing
Discovered by Artem Cheprasov, this is a method of multiplication that allows the user to apply three steps to quickly multiply numbers of any size to one another, via three unique ways.[1][2]
First, the method allows the user to attach numbers to one another, as opposed to adding or subtracting them, during intermediate steps in order to quicken the rate of multiplication. For instance, instead of adding or subtracting intermediary results such as 357 and 84, the user could simply attach the numbers together (35784) in order to simplify and expedite the multiplication problem. Attaching numbers to one another helps to bypass unnecessary steps found in traditional multiplication techniques.
Secondly, this method uses negative numbers as necessary, even when multiplying two positive integers, in order to quicken the rate of multiplication via subtraction. This means two positive integers can be multiplied together to get negative intermediate steps, yet still the correct positive answer in the end. These negative numbers are actually automatically derived from the multiplication steps themselves and are thus unique to a particular problem. Again, such negative intermediate steps are designed to help hasten the mental math.
Finally, another unique aspect of using this method is that the user is able to choose one of several different “routes of multiplication” to the specific multiplication problem at hand based on their subjective preferences or strengths and weaknesses with particular integers.
Despite the same starting integers, the different multiplication routes give off different intermediate numbers that are automatically derived for the user as they multiply. Some of these intermediaries may be easier than others (e.g. some users may find a route that uses a negative 7, while another route uses a 5 or a 0, which are typically easier to work with mentally for most people, but not in all instances).
If one “route” seems to be harder for one student vs. another route and its intermediate numbers, that student can simply choose another simpler route of multiplication for themselves even though it's the same original problem.
The "Ends of Five" Formula
For any 2-digit by 2-digit multiplication problem, if both numbers end in five, the following algorithm can be used to quickly multiply them together:[1]
$\mathrm {Ex} :35\times 75$
As a preliminary step simply round the smaller number down and the larger up to the nearest multiple of ten. In this case:
$35-5=30=X$
$75+5=80=Y$
The algorithm reads as follows:
$(X\times Y)+50(t_{1}-t_{2})+25$
Where t1 is the tens unit of the original larger number (75) and t2 is the tens unit of the original smaller number (35).
$=30\times 80+50(7-3)+25=2625$
The author also outlines another similar algorithm if one wants to round the original larger number down and the original smaller number up instead.
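A sketch of this algorithm, under the stated assumption that both numbers are two-digit and end in five (the function name is illustrative):

```python
def ends_of_five(a, b):
    # Both a and b are two-digit numbers ending in 5.
    small, large = min(a, b), max(a, b)
    x = small - 5                        # round the smaller down to a ten
    y = large + 5                        # round the larger up to a ten
    t1, t2 = large // 10, small // 10    # tens digits of the originals
    return x * y + 50 * (t1 - t2) + 25
```

For the worked example, `ends_of_five(35, 75)` reproduces 2625.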
The "Borrower's" Formula
If two numbers are equidistant from the nearest multiple of 100, then a simple algorithm can be used to find the product.[1]
As a simple example:
$33\times 67$
Both numbers are equidistant (33 away) from their nearest multiple of 100 (0 and 100, respectively).
As a preliminary step simply round the smaller number down and the larger up to the nearest multiple of ten. In this case:
$33-3=30=X$
$67+3=70=Y$
The algorithm reads as follows:
$(X\times Y)+u_{1}\times u_{2}+u_{2}(T_{1}-T_{2})$
Where u1 is the original larger number's (67) units digit and u2 is the original smaller number's (33) units digit. T1 is the original larger number's tens digit and T2 is the original smaller number's tens digit, each multiplied by their respective power (in this case by 10, for a tens digit).
And so:
$(30\times 70)+7\times 3+3(60-30)=2100+21+90=2211$
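The same computation, written out as a hypothetical helper for two-digit pairs whose sum is a multiple of 100:

```python
def borrowers_formula(a, b):
    # Both two-digit, equidistant from the nearest multiple of 100
    # (equivalently, a + b is a multiple of 100), e.g. 33 and 67.
    small, large = min(a, b), max(a, b)
    u1, u2 = large % 10, small % 10     # units digits
    x = small - u2                      # smaller rounded down to a ten
    y = large + u2                      # larger rounded up to a ten
    t1, t2 = large - u1, small - u2     # tens digits times their power
    return x * y + u1 * u2 + u2 * (t1 - t2)
```

For the worked example, `borrowers_formula(33, 67)` reproduces 2211.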
Multiplying any 2-digit numbers
To easily multiply any 2-digit numbers together a simple algorithm is as follows (where a is the tens digit of the first number, b is the ones digit of the first number, c is the tens digit of the second number and d is the ones digit of the second number):
$(10a+b)\cdot (10c+d)$
$=100(a\cdot c)+10(b\cdot c)+10(a\cdot d)+b\cdot d$
For example,
$23\cdot 47=100(2\cdot 4)+10(3\cdot 4)+10(2\cdot 7)+3\cdot 7$
800
+120
+140
+ 21
-----
1081
Note that this is the same thing as the conventional sum of partial products, just restated with brevity. To minimize the number of elements being retained in one's memory, it may be convenient to perform the sum of the "cross" multiplication product first, and then add the other two elements:
$(a\cdot d+b\cdot c)\cdot 10$
${}+b\cdot d$ [of which only the tens digit will interfere with the first term]
${}+a\cdot c\cdot 100$
i.e., in this example
(12 + 14) = 26, 26 × 10 = 260,
to which is it is easy to add 21: 281 and then 800: 1081
An easy mnemonic to remember for this would be FOIL. F meaning first, O meaning outer, I meaning inner and L meaning last. For example:
$75\cdot 23$
and
$ab\cdot cd$
where 7 is a, 5 is b, 2 is c and 3 is d.
Consider
$a\cdot c\cdot 100+(a\cdot d+b\cdot c)\cdot 10+b\cdot d$
this expression is analogous to any number in base 10 with a hundreds, tens and ones place. FOIL can also be looked at as a number with F being the hundreds, OI being the tens and L being the ones.
$a\cdot c$ is the product of the first digit of each of the two numbers; F.
$(a\cdot d+b\cdot c)$ is the addition of the product of the outer digits and the inner digits; OI.
$b\cdot d$ is the product of the last digit of each of the two numbers; L.
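The FOIL expansion above can be sketched directly (an illustrative helper for two 2-digit numbers):

```python
def mul_2digit(p, q):
    # FOIL-style expansion for two 2-digit numbers.
    a, b = divmod(p, 10)   # tens and ones of the first number
    c, d = divmod(q, 10)   # tens and ones of the second number
    first = a * c * 100            # F
    outer_inner = (a * d + b * c) * 10   # O + I
    last = b * d                   # L
    return first + outer_inner + last
```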
Multiplying by 2 or other small numbers
Where one number being multiplied is sufficiently small to be multiplied with ease by any single digit, the product can be calculated easily digit by digit from right to left. This is particularly easy for multiplication by 2 since the carry digit cannot be more than 1.
For example, to calculate 2 × 167: 2×7=14, so the final digit is 4, with a 1 carried and added to the 2×6 = 12 to give 13, so the next digit is 3 with a 1 carried and added to the 2×1=2 to give 3. Thus, the product is 334.
Multiplying by 5
To multiply a number by 5,
1. First multiply that number by 10, then divide it by 2. The two steps are interchangeable, i.e., one can halve the number and then multiply the result by 10.
The following algorithm is a quick way to produce this result:
2. Add a zero to the right side of the desired number (step A below). 3. Next, starting from the leftmost numeral, divide each digit by 2 (step B) and append each result in the respective order to form a new number (fractional answers should be rounded down to the nearest whole number).
EXAMPLE: Multiply 176 by 5.
A. Add a zero to 176 to make 1760.
B. Divide by 2 starting at the left.
1. Divide 1 by 2 to get .5, rounded down to zero.
2. Divide 7 by 2 to get 3.5, rounded down to 3.
3. Divide 6 by 2 to get 3. Zero divided by two is simply zero.
The resulting number is 0330. (This is not the final answer, but a first approximation which will be adjusted in the following step:)
C. Add 5 to the number that follows any single numeral
in this new number that was odd before dividing by two;
EXAMPLE: 176 (IN FIRST, SECOND, THIRD PLACES):
1.The FIRST place is 1, which is odd. ADD 5 to the numeral after
the first place in the new number (0330) which is 3; 3+5=8.
2.The number in the second place of 176, 7, is also odd. The
corresponding number (0 8 3 0) is increased by 5 as well;
3+5=8.
3.The numeral in the third place of 176, 6, is even, therefore
the final number, zero, in the answer is not changed. That
final answer is 0880.
The leftmost zero can be omitted, leaving 880.
So 176 times 5 equals 880.
EXAMPLE: Multiply 288 by 5.
A. Divide 288 by 2. One can divide each digit individually to get 144. (Dividing the smaller number is easier.)
B. Multiply by 10. Add a zero to yield the result 1440.
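The append-zero, halve, and adjust procedure can be sketched as follows (an illustrative helper; step C works because a halved digit is at most 4, so adding 5 never overflows the place):

```python
def times5(n):
    # Multiply a nonnegative integer by 5 via "append 0, halve, adjust".
    s = str(n) + "0"                      # step A: multiply by 10
    halves = [int(d) // 2 for d in s]     # step B: halve each digit, round down
    for i, d in enumerate(s[:-1]):        # step C: an odd digit passes 5 rightward
        if int(d) % 2 == 1:
            halves[i + 1] += 5
    return int("".join(map(str, halves)))
```

Both worked examples check out: `times5(176)` gives 880 and `times5(288)` gives 1440.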
Multiplying by 9
Since 9 = 10 − 1, to multiply a number by nine, multiply it by 10 and then subtract the original number from the result. For example, 9 × 27 = 270 − 27 = 243.
This method can be adjusted to multiply by eight instead of nine, by doubling the number being subtracted; 8 × 27 = 270 − (2×27) = 270 − 54 = 216.
Similarly, by adding instead of subtracting, the same methods can be used to multiply by 11 and 12, respectively (although simpler methods to multiply by 11 exist).
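These identities are one-liners (illustrative helpers):

```python
def times9(n):
    # 9 = 10 - 1: multiply by 10, then subtract the original number.
    return 10 * n - n

def times8(n):
    # 8 = 10 - 2: subtract twice the original number instead.
    return 10 * n - 2 * n
```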
Using hands: 1–10 multiplied by 9
To use this method, one must place their hands in front of them, palms facing towards them. Assign the left thumb to be 1, the left index to be 2, and so on, all the way to the right thumb, which is ten. Each "|" symbolizes a raised finger and a "−" represents a bent finger.
1 2 3 4 5 6 7 8 9 10
| | | | | | | | | |
left hand right hand
Bend the finger which represents the number to be multiplied by nine down.
Ex: 6 × 9 would be
| | | | | − | | | |
The right little finger is down. Take the number of fingers still raised to the left of the bent finger and prepend it to the number of fingers to the right.
Ex: There are five fingers left of the right little finger and four to the right of the right little finger. So 6 × 9 = 54.
5 4
| | | | | − | | | |
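The finger trick encodes the fact that 9 × d has tens digit d − 1 and ones digit 10 − d, which a one-line sketch can confirm (the function name is illustrative):

```python
def nine_times_by_fingers(d):
    # d in 1..10: the bent finger at position d leaves d - 1 raised
    # fingers on its left (tens) and 10 - d on its right (ones).
    return 10 * (d - 1) + (10 - d)
```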
Multiplying by 10 (and powers of ten)
To multiply an integer by 10, simply add an extra 0 to the end of the number. To multiply a non-integer by 10, move the decimal point to the right one digit.
In general for base ten, to multiply by 10n (where n is an integer), move the decimal point n digits to the right. If n is negative, move the decimal |n| digits to the left.
Multiplying by 11
For single digit numbers simply duplicate the number into the tens digit, for example: 1 × 11 = 11, 2 × 11 = 22, up to 9 × 11 = 99.
The product for any larger non-zero integer can be found by a series of additions to each of its digits from right to left, two at a time.
First take the ones digit and copy that to the temporary result. Next, starting with the ones digit of the multiplier, add each digit to the digit to its left. Each sum is then added to the left of the result, in front of all others. If a sum is 10 or higher, take the tens digit, which will always be 1, and carry it over to the next addition. Finally copy the multiplier's left-most (highest valued) digit to the front of the result, adding in the carried 1 if necessary, to get the final product.
In the case of a negative 11, a negative multiplier, or both, apply the sign to the final product as per normal multiplication of the two numbers.
A step-by-step example of 759 × 11:
1. The ones digit of the multiplier, 9, is copied to the temporary result.
• result: 9
2. Add 5 + 9 = 14 so 4 is placed on the left side of the result and carry the 1.
• result: 49
3. Similarly add 7 + 5 = 12, then add the carried 1 to get 13. Place 3 to the result and carry the 1.
• result: 349
4. Add the carried 1 to the highest valued digit in the multiplier, 7 + 1 = 8, and copy to the result to finish.
• Final product of 759 × 11: 8349
Further examples:
• −54 × −11 = 5 5+4(9) 4 = 594
• 999 × 11 = 9+1(10) 9+9+1(9) 9+9(8) 9 = 10989
• Note the handling of 9+1 as the highest valued digit.
• −3478 × 11 = 3 3+4+1(8) 4+7+1(2) 7+8(5) 8 = −38258
• 62473 × 11 = 6 6+2(8) 2+4+1(7) 4+7+1(2) 7+3(0) 3 = 687203
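The right-to-left pairwise method can be sketched as follows (an illustrative helper for nonnegative integers):

```python
def times11(n):
    # Multiply a nonnegative integer by 11 by summing adjacent digits
    # right to left, carrying as needed.
    d = [int(c) for c in str(n)]
    result = [d[-1]]                 # copy the ones digit
    carry = 0
    for i in range(len(d) - 1, 0, -1):
        s = d[i] + d[i - 1] + carry  # add each digit to its left neighbour
        result.append(s % 10)
        carry = s // 10
    result.append(d[0] + carry)      # leftmost digit plus any final carry
    return int("".join(map(str, reversed(result))))
```

Running it on the step-by-step example, `times11(759)` gives 8349.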
Another method is to simply multiply the number by 10, and add the original number to the result.
For example:
17 × 11
17 × 10 = 170
170 + 17 = 187
17 × 11 = 187
One last easy way:
If one has a two-digit number, add its two digits together and put that sum in the middle to get the answer.
For example: 24 x 11 = 264 because 2 + 4 = 6 and the 6 is placed in between the 2 and the 4.
Second example: 87 x 11 = 957 because 8 + 7 = 15 so the 5 goes in between the 8 and the 7 and the 1 is carried to the 8. So it is basically 857 + 100 = 957.
Or for 43 × 11: first 4 + 3 = 7 (for the tens digit); then 4 goes in the hundreds place and 3 in the ones place, and the answer is 473.
Multiplying two 2 digit numbers between 11 and 19
To easily multiply 2 digit numbers together between 11 and 19 a simple algorithm is as follows (where a is the ones digit of the first number and b is the ones digit of the second number):
(10+a)×(10+b)
100 + 10×(a+b) + a×b
which can be visualized as three parts to be added:
1
xx
yy
for example:
17×16
1 = 100
13 (7+6) = 10×(a+b)
42 (7×6) = a×b
272 (total)
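As a sketch (an illustrative helper, assuming both factors are between 11 and 19):

```python
def mul_teens(p, q):
    # Both p and q between 11 and 19: 100 + 10*(a + b) + a*b,
    # where a and b are the ones digits.
    a, b = p - 10, q - 10
    return 100 + 10 * (a + b) + a * b
```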
Using hands: 6–10 multiplied by another number 6–10
This technique allows a number from 6 to 10 to be multiplied by another number from 6 to 10.
Assign 6 to the little finger, 7 to the ring finger, 8 to the middle finger, 9 to the index finger, and 10 to the thumb. Touch the two desired numbers together. The point of contact and below is considered the "bottom" section and everything above the two fingers that are touching are part of the "top" section. The answer is formed by adding ten times the total number of "bottom" fingers to the product of the number of left- and right-hand "top" fingers.
For example, 9 × 6 would look like this, with the left index finger touching the right little finger:
=10== :right thumb (top)
==9== :right index finger (top)
==8== :right middle finger (top)
left thumb: =10== ==7== :right ring finger (top)
left index finger: --9---><---6-- :right little finger (BOTTOM)
left middle finger: --8-- (BOTTOM)
left ring finger: --7-- (BOTTOM)
left little finger: --6-- (BOTTOM)
In this example, there are 5 "bottom" fingers (the left index, middle, ring, and little fingers, plus the right little finger), 1 left "top" finger (the left thumb), and 4 right "top" fingers (the right thumb, index finger, middle finger, and ring finger). So the computation goes as follows: 9 × 6 = (10 × 5) + (1 × 4) = 54.
Consider another example, 8 × 7:
=10== :right thumb (top)
left thumb: =10== ==9== :right index finger (top)
left index finger: ==9== ==8== :right middle finger (top)
left middle finger: --8---><---7-- :right ring finger (BOTTOM)
left ring finger: --7-- --6-- :right little finger (BOTTOM)
left little finger: --6-- (BOTTOM)
Five bottom fingers make 5 tens, or 50. Two top left fingers and three top right fingers make the product 6. Summing these produces the answer, 56.
Another example, this time using 6 × 8:
--8---><---6--
--7--
--6--
Four tens (bottom), plus two times four (top) gives 40 + 2 × 4 = 48.
Here's how it works: each finger represents a number between 6 and 10. When one joins fingers representing x and y, there will be 10 - x "top" fingers and x - 5 "bottom" fingers on the left hand; the right hand will have 10 - y "top" fingers and y - 5 "bottom" fingers.
Let
$\,t_{L}=10-x\,$ (the number of "top" fingers on the left hand)
$\,t_{R}=10-y\,$ (the number of "top" fingers on the right hand)
$\,b_{L}=x-5\,$ (the number of "bottom" fingers on the left hand)
$\,b_{R}=y-5\,$ (the number of "bottom" fingers on the right hand)
Then following the above instructions produces
$\,10(b_{L}+b_{R})+t_{L}t_{R}$
$\,=10[(x-5)+(y-5)]+(10-x)(10-y)$
$\,=10(x+y-10)+(100-10x-10y+xy)$
$\,=[10(x+y)-100]+[100-10(x+y)+xy]$
$\,=[10(x+y)-10(x+y)]+[100-100]+xy$
$\,=xy$
which is the product desired.
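The identity derived above can be checked exhaustively over the whole 6–10 range (an illustrative sketch):

```python
def finger_product(x, y):
    # x, y in 6..10; fingers: "top" = above the touch, "bottom" = at/below.
    top_left, top_right = 10 - x, 10 - y
    bottom_left, bottom_right = x - 5, y - 5
    return 10 * (bottom_left + bottom_right) + top_left * top_right
```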
Multiplying two numbers close to and below 100
This technique allows easy multiplication of numbers close to and below 100 (90–99).[3] The variables will be the two numbers one multiplies.
The product of two variables ranging from 90-99 will result in a 4-digit number. The first step is to find the ones-digit and the tens digit.
Subtract both variables from 100, which will result in two one-digit numbers. The product of the two one-digit numbers will be the last two digits of one's final product.
Next, subtract one of the two variables from 100. Then subtract the difference from the other variable. That difference will be the first two digits of the final product, and the resulting 4 digit number will be the final product.
Example:
95
x 97
----
Last two digits: 100-95=5 (subtract first number from 100)
100-97=3 (subtract second number from 100)
5*3=15 (multiply the two differences)
Final Product- yx15
First two digits: 100-95=5 (Subtract the first number of the equation from 100)
97-5=92 (Subtract that answer from the second number of the equation)
Now, the difference will be the first two digits
Final Product- 9215
Alternate for first two digits
5+3=8 (Add the two single digits derived when calculating "Last two digits" in previous step)
100-8=92 (Subtract that answer from 100)
Now, the difference will be the first two digits
Final Product- 9215
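Both steps can be sketched together (an illustrative helper, assuming both factors are in the 90s):

```python
def mul_near_100(a, b):
    # Both a and b in the 90s: complements to 100 give the last two
    # digits; subtracting one complement from the other number gives
    # the first two digits.
    ca, cb = 100 - a, 100 - b      # complements, e.g. 5 and 3
    last_two = ca * cb             # e.g. 15
    first_two = b - ca             # e.g. 97 - 5 = 92  (equals a - cb)
    return first_two * 100 + last_two
```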
Using square numbers
The products of small numbers may be calculated by using the squares of integers; for example, to calculate 13 × 17, one can remark that 15 is the mean of the two factors, and think of it as (15 − 2) × (15 + 2), i.e. 15² − 2². Knowing that 15² is 225 and 2² is 4, simple subtraction shows that 225 − 4 = 221, which is the desired product.
This method requires knowing by heart a certain number of squares:
1² = 1    6² = 36    11² = 121    16² = 256    21² = 441    26² = 676
2² = 4    7² = 49    12² = 144    17² = 289    22² = 484    27² = 729
3² = 9    8² = 64    13² = 169    18² = 324    23² = 529    28² = 784
4² = 16   9² = 81    14² = 196    19² = 361    24² = 576    29² = 841
5² = 25   10² = 100  15² = 225    20² = 400    25² = 625    30² = 900
Squaring numbers
It may be useful to be aware that the difference between two successive square numbers is the sum of their respective square roots. Hence, if one knows that 12 × 12 = 144 and wishes to know 13 × 13, calculate 144 + 12 + 13 = 169.
This is because (x + 1)² − x² = x² + 2x + 1 − x² = x + (x + 1)
x² = (x − 1)² + (2x − 1)
Squaring any number
Take a given number, and add and subtract a certain value to it that will make it easier to multiply. For example:
492²
492 is close to 500, which is easy to multiply by. Add and subtract 8 (the difference between 500 and 492) to get
492 -> 484, 500
Multiply these numbers together to get 242,000 (this can be done efficiently by dividing 484 by 2 = 242 and multiplying by 1000). Finally, add the difference (8) squared (8² = 64) to the result:
492² = 242,064
The proof follows:
$n^{2}=n^{2}$
$n^{2}=(n^{2}-a^{2})+a^{2}$
$n^{2}=(n^{2}-an+an-a^{2})+a^{2}$
$n^{2}=(n-a)(n+a)+a^{2}$
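The identity n² = (n − a)(n + a) + a² proved above can be sketched as a hypothetical helper, where a is chosen so that n + a or n − a is a round number:

```python
def square_via_round(n, a):
    # n^2 = (n - a)(n + a) + a^2, choosing a so that n + a (or n - a)
    # is a round number that is easy to multiply.
    return (n - a) * (n + a) + a * a
```

For the worked example, `square_via_round(492, 8)` gives 242,064.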
Squaring any 2-digit integer
This method requires memorization of the squares of the one-digit numbers 1 to 9.
The square of mn, mn being a two-digit integer, can be calculated as
10 × m(mn + n) + n²
Meaning the square of mn can be found by adding n to mn, multiplied by m, adding 0 to the end and finally adding the square of n.
For example, 23²:
23²
= 10 × 2(23 + 3) + 3²
= 10 × 2(26) + 9
= 520 + 9
= 529
So 23² = 529.
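A sketch of this rule (an illustrative helper for two-digit n, with m the tens digit):

```python
def square_2digit(n):
    # 10 * m * (n + ones) + ones^2, where m is the tens digit of n.
    m, ones = divmod(n, 10)
    return 10 * m * (n + ones) + ones * ones
```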
Squaring a number ending in 5
1. Take the digit(s) that precede the five: abc5, where a, b, and c are digits
2. Multiply this number by itself plus one: abc(abc + 1)
3. Take above result and attach 25 to the end
• Example: 85 × 85
1. 8
2. 8 × 9 = 72
3. So, 85² = 7,225
• Example: 125²
1. 12
2. 12 × 13 = 156
3. So, 125² = 15,625
• Mathematical explanation
(10x + 5)² = (10x + 5)(10x + 5)
= 100x² + 100x + 25
= 100(x² + x) + 25
= 100x(x + 1) + 25
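The rule follows the expansion above; as a sketch (an illustrative helper for any positive integer ending in 5):

```python
def square_ends_in_5(n):
    # n ends in 5: strip the 5, multiply the prefix by itself plus one,
    # and append 25.
    prefix = n // 10
    return prefix * (prefix + 1) * 100 + 25
```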
Squaring numbers very close to 50
Suppose one needs to square a number n near 50.
The number may be expressed as n = 50 − a so its square is (50 − a)² = 50² − 100a + a². One knows that 50² is 2500. So one subtracts 100a from 2500, and then adds a².
For example, say one wants to square 48, which is 50 − 2. One subtracts 200 from 2500 and adds 4, to get n² = 2304. For numbers larger than 50 (n = 50 + a), add 100×a instead of subtracting it.
Squaring an integer from 26 to 74
This method requires the memorization of squares from 1 to 24.
The square of n (most easily calculated when n is between 26 and 74 inclusive) is
(50 − n)² + 100(n − 25)
In other words, the square of a number is the square of its difference from fifty added to one hundred times the difference of the number and twenty-five. For example, to square 62:
(−12)² + [(62 − 25) × 100]
= 144 + 3,700
= 3,844
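A sketch of this formula (an illustrative helper; the identity is exact for every integer, though most convenient mentally for 26–74):

```python
def square_26_to_74(n):
    # (50 - n)^2 + 100*(n - 25); needs squares only up to 24 by heart.
    return (50 - n) ** 2 + 100 * (n - 25)
```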
Squaring an integer near 100 (e.g., from 76 to 124)
This method requires the memorization of squares from 1 to a where a is the absolute difference between n and 100. For example, students who have memorized their squares from 1 to 24 can apply this method to any integer from 76 to 124.
The square of n (i.e., 100 ± a) is
100(100 ± 2a) + a²
In other words: take the difference a between the number and 100, shift the number by a further away from 100 (giving 100 ± 2a), multiply by 100, and add a². For example, to square 93:
100(100 − 2(7)) + 7²
= 100 × 86 + 49
= 8,600 + 49
= 8,649
Another way to look at it would be like this:
93² = ? (93 is −7 from 100)
93 − 7 = 86 (this gives the first two digits)
(−7)² = 49 (these are the second two digits)
93² = 8649
Another example:
82² = ? (82 is −18 from 100)
82 − 18 = 64 (subtract. First digits.)
(−18)² = 324 (second pair of digits. One will need to carry the 3.)
82² = 6724
Squaring any integer near 10ⁿ (e.g., 976 to 1024, 9976 to 10024, etc.)
This method is a straightforward extension of the explanation given above for squaring an integer near 100.
1012² = ? (1012 is +12 from 1000)
(+12)² = 144 (n trailing digits)
1012 + 12 = 1024 (leading digits)
1012² = 1024144
9997² = ? (9997 is −3 from 10000)
(−3)² = 0009 (n trailing digits)
9997 − 3 = 9994 (leading digits)
9997² = 99940009
Squaring any integer near m × 10ⁿ (e.g., 276 to 324, 4976 to 5024, 79976 to 80024)
This method is a straightforward extension of the explanation given above for integers near 10ⁿ.
407² = ? (407 is +7 from 400)
(+7)² = 49 (n trailing digits)
407 + 7 = 414
414 × 4 = 1656 (leading digits; note this multiplication by m wasn't needed for integers from 76 to 124 because their m = 1)
407² = 165649
79991² = ? (79991 is −9 from 80000)
(−9)² = 0081 (n trailing digits)
79991 − 9 = 79982
79982 × 8 = 639856 (leading digits)
79991² = 6398560081
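All of these near-a-round-number cases follow the same identity x² = (x + a)·m·10ⁿ + a², where a = x − m·10ⁿ, which an illustrative helper can sketch:

```python
def square_near_round(x):
    # Square x by working from the nearest "round" number m * 10**n.
    s = str(x)
    n = len(s) - 1                     # number of trailing digits to reserve
    base = round(x / 10 ** n) * 10 ** n  # nearest m * 10**n, e.g. 400 for 407
    m = base // 10 ** n
    a = x - base                       # signed distance, e.g. +7
    leading = (x + a) * m              # e.g. 414 * 4 = 1656
    return leading * 10 ** n + a * a   # append a^2 as the n trailing digits
```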
Approximating square roots
An easy way to approximate the square root of a number is to use the following equation:
${\text{root }}\simeq {\text{ known square root}}-{\frac {{\text{known square}}-{\text{unknown square}}}{2\times {\text{known square root}}}}\,$
The closer the known square is to the unknown, the more accurate the approximation. For instance, to estimate the square root of 15, one could start with the knowledge that the nearest perfect square is 16 (42).
${\begin{aligned}{\text{root}}&\simeq 4-{\frac {16-15}{2\times 4}}\\&\simeq 4-0.125\\&\simeq 3.875\\\end{aligned}}\,\!$
So the estimated square root of 15 is 3.875. The actual square root of 15 is 3.872983... One thing to note is that, no matter what the original guess was, the estimated answer will always be larger than the actual answer due to the inequality of arithmetic and geometric means. Thus, one should try rounding the estimated answer down.
Note that if n2 is the closest perfect square to the desired square x and d = x - n2 is their difference, it is more convenient to express this approximation in the form of mixed fraction as $n{\tfrac {d}{2n}}$. Thus, in the previous example, the square root of 15 is $4{\tfrac {-1}{8}}.$ As another example, square root of 41 is $6{\tfrac {5}{12}}=6.416$ while the actual value is 6.4031...
It may simplify mental calculation to notice that this method is equivalent to the mean of the known square and the unknown square, divided by the known square root:
${\text{root }}\simeq {\frac {{\text{mean}}({\text{known square}},{\text{unknown square}})}{\text{known square root}}}\,$
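Both forms of the approximation are easy to express directly. A sketch, using the worked examples from the text (the function name is ours):

```python
def approx_sqrt(x, known_root):
    """Approximate sqrt(x) from a nearby known square root a.

    Uses root ~ a - (a^2 - x) / (2a), which simplifies to (a^2 + x) / (2a).
    """
    a = known_root
    return (a * a + x) / (2 * a)

print(approx_sqrt(15, 4))   # 3.875   (actual: 3.87298...)
print(approx_sqrt(41, 6))   # 6.4166... (actual: 6.40312...)
```

As the text notes, the estimate always overshoots: (a² + x)/(2a) ≥ √x by the AM–GM inequality.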
Derivation
By definition, if r is the square root of x, then
$\mathrm {r} ^{2}=x\,\!$
One then redefines the root
$\mathrm {r} =a-b\,\!$
where a is a known root (4 from the above example) and b is the difference between the known root and the answer one seeks.
$(a-b)^{2}=x\,\!$
Expanding yields
$a^{2}-2ab+b^{2}=x\,\!$
If 'a' is close to the target, 'b' will be a small enough number to render the ${}+b^{2}\,$ element of the equation negligible. Thus, one can drop ${}+b^{2}\,$ out and rearrange the equation to
$b\simeq {\frac {a^{2}-x}{2a}}\,\!$
and therefore
$\mathrm {root} \simeq a-{\frac {a^{2}-x}{2a}}\,\!$
that can be reduced to
$\mathrm {root} \simeq {\frac {a^{2}+x}{2a}}\,\!$
Extracting roots of perfect powers
See also: 13th root
Extracting roots of perfect powers is often practiced. The difficulty of the task does not depend on the number of digits of the perfect power but on the precision, i.e. the number of digits of the root. In addition, it depends on the order of the root: perfect roots whose order is coprime with 10 are somewhat easier to find, since the digits are scrambled in consistent ways, as in the next section.
Extracting cube roots
An easy task for the beginner is extracting cube roots from the cubes of 2-digit numbers. For example, given 74088, determine what two-digit number, when multiplied by itself once and then multiplied by the number again, yields 74088. One who knows the method will quickly know the answer is 42, as 423 = 74088.
Before learning the procedure, it is required that the performer memorize the cubes of the numbers 1-10:
13 = 1, 23 = 8, 33 = 27, 43 = 64, 53 = 125
63 = 216, 73 = 343, 83 = 512, 93 = 729, 103 = 1000
Observe that there is a pattern in the rightmost digit: adding and subtracting with 1 or 3. Starting from zero:
• 03 = 0
• 13 = 1 up 1
• 23 = 8 down 3
• 33 = 27 down 1
• 43 = 64 down 3
• 53 = 125 up 1
• 63 = 216 up 1
• 73 = 343 down 3
• 83 = 512 down 1
• 93 = 729 down 3
• 103 = 1000 up 1
There are two steps to extracting the cube root from the cube of a two-digit number. For example, consider extracting the cube root of 29791. The first step is to determine the one's place (units digit) of the two-digit root. Since the cube ends in 1, as seen above, the root must end in 1.
• If the perfect cube ends in 0, the cube root of it must end in 0.
• If the perfect cube ends in 1, the cube root of it must end in 1.
• If the perfect cube ends in 2, the cube root of it must end in 8.
• If the perfect cube ends in 3, the cube root of it must end in 7.
• If the perfect cube ends in 4, the cube root of it must end in 4.
• If the perfect cube ends in 5, the cube root of it must end in 5.
• If the perfect cube ends in 6, the cube root of it must end in 6.
• If the perfect cube ends in 7, the cube root of it must end in 3.
• If the perfect cube ends in 8, the cube root of it must end in 2.
• If the perfect cube ends in 9, the cube root of it must end in 9.
Note that every digit corresponds to itself except for 2, 3, 7 and 8, which are just subtracted from ten to obtain the corresponding digit.
The second step is to determine the first digit of the two-digit cube root by looking at the magnitude of the given cube. To do this, remove the last three digits of the given cube (29791 → 29) and find the largest cube that does not exceed the result (this is where knowing the cubes of the numbers 1-10 is needed). Here, 29 is greater than 1 cubed, greater than 2 cubed, greater than 3 cubed, but not greater than 4 cubed. The largest number whose cube does not exceed 29 is 3, so the first digit of the two-digit cube root must be 3.
Therefore, the cube root of 29791 is 31.
Another example:
• Find the cube root of 456533.
• The cube root ends in 7.
• After the last three digits are taken away, 456 remains.
• 456 is greater than all the cubes up to 7 cubed.
• The first digit of the cube root is 7.
• The cube root of 456533 is 77.
This process can be extended to find cube roots that are 3 digits long, by using arithmetic modulo 11.[4]
These types of tricks can be used for any root whose order is coprime with 10. Thus they fail for square roots, since the power, 2, divides 10, but work for cube roots, since 3 is coprime with 10.
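The two-step procedure translates directly into code. A sketch of the mental method (names are illustrative), using the last-digit table and the magnitude step from above:

```python
# last digit of the cube -> last digit of the root (the table above)
LAST_DIGIT = {0: 0, 1: 1, 2: 8, 3: 7, 4: 4, 5: 5, 6: 6, 7: 3, 8: 2, 9: 9}
CUBES = [i ** 3 for i in range(10)]

def cube_root_two_digit(c):
    """Recover n from c = n^3 for a two-digit n, mirroring the mental method."""
    units = LAST_DIGIT[c % 10]      # step 1: units digit from the cube's last digit
    head = c // 1000                # step 2: drop the last three digits
    tens = max(i for i in range(1, 10) if CUBES[i] <= head)
    return 10 * tens + units

print(cube_root_two_digit(29791))   # 31
print(cube_root_two_digit(74088))   # 42
print(cube_root_two_digit(456533))  # 77
```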
Approximating common logarithms (log base 10)
To approximate a common logarithm (to at least one decimal point accuracy), a few logarithm rules, and the memorization of a few logarithms is required. One must know:
• log(a × b) = log(a) + log(b)
• log(a / b) = log(a) - log(b)
• log(0) does not exist
• log(1) = 0
• log(2) ~ .30
• log(3) ~ .48
• log(7) ~ .85
From this information, one can find the logarithm of any number 1-9.
• log(1) = 0
• log(2) ~ .30
• log(3) ~ .48
• log(4) = log(2 × 2) = log(2) + log(2) ~ .60
• log(5) = log(10 / 2) = log(10) − log(2) ~ .70
• log(6) = log(2 × 3) = log(2) + log(3) ~ .78
• log(7) ~ .85
• log(8) = log(2 × 2 × 2) = log(2) + log(2) + log(2) ~ .90
• log(9) = log(3 × 3) = log(3) + log(3) ~ .96
• log(10) = 1 + log(1) = 1
The first step in approximating the common logarithm is to put the number given in scientific notation. For example, the number 45 in scientific notation is 4.5 × 101, but one will call it a × 10b. Next, find the logarithm of a, which is between 1 and 10. Start with the logarithm of 4, which is .60, and the logarithm of 5, which is .70, because 4.5 lies between these two. Next (skill at this comes with practice), place 4.5 on a logarithmic scale between .6 and .7, somewhere around .653. (Note: the value on a logarithmic scale will always be a little greater than on a linear scale; one would expect .650 because 4.5 is halfway, but the true value is slightly larger, in this case .653.) Once one has obtained the logarithm of a, simply add b to it to get the approximation of the common logarithm. In this case, log(a) + b = .653 + 1 = 1.653. The actual value of log(45) ~ 1.65321.
The same process applies for numbers between 0 and 1. For example, 0.045 would be written as 4.5 × 10−2. The only difference is that b is now negative, so when adding one is really subtracting. This would yield the result 0.653 − 2, or −1.347.
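The whole procedure, memorized anchors plus linear interpolation between them, can be sketched as follows (function and variable names are ours):

```python
# anchor values the text asks the reader to memorize, plus the derived ones
LOGS = {1: 0.0, 2: 0.30, 3: 0.48, 7: 0.85}
LOGS[4] = 2 * LOGS[2]           # log(2*2) ~ .60
LOGS[5] = 1 - LOGS[2]           # log(10/2) ~ .70
LOGS[6] = LOGS[2] + LOGS[3]     # log(2*3) ~ .78
LOGS[8] = 3 * LOGS[2]           # log(2^3) ~ .90
LOGS[9] = 2 * LOGS[3]           # log(3^2) ~ .96
LOGS[10] = 1.0

def approx_log10(x):
    """Approximate log10(x) by writing x = a * 10**b and interpolating log(a)."""
    a, b = x, 0
    while a >= 10:              # normalize into scientific notation
        a /= 10
        b += 1
    while a < 1:
        a *= 10
        b -= 1
    lo = int(a)                 # memorized digits bracketing the mantissa
    frac = a - lo
    interp = LOGS[lo] + frac * (LOGS[lo + 1] - LOGS[lo])
    return interp + b

print(round(approx_log10(45), 2))     # 1.65  (actual: 1.6532...)
print(round(approx_log10(0.045), 2))  # -1.35 (actual: -1.3468...)
```

The code interpolates linearly, so it lands at .650 rather than the slightly larger .653 a practiced calculator would place; the error stays well under one decimal place.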
Mental arithmetic as a psychological skill
Physical exertion of the proper level can lead to an increase in performance of a mental task, like doing mental calculations, performed afterward.[5] It has been shown that during high levels of physical activity there is a negative effect on mental task performance.[6] This means that too much physical work can decrease accuracy and output of mental math calculations. Physiological measures, specifically EEG, have been shown to be useful in indicating mental workload.[7] Using an EEG as a measure of mental workload after different levels of physical activity can help determine the level of physical exertion that will be the most beneficial to mental performance. Previous work done at Michigan Technological University by Ranjana Mehta includes a recent study that involved participants engaging in concurrent mental and physical tasks.[8] This study investigated the effects of mental demands on physical performance at different levels of physical exertion and ultimately found a decrease in physical performance when mental tasks were completed concurrently, with a more significant effect at the higher level of physical workload. The Brown-Peterson procedure is a widely known task using mental arithmetic. This procedure, mostly used in cognitive experiments, suggests mental subtraction is useful in testing the effects maintenance rehearsal can have on how long short-term memory lasts.
Mental Calculations World Championship
The first Mental Calculations World Championship took place in 1997. This event repeats every year. It consists of a range of different tasks such as addition of ten ten-digit numbers, multiplication of two eight-digit numbers, calculation of square roots, calculation of weekdays for given dates, calculation of cube roots, and some surprise miscellaneous tasks.
Mental Calculation World Cup
Main article: Mental Calculation World Cup
The first World Mental Calculation Championships (Mental Calculation World Cup)[9] took place in 2004. They are repeated every second year. The event consists of six different tasks: addition of ten ten-digit numbers, multiplication of two eight-digit numbers, calculation of square roots, calculation of weekdays for given dates, calculation of cube roots, and some surprise miscellaneous tasks.
Memoriad – World Memory, Mental Calculation & Speed Reading Olympics
Memoriad[10] is the first platform combining "mental calculation", "memory" and "photographic reading" competitions. Games and competitions are held in the year of the Olympic games, every four years.
The first international Memoriad was held in Istanbul, Turkey, in 2008. The second Memoriad took place in Antalya, Turkey, on 24–25 November 2012; 89 competitors from 20 countries participated. Awards and cash prizes were given in 10 categories in total, of which 5 concerned mental calculation (mental addition, mental multiplication, mental square roots (non-integer), mental calendar date calculation, and Flash Anzan).
See also
• Doomsday rule for calculating the day of the week
• Mental abacus
• Mental calculator
• Miksike MentalMath
• Soroban
Notes
1. Note that these single-digit numbers are really the remainders one would end up with if one divided the original operands by 9, which is to say that each is the result of its associated operand mod 9.
References
1. Cheprasov, Artem (September 3, 2009). On a New Method of Multiplication and Shortcuts. United States: CreateSpace Independent Publishing Platform. ISBN 9781448689330.
2. "On the record with ... Artem Cheprasov". Northwest Herald. Archived from the original on 2011-01-15. Retrieved 2015-06-01.
3. multiplying two numbers close, below 100
4. Dorrell, Philip. "How to Do Cube Roots of 9 Digit Numbers in Your Head". Thinking Hard. Retrieved 19 July 2015.
5. Lambourne, Kate; Tomporowski, Phillip (2010). "The effect of exercise-induced arousal on cognitive task performance: A meta-regression analysis". Brain Research. 1341: 12–24. doi:10.1016/j.brainres.2010.03.091. PMID 20381468. S2CID 206324098.
6. Brisswalter, J.; Arcelin, R.; Audiffren, M.; Delignieres, D. (1997). "Influence of Physical Exercise on Simple Reaction Time: Effect of Physical Fitness". Perceptual and Motor Skills. 85 (3): 1019–27. doi:10.2466/pms.1997.85.3.1019. PMID 9399313. S2CID 30781628.
7. Murata, Atsuo (2005). "An Attempt to Evaluate Mental Workload Using Wavelet Transform of EEG". Human Factors: The Journal of the Human Factors and Ergonomics Society. 47 (3): 498–508. doi:10.1518/001872005774860096. PMID 16435692. S2CID 25313835.
8. Mehta, Ranjana K.; Nussbaum, Maury A.; Agnew, Michael J. (2012). "Muscle- and task-dependent responses to concurrent physical and mental workload during intermittent static work". Ergonomics. 55 (10): 1166–79. doi:10.1080/00140139.2012.703695. PMID 22849301. S2CID 38648671.
9. "Mental Calculation World Cup - the World Championship for Mental Calculators". www.recordholders.org.
10. "Memoriad". www.memoriad.com.
External links
• Mental Calculation World Cup
• Memoriad - World Mental Olympics
• Tzourio-Mazoyer, Nathalie; Pesenti, Mauro; Zago, Laure; Crivello, Fabrice; Mellet, Emmanuel; Samson, Dana; Duroux, Bruno; Seron, Xavier; Mazoyer, Bernard (2001). "Mental calculation in a prodigy is sustained by right prefrontal and medial temporal areas". Nature Neuroscience. 4 (1): 103–7. doi:10.1038/82831. PMID 11135652. S2CID 23829063.
• Rivera, S.M.; Reiss, AL; Eckert, MA; Menon, V (2005). "Developmental Changes in Mental Arithmetic: Evidence for Increased Functional Specialization in the Left Inferior Parietal Cortex". Cerebral Cortex. 15 (11): 1779–90. doi:10.1093/cercor/bhi055. PMID 15716474.
Speed prior
The speed prior is a complexity measure similar to Kolmogorov complexity, except that it is based on computation speed as well as program length.[1] The speed prior complexity of a program is its size in bits plus the logarithm of the maximum time we are willing to run it to get a prediction.
When compared to traditional measures, use of the Speed Prior has the disadvantage of leading to less optimal predictions, and the advantage of providing computable predictions.
See also
• Computational complexity theory
• Inductive inference
• Minimum message length
• Minimum description length
References
1. Schmidhuber, J. (2002) The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions. In J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002). Lecture Notes in Artificial Intelligence, pages 216--228. Springer.
External links
• Speed Prior web site
John Speidell
John Speidell (fl. 1600–1634) was an English mathematician. He is known for his early work on the calculation of logarithms.
Speidell was a mathematics teacher in London[1][2] and one of the early followers of the work John Napier had previously done on natural logarithms.[3] In 1619 Speidell published a table entitled "New Logarithmes" in which he calculated the natural logarithms of sines, tangents, and secants.[4][5]
He then diverged from Napier's methods in order to ensure all of the logarithms were positive.[6] A new edition of "New Logarithmes" was published in 1622 and contained an appendix with the natural logarithms of all numbers 1-1000.[7]
Along with William Oughtred and Richard Norwood, Speidell helped push toward the abbreviations of trigonometric functions.[7]
Speidell published a number of works on mathematics, including An Arithmeticall Extraction in 1628.[8] His son, Euclid Speidell, also published mathematics texts.[9]
References
1. John Aubrey; Andrew Clark (1898). 'Brief Lives': I-Y. At the Clarendon Press. pp. 230–231.
2. Kerry Downes; John F. Bold; Edward Chaney (1993). English Architecture Public & Private: Essays for Kerry Downes. A&C Black. pp. 28–. ISBN 978-1-85285-095-1.
3. E. W. Hobson (29 March 2012). John Napier and the Invention of Logarithms, 1614: A Lecture by E.W. Hobson. Cambridge University Press. pp. 43–. ISBN 978-1-107-62450-4.
4. Charles Hutton (1785). Mathematical Tables, Containing Common, Hyperbolic and Logistic Logarithms, Also Sines Tangents, Secants and Versed Sines, Both Natural and Logarithmic. Robinson and Baldwin. pp. 30–.
5. Florian Cajori (26 September 2013). A History of Mathematical Notations. Courier Corporation. pp. 1–. ISBN 978-0-486-16116-7.
6. Sir David Brewster (1819). Second American edition of the new Edinburgh encyclopædia. Published by Samuel Whiting and John L. Tiffany; also, by N. Whiting, New-Haven; A. Seward, Utica; S. Parker, Philadelphia; Wm. Snodgrass, Natchez; and I. Clizbe, New-Orleans 1819. pp. 112–.
7. Florian Cajori (1893). A History of Mathematics. Macmillan & Company. pp. 165–.
8. Augustus De Morgan (1847). Arithmetical Books from the Invention of Printing to the Present Time: Being Brief Notices of a Large Number of Works Drawn Up from Actual Inspection. Taylor and Walton. pp. 37–.
9. Beeley, Philip (June 2019). "Practical mathematicians and mathematical practice in later seventeenth-century London". The British Journal for the History of Science. 52 (2): 225–248. doi:10.1017/S0007087419000207. PMID 31198123.
Spence's function
In mathematics, Spence's function, or dilogarithm, denoted as Li2(z), is a particular case of the polylogarithm. Two related special functions are referred to as Spence's function, the dilogarithm itself:
$\operatorname {Li} _{2}(z)=-\int _{0}^{z}{\ln(1-u) \over u}\,du{\text{, }}z\in \mathbb {C} $
See also: polylogarithm § Dilogarithm
and its reflection. For |z| < 1, an infinite series also applies (the integral definition constitutes its analytic continuation to the complex plane):
$\operatorname {Li} _{2}(z)=\sum _{k=1}^{\infty }{z^{k} \over k^{2}}.$
Alternatively, the dilogarithm function is sometimes defined as
$\int _{1}^{v}{\frac {\ln t}{1-t}}dt=\operatorname {Li} _{2}(1-v).$
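For real arguments in the unit interval the series definition is easy to evaluate numerically. A minimal sketch (the function name `li2` is ours, not a library API) that also checks the special value Li2(1/2) and the reflection identity listed under Identities below:

```python
import math

def li2(z, terms=1000):
    """Dilogarithm via its defining series; valid for |z| <= 1, slow near |z| = 1."""
    return sum(z ** k / k ** 2 for k in range(1, terms + 1))

# special value: Li2(1/2) = pi^2/12 - (ln 2)^2 / 2
print(abs(li2(0.5) - (math.pi ** 2 / 12 - math.log(2) ** 2 / 2)) < 1e-12)  # True

# reflection identity: Li2(z) + Li2(1-z) = pi^2/6 - ln(z) ln(1-z), at z = 0.3
lhs = li2(0.3) + li2(0.7)
rhs = math.pi ** 2 / 6 - math.log(0.3) * math.log(0.7)
print(abs(lhs - rhs) < 1e-12)  # True
```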
In hyperbolic geometry the dilogarithm can be used to compute the volume of an ideal simplex. Specifically, a simplex whose vertices have cross ratio z has hyperbolic volume
$D(z)=\operatorname {Im} \operatorname {Li} _{2}(z)+\arg(1-z)\log |z|.$
The function D(z) is sometimes called the Bloch-Wigner function.[1] Lobachevsky's function and Clausen's function are closely related functions.
William Spence, after whom the function was named by early writers in the field, was a Scottish mathematician working in the early nineteenth century.[2] He was at school with John Galt,[3] who later wrote a biographical essay on Spence.
Analytic structure
Using the former definition above, the dilogarithm function is analytic everywhere on the complex plane except at $z=1$, where it has a logarithmic branch point. The standard choice of branch cut is along the positive real axis $(1,\infty )$. However, the function is continuous at the branch point and takes on the value $\operatorname {Li} _{2}(1)=\pi ^{2}/6$.
Identities
$\operatorname {Li} _{2}(z)+\operatorname {Li} _{2}(-z)={\frac {1}{2}}\operatorname {Li} _{2}(z^{2}).$[4]
$\operatorname {Li} _{2}(1-z)+\operatorname {Li} _{2}\left(1-{\frac {1}{z}}\right)=-{\frac {(\ln z)^{2}}{2}}.$[5]
$\operatorname {Li} _{2}(z)+\operatorname {Li} _{2}(1-z)={\frac {{\pi }^{2}}{6}}-\ln z\cdot \ln(1-z).$[4]
$\operatorname {Li} _{2}(-z)-\operatorname {Li} _{2}(1-z)+{\frac {1}{2}}\operatorname {Li} _{2}(1-z^{2})=-{\frac {{\pi }^{2}}{12}}-\ln z\cdot \ln(z+1).$[5]
$\operatorname {Li} _{2}(z)+\operatorname {Li} _{2}\left({\frac {1}{z}}\right)=-{\frac {\pi ^{2}}{6}}-{\frac {(\ln(-z))^{2}}{2}}.$[4]
Particular value identities
$\operatorname {Li} _{2}\left({\frac {1}{3}}\right)-{\frac {1}{6}}\operatorname {Li} _{2}\left({\frac {1}{9}}\right)={\frac {{\pi }^{2}}{18}}-{\frac {(\ln 3)^{2}}{6}}.$[5]
$\operatorname {Li} _{2}\left(-{\frac {1}{3}}\right)-{\frac {1}{3}}\operatorname {Li} _{2}\left({\frac {1}{9}}\right)=-{\frac {{\pi }^{2}}{18}}+{\frac {(\ln 3)^{2}}{6}}.$[5]
$\operatorname {Li} _{2}\left(-{\frac {1}{2}}\right)+{\frac {1}{6}}\operatorname {Li} _{2}\left({\frac {1}{9}}\right)=-{\frac {{\pi }^{2}}{18}}+\ln 2\cdot \ln 3-{\frac {(\ln 2)^{2}}{2}}-{\frac {(\ln 3)^{2}}{3}}.$[5]
$\operatorname {Li} _{2}\left({\frac {1}{4}}\right)+{\frac {1}{3}}\operatorname {Li} _{2}\left({\frac {1}{9}}\right)={\frac {{\pi }^{2}}{18}}+2\ln 2\cdot \ln 3-2(\ln 2)^{2}-{\frac {2}{3}}(\ln 3)^{2}.$ [5]
$\operatorname {Li} _{2}\left(-{\frac {1}{8}}\right)+\operatorname {Li} _{2}\left({\frac {1}{9}}\right)=-{\frac {1}{2}}\left(\ln {\frac {9}{8}}\right)^{2}.$[5]
$36\operatorname {Li} _{2}\left({\frac {1}{2}}\right)-36\operatorname {Li} _{2}\left({\frac {1}{4}}\right)-12\operatorname {Li} _{2}\left({\frac {1}{8}}\right)+6\operatorname {Li} _{2}\left({\frac {1}{64}}\right)={\pi }^{2}.$
Special values
$\operatorname {Li} _{2}(-1)=-{\frac {{\pi }^{2}}{12}}.$
$\operatorname {Li} _{2}(0)=0.$
$\operatorname {Li} _{2}\left({\frac {1}{2}}\right)={\frac {{\pi }^{2}}{12}}-{\frac {(\ln 2)^{2}}{2}}.$
$\operatorname {Li} _{2}(1)=\zeta (2)={\frac {{\pi }^{2}}{6}},$ where $\zeta (s)$ is the Riemann zeta function.
$\operatorname {Li} _{2}(2)={\frac {{\pi }^{2}}{4}}-i\pi \ln 2.$
${\begin{aligned}\operatorname {Li} _{2}\left(-{\frac {{\sqrt {5}}-1}{2}}\right)&=-{\frac {{\pi }^{2}}{15}}+{\frac {1}{2}}\left(\ln {\frac {{\sqrt {5}}+1}{2}}\right)^{2}\\&=-{\frac {{\pi }^{2}}{15}}+{\frac {1}{2}}\operatorname {arcsch} ^{2}2.\end{aligned}}$
${\begin{aligned}\operatorname {Li} _{2}\left(-{\frac {{\sqrt {5}}+1}{2}}\right)&=-{\frac {{\pi }^{2}}{10}}-\ln ^{2}{\frac {{\sqrt {5}}+1}{2}}\\&=-{\frac {{\pi }^{2}}{10}}-\operatorname {arcsch} ^{2}2.\end{aligned}}$
${\begin{aligned}\operatorname {Li} _{2}\left({\frac {3-{\sqrt {5}}}{2}}\right)&={\frac {{\pi }^{2}}{15}}-\ln ^{2}{\frac {{\sqrt {5}}+1}{2}}\\&={\frac {{\pi }^{2}}{15}}-\operatorname {arcsch} ^{2}2.\end{aligned}}$
${\begin{aligned}\operatorname {Li} _{2}\left({\frac {{\sqrt {5}}-1}{2}}\right)&={\frac {{\pi }^{2}}{10}}-\ln ^{2}{\frac {{\sqrt {5}}+1}{2}}\\&={\frac {{\pi }^{2}}{10}}-\operatorname {arcsch} ^{2}2.\end{aligned}}$
In particle physics
Spence's function is commonly encountered in particle physics when calculating radiative corrections. In this context, the function is often defined with an absolute value inside the logarithm:
$\operatorname {\Phi } (x)=-\int _{0}^{x}{\frac {\ln |1-u|}{u}}\,du={\begin{cases}\operatorname {Li} _{2}(x),&x\leq 1;\\{\frac {\pi ^{2}}{3}}-{\frac {1}{2}}(\ln x)^{2}-\operatorname {Li} _{2}({\frac {1}{x}}),&x>1.\end{cases}}$
See also
• Markstein number
Notes
1. Zagier p. 10
2. "William Spence - Biography".
3. "Biography – GALT, JOHN – Volume VII (1836-1850) – Dictionary of Canadian Biography".
4. Zagier
5. Weisstein, Eric W. "Dilogarithm". MathWorld.
References
• Lewin, L. (1958). Dilogarithms and associated functions. Foreword by J. C. P. Miller. London: Macdonald. MR 0105524.
• Morris, Robert (1979). "The dilogarithm function of a real argument". Math. Comp. 33 (146): 778–787. doi:10.1090/S0025-5718-1979-0521291-X. MR 0521291.
• Loxton, J. H. (1984). "Special values of the dilogarithm". Acta Arith. 43 (2): 155–166. doi:10.4064/aa-43-2-155-166. MR 0736728.
• Kirillov, Anatol N. (1995). "Dilogarithm identities". Progress of Theoretical Physics Supplement. 118: 61–142. arXiv:hep-th/9408113. Bibcode:1995PThPS.118...61K. doi:10.1143/PTPS.118.61. S2CID 119177149.
• Osacar, Carlos; Palacian, Jesus; Palacios, Manuel (1995). "Numerical evaluation of the dilogarithm of complex argument". Celest. Mech. Dyn. Astron. 62 (1): 93–98. Bibcode:1995CeMDA..62...93O. doi:10.1007/BF00692071. S2CID 121304484.
• Zagier, Don (2007). "The Dilogarithm Function". In Pierre Cartier; Pierre Moussa; Bernard Julia; Pierre Vanhove (eds.). Frontiers in Number Theory, Physics, and Geometry II (PDF). pp. 3–65. doi:10.1007/978-3-540-30308-4_1. ISBN 978-3-540-30308-4.
Further reading
• Bloch, Spencer J. (2000). Higher regulators, algebraic K-theory, and zeta functions of elliptic curves. CRM Monograph Series. Vol. 11. Providence, RI: American Mathematical Society. ISBN 0-8218-2114-8. Zbl 0958.19001.
External links
• NIST Digital Library of Mathematical Functions: Dilogarithm
• Weisstein, Eric W. "Dilogarithm". MathWorld.
Spencer Bloch
Spencer Janney Bloch (born May 22, 1944; New York City[1]) is an American mathematician known for his contributions to algebraic geometry and algebraic K-theory. Bloch is a R. M. Hutchins Distinguished Service Professor Emeritus in the Department of Mathematics of the University of Chicago. He is a member of the U.S. National Academy of Sciences[2] and a Fellow of the American Academy of Arts and Sciences[3][4] and of the American Mathematical Society.[5] At the International Congress of Mathematicians he gave an invited lecture in 1978[6] and a plenary lecture in 1990.[4][7] He was a visiting scholar at the Institute for Advanced Study in 1981–82.[8] He received a Humboldt Prize in 1996.[9] He also received a 2021 Leroy P. Steele Prize for Lifetime Achievement.[10]
Bloch at Oberwolfach in 2004
Born: May 22, 1944, New York City
Alma mater: Harvard College, Columbia University
Known for: Bloch–Kato conjectures
Fields: Mathematics
Institutions: University of Chicago
Doctoral advisor: Steven Kleiman
Doctoral students: Caterina Consani, Steven Zucker
See also
• Bloch's formula
• Bloch group
• Bloch–Kato conjecture
• Bloch's higher Chow group
References
1. Spencer Bloch CV, Department of Mathematics, University of Chicago. Accessed January 12, 2010
2. Bloch, Spencer J. U.S. National Academy of Sciences. Accessed January 12, 2010. Election Citation: "Bloch has done pioneering work in the application of higher algebraic K-theory to algebraic geometry, particularly in problems related to algebraic cycles, and is regarded as the world's leader in this field. His work has firmly established higher K-theory as a fundamental tool in algebraic geometry."
3. American Academy of Arts & Sciences, NEWLY ELECTED MEMBERS, APRIL 2009 Archived August 8, 2013, at the Wayback Machine, American Academy of Arts and Sciences. Accessed January 12, 2010
4. Scholars, visiting faculty, leaders represent Chicago as AAAS fellows, The University of Chicago Chronicle, April 30, 2009, Vol. 28 No. 15. Accessed January 12, 2010
5. List of Fellows of the American Mathematical Society, retrieved November 10, 2012.
6. Bloch, S. (1978). "Algebraic K-theory and zeta functions of elliptic curves". In: Proceedings of the International Congress of Mathematicians (Helsinki, 1978). pp. 511–515.
7. Bloch, S. (1991). "Algebraic K-theory, motives, and algebraic cycles". In: Proceedings of the International Congress of Mathematicians, August 21–29, 1990, Kyoto, Japan. Mathematical Society of Japan. pp. 43–54.
8. Institute for Advanced Study: A Community of Scholars
9. Annual Report of the Provost, 1995–96, University of Chicago. Accessed January 12, 2010.
10. American Mathematical Society Announcement, November 19, 2020. Accessed November 25, 2020.
• James D. Lewis, and Rob de Jeu (Editors), Motives and Algebraic Cycles: A Celebration in Honour of Spencer J. Bloch. Fields Institute Communications series, 2009, American Mathematical Society. ISBN 0-8218-4494-6
External links
• Spencer Bloch personal webpage, Department of Mathematics, University of Chicago
• Spencer Bloch at the Mathematics Genealogy Project
Spencer cohomology
In mathematics, Spencer cohomology is cohomology of a manifold with coefficients in the sheaf of solutions of a linear partial differential operator. It was introduced by Donald C. Spencer in 1969.
References
• Lychagin, Valentin (2001) [1994], "Spencer cohomology", Encyclopedia of Mathematics, EMS Press
• Spencer, D. C. (1969), "Overdetermined systems of linear partial differential equations", Bulletin of the American Mathematical Society, 75 (2): 179–239, doi:10.1090/S0002-9904-1969-12129-4, ISSN 0002-9904, MR 0242200
Sperner's lemma
In mathematics, Sperner's lemma is a combinatorial result on colorings of triangulations, analogous to the Brouwer fixed point theorem, which is equivalent to it.[1] It states that every Sperner coloring (described below) of a triangulation of an $n$-dimensional simplex contains a cell whose vertices all have different colors.
For the theorem in extremal set theory, see Sperner's theorem.
The initial result of this kind was proved by Emanuel Sperner, in relation with proofs of invariance of domain. Sperner colorings have been used for effective computation of fixed points and in root-finding algorithms, and are applied in fair division (cake cutting) algorithms.
According to the Soviet Mathematical Encyclopaedia (ed. I.M. Vinogradov), a related 1929 theorem (of Knaster, Borsuk and Mazurkiewicz) had also become known as the Sperner lemma – this point is discussed in the English translation (ed. M. Hazewinkel). It is now commonly known as the Knaster–Kuratowski–Mazurkiewicz lemma.
Statement
One-dimensional case
In one dimension, Sperner's Lemma can be regarded as a discrete version of the intermediate value theorem. In this case, it essentially says that if a discrete function takes only the values 0 and 1, begins at the value 0 and ends at the value 1, then it must switch values an odd number of times.
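The one-dimensional statement can be checked in a couple of lines (the sequence below is an arbitrary illustrative example):

```python
def switches(values):
    """Count positions where consecutive entries differ."""
    return sum(1 for a, b in zip(values, values[1:]) if a != b)

# any 0/1 sequence that starts at 0 and ends at 1 switches an odd number of times
seq = [0, 0, 1, 1, 0, 1, 0, 0, 1]
print(switches(seq))   # 5, which is odd
```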
Two-dimensional case
The two-dimensional case is the one referred to most frequently. It is stated as follows:
Subdivide a triangle ABC arbitrarily into a triangulation consisting of smaller triangles meeting edge to edge. Then a Sperner coloring of the triangulation is defined as an assignment of three colors to the vertices of the triangulation such that
1. Each of the three vertices A, B, and C of the initial triangle has a distinct color
2. The vertices that lie along any edge of triangle ABC have only two colors, the two colors at the endpoints of the edge. For example, each vertex on AC must have the same color as A or C.
Then every Sperner coloring of every triangulation has at least one "rainbow triangle", a smaller triangle in the triangulation that has its vertices colored with all three different colors. More precisely, there must be an odd number of rainbow triangles.
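The two-dimensional statement can be verified by brute force on small instances. The sketch below (all names are illustrative) uses the standard lattice triangulation of a triangle of side N into unit cells, draws random colorings obeying the Sperner boundary conditions, and counts rainbow cells:

```python
import random

def sperner_coloring(N, rng):
    """Random coloring of the lattice triangulation satisfying Sperner's conditions."""
    color = {}
    for i in range(N + 1):
        for j in range(N + 1 - i):
            if (i, j) == (N, 0):
                c = 1                       # corner A
            elif (i, j) == (0, N):
                c = 2                       # corner B
            elif (i, j) == (0, 0):
                c = 3                       # corner C
            elif j == 0:
                c = rng.choice([1, 3])      # edge AC: only its endpoint colors
            elif i == 0:
                c = rng.choice([2, 3])      # edge BC
            elif i + j == N:
                c = rng.choice([1, 2])      # edge AB
            else:
                c = rng.choice([1, 2, 3])   # interior: unrestricted
            color[(i, j)] = c
    return color

def rainbow_count(N, color):
    """Count unit triangles whose three vertices carry all three colors."""
    count = 0
    for i in range(N):
        for j in range(N - i):
            up = {color[(i, j)], color[(i + 1, j)], color[(i, j + 1)]}
            if up == {1, 2, 3}:
                count += 1
            if i + j <= N - 2:              # downward-pointing cell exists here
                down = {color[(i + 1, j)], color[(i, j + 1)], color[(i + 1, j + 1)]}
                if down == {1, 2, 3}:
                    count += 1
    return count

rng = random.Random(0)
for _ in range(20):
    col = sperner_coloring(6, rng)
    assert rainbow_count(6, col) % 2 == 1   # always an odd number of rainbow cells
print("all colorings had an odd number of rainbow triangles")
```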
Multidimensional case
In the general case the lemma refers to a n-dimensional simplex:
${\mathcal {A}}=A_{1}A_{2}\ldots A_{n+1}.$
Consider any triangulation T, a disjoint division of ${\mathcal {A}}$ into smaller n-dimensional simplices, again meeting face-to-face. Denote the coloring function as:
$f:S\to \{1,2,3,\dots ,n,n+1\},$
where S is the set of vertices of T. A coloring function defines a Sperner coloring when:
1. The vertices of the large simplex are colored with different colors, that is, without loss of generality, f(Ai) = i for 1 ≤ i ≤ n + 1.
2. Vertices of T located on any k-dimensional subface of the large simplex
$A_{i_{1}}A_{i_{2}}\ldots A_{i_{k+1}}$
are colored only with the colors
$i_{1},i_{2},\ldots ,i_{k+1}.$
Then every Sperner coloring of every triangulation of the n-dimensional simplex has an odd number of instances of a rainbow simplex, meaning a simplex whose vertices are colored with all n + 1 colors. In particular, there must be at least one rainbow simplex.
Proof
We shall first address the two-dimensional case. Consider a graph G built from the triangulation T as follows:
The vertices of G are the members of T plus the area outside the triangle. Two vertices are connected with an edge if their corresponding areas share a common border with one endpoint colored 1 and the other colored 2.
Note that on the interval AB there is an odd number of borders colored 1-2 (simply because A is colored 1, B is colored 2; and as we move along AB, there must be an odd number of color changes in order to get different colors at the beginning and at the end). Therefore, the vertex of G corresponding to the outer area has an odd degree. But it is known (the handshaking lemma) that in a finite graph there is an even number of vertices with odd degree. Therefore, the remaining graph, excluding the outer area, has an odd number of vertices with odd degree corresponding to members of T.
It can be easily seen that the only possible degree of a triangle from T is 0, 1, or 2, and that the degree 1 corresponds to a triangle colored with the three colors 1, 2, and 3.
Thus we have obtained a slightly stronger conclusion, which says that in a triangulation T there is an odd number (and at least one) of full-colored triangles.
A multidimensional case can be proved by induction on the dimension of a simplex. We apply the same reasoning as in the two-dimensional case to conclude that in an n-dimensional triangulation there is an odd number of full-colored simplices.
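The two-dimensional argument can be checked mechanically. The following Python sketch (an illustration, not part of the proof) triangulates a triangle of side N in integer barycentric coordinates, uses the standard Sperner coloring that assigns each vertex the index of its first positive coordinate, and verifies both the degree claims and the parity of the rainbow count:

```python
N = 4  # side length of the triangulated big triangle

# Vertices are integer barycentric coordinates (a, b, c) with a + b + c = N.
# Sperner coloring (an assumed choice for this sketch): each vertex gets the
# index of its first positive coordinate, so the corners get colors 1, 2, 3
# and each side of the big triangle avoids one color.
def color(v):
    return next(i + 1 for i, x in enumerate(v) if x > 0)

def small_triangles(N):
    # upward-pointing triangles
    for a in range(N):
        for b in range(N - a):
            c = N - 1 - a - b
            yield ((a + 1, b, c), (a, b + 1, c), (a, b, c + 1))
    # downward-pointing triangles
    for a in range(N - 1):
        for b in range(N - 1 - a):
            c = N - 2 - a - b
            yield ((a + 1, b + 1, c), (a + 1, b, c + 1), (a, b + 1, c + 1))

rainbow = 0
for tri in small_triangles(N):
    cols = [color(v) for v in tri]
    # degree of this triangle in the graph G = number of its edges
    # whose endpoint colors are exactly {1, 2}
    edges = [(cols[0], cols[1]), (cols[1], cols[2]), (cols[2], cols[0])]
    deg = sum(1 for e in edges if set(e) == {1, 2})
    assert deg in (0, 1, 2)          # the only possible degrees, as in the proof
    if sorted(cols) == [1, 2, 3]:
        assert deg == 1              # a rainbow triangle has degree exactly 1
        rainbow += 1
    else:
        assert deg in (0, 2)
print(rainbow)                       # odd, so at least one rainbow triangle exists
assert rainbow % 2 == 1
```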
Commentary
Here is an elaboration of the proof given previously, for a reader new to graph theory.
This diagram numbers the colors of the vertices of the example given previously. The small triangles whose vertices all have different numbers are shaded in the graph. Each small triangle becomes a node in the new graph derived from the triangulation. The small letters identify the areas, eight inside the figure, and area i designates the space outside of it.
As described previously, those nodes that share an edge whose endpoints are numbered 1 and 2 are joined in the derived graph. For example, node d shares an edge with the outer area i, and its vertices all have different numbers, so it is also shaded. Node b is not shaded because two vertices have the same number, but it is joined to the outer area.
One could add a new full-numbered triangle, say by inserting a node numbered 3 into the edge between 1 and 1 of node a, and joining that node to the other vertex of a. Doing so would have to create a pair of new nodes, like the situation with nodes f and g.
Computing a Sperner simplex
Suppose there is a d-dimensional simplex of side-length N, and it is triangulated into sub-simplices of side-length 1. There is a function that, given any vertex of the triangulation, returns its color. The coloring is guaranteed to satisfy Sperner's boundary condition. How many times do we have to call the function in order to find a rainbow simplex? Obviously, we can go over all the triangulation vertices, whose number is $O(N^{d})$, which is polynomial in N when the dimension is fixed. But can it be done in time O(poly(log N))?
This problem was first studied by Christos Papadimitriou. He introduced a complexity class called PPAD, which contains this as well as related problems (such as finding a Brouwer fixed point). He proved that finding a Sperner simplex is PPAD-complete even for d=3. Some 15 years later, Chen and Deng proved PPAD-completeness even for d=2.[2] It is believed that PPAD-hard problems cannot be solved in time O(poly(log N)).
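In one dimension the question has an easy positive answer: a path of N + 1 vertices whose endpoints are colored 1 and 2 must contain a 1-2 edge, and bisection finds one with O(log N) color queries; it is the higher-dimensional analogue of this search that is PPAD-complete. A minimal Python sketch (the oracle `c` and the threshold `t` are hypothetical, chosen only for illustration):

```python
def find_switch(color, N):
    """1-D Sperner: vertices 0..N with color(0) == 1 and color(N) == 2.
    Binary search maintains color(lo) == 1 and color(hi) == 2 and finds
    an edge (i, i+1) colored 1-2 using O(log N) oracle calls."""
    lo, hi = 0, N
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if color(mid) == 1:
            lo = mid
        else:
            hi = mid
    return lo  # color(lo) == 1 and color(lo + 1) == 2

# toy oracle: color is 1 below a hidden threshold t, else 2
t = 73
c = lambda i: 1 if i < t else 2
i = find_switch(c, 1000)
assert c(i) == 1 and c(i + 1) == 2
```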
Generalizations
Subsets of labels
Suppose that each vertex of the triangulation may be labeled with multiple colors, so that the coloring function is $f:S\to 2^{[n+1]}$.
For every sub-simplex, the set of labelings on its vertices is a set-family over the set of colors [n + 1]. This set-family can be seen as a hypergraph.
If, for every vertex v on a face of the simplex, the colors in f(v) are a subset of the set of colors on the face endpoints, then there exists a sub-simplex with a balanced labeling – a labeling in which the corresponding hypergraph admits a perfect fractional matching. To illustrate, here are some balanced labeling examples for n = 2:
• ({1}, {2}, {3}) - balanced by the weights (1, 1, 1).
• ({1,2}, {2,3}, {3,1}) - balanced by the weights (1/2, 1/2, 1/2).
• ({1,2}, {2,3}, {1}) - balanced by the weights (0, 1, 1).
This was proved by Shapley in 1973.[3] It is a combinatorial analogue of the KKMS lemma.
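The perfect-fractional-matching condition is easy to state in code. A small Python check (a sketch; `is_balanced` is a name chosen here, not from the literature) confirms the three examples above:

```python
from fractions import Fraction as F

def is_balanced(labels, weights):
    """A labeling is balanced if the weights form a perfect fractional
    matching: for every color, the weights of the label sets containing
    that color sum to exactly 1."""
    colors = set().union(*labels)
    return all(sum(w for s, w in zip(labels, weights) if c in s) == 1
               for c in colors)

assert is_balanced([{1}, {2}, {3}], [F(1), F(1), F(1)])
assert is_balanced([{1, 2}, {2, 3}, {3, 1}], [F(1, 2)] * 3)
assert is_balanced([{1, 2}, {2, 3}, {1}], [F(0), F(1), F(1)])
# a non-example: color 2 would be covered with total weight 2
assert not is_balanced([{1, 2}, {2, 3}, {3}], [F(1)] * 3)
```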
Polytopal variants
Suppose that we have a d-dimensional polytope P with n vertices. P is triangulated, and each vertex of the triangulation is labeled with a label from {1, …, n}. Every main vertex i is labeled i. A sub-simplex is called fully-labeled if it is d-dimensional, and each of its d + 1 vertices has a different label. If every vertex in a face F of P is labeled with one of the labels of the vertices of F, then there are at least n – d fully-labeled simplices. Some special cases are:
• d = n – 1. In this case, P is a simplex. The polytopal Sperner lemma guarantees that there is at least 1 fully-labeled simplex. That is, it reduces to Sperner's lemma.
• d = 2. Suppose a two-dimensional polygon with n vertices is triangulated and labeled using the labels 1, …, n such that, on each face between vertex i and vertex i + 1 (mod n), only the labels i and i + 1 are used. Then, there are at least n – 2 sub-triangles in which three different labels are used.
The general statement was conjectured by Atanassov in 1996, who proved it for the case d = 2.[4] The proof of the general case was first given by de Loera, Peterson, and Su in 2002.[5] They provide two proofs: the first is non-constructive and uses the notion of pebble sets; the second is constructive and is based on arguments of following paths in graphs.
Meunier[6] extended the theorem from polytopes to polytopal bodies, which need not be convex or simply-connected. In particular, if P is a polytope, then the set of its faces is a polytopal body. In every Sperner labeling of a polytopal body with vertices v1, …, vn, there are at least:
$n-d-1+\left\lceil {\frac {\min _{i=1}^{n}\deg _{B(P)}(v_{i})}{d}}\right\rceil $
fully-labeled simplices such that any pair of these simplices receives two different labelings. The degree $\deg _{B(P)}(v_{i})$ is the number of edges of the polytopal body B(P) to which vi belongs. Since the degree is at least d, the lower bound is at least n – d. But it can be larger. For example, for the cyclic polytope in 4 dimensions with n vertices, the lower bound is:
$n-4-1+\left\lceil {\frac {n-1}{4}}\right\rceil \approx {\frac {5}{4}}n.$
Musin[7] further extended the theorem to d-dimensional piecewise-linear manifolds, with or without a boundary.
Asada, Frick, Pisharody, Polevy, Stoner, Tsang and Wellner[8] further extended the theorem to pseudomanifolds with boundary, and improved the lower bound on the number of facets with pairwise-distinct labels.
Cubic variants
Suppose that, instead of a simplex triangulated into sub-simplices, we have an n-dimensional cube partitioned into smaller n-dimensional cubes.
Harold W. Kuhn[9] proved the following lemma. Suppose the cube $[0,M]^{n}$, for some integer M, is partitioned into $M^{n}$ unit cubes. Suppose each vertex of the partition is labeled with a label from {1, …, n + 1}, such that for every vertex v: (1) if vi = 0 then the label on v is at most i; (2) if vi = M then the label on v is not i. Then there exists a unit cube with all the labels {1, …, n + 1} (some of them more than once). The special case n = 2 is: suppose a square is partitioned into sub-squares, and each vertex is labeled with a label from {1,2,3}. The left edge is labeled with 1 (= at most 1); the bottom edge is labeled with 1 or 2 (= at most 2); the top edge is labeled with 1 or 3 (= not 2); and the right edge is labeled with 2 or 3 (= not 1). Then there is a square labeled with 1,2,3.
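Kuhn's two-dimensional case can be verified by brute force. The following Python sketch builds a labeling satisfying the boundary conditions (the thresholds `t1`, `t2` are arbitrary choices made for this demonstration) and searches every unit square:

```python
M = 8
t1, t2 = 3, 5          # arbitrary thresholds with 1 <= t <= M (assumed)

def label(v):
    x, y = v
    if x < t1: return 1
    if y < t2: return 2
    return 3

# check Kuhn's boundary conditions for n = 2
for x in range(M + 1):
    for y in range(M + 1):
        L = label((x, y))
        if x == 0: assert L <= 1    # left edge: at most 1
        if y == 0: assert L <= 2    # bottom edge: at most 2
        if x == M: assert L != 1    # right edge: not 1
        if y == M: assert L != 2    # top edge: not 2

# brute-force search for a unit square carrying all of the labels {1, 2, 3}
hits = [(i, j) for i in range(M) for j in range(M)
        if {label((i + dx, j + dy)) for dx in (0, 1) for dy in (0, 1)} == {1, 2, 3}]
assert hits            # Kuhn's lemma guarantees at least one such square
print(hits[0])
```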
Another variant, related to the Poincaré–Miranda theorem,[10] is as follows. Suppose the cube $[0,M]^{n}$ is partitioned into $M^{n}$ unit cubes. Suppose each vertex is labeled with a binary vector of length n, such that for every vertex v: (1) if vi = 0 then coordinate i of the label on v is 0; (2) if vi = M then coordinate i of the label on v is 1; (3) if two vertices are neighbors, then their labels differ in at most one coordinate. Then there exists a unit cube in which all $2^{n}$ labels are different. In two dimensions, another way to formulate this theorem is:[11] in any labeling that satisfies conditions (1) and (2), there is at least one cell in which the sum of labels is 0 [a 1-dimensional cell with labels (1,1) and (-1,-1), or a 2-dimensional cell with all four different labels].
Wolsey[12] strengthened these two results by proving that the number of completely-labeled cubes is odd.
Musin[11] extended these results to general quadrangulations.
Rainbow variants
Suppose that, instead of a single labeling, we have n different Sperner labelings. We consider pairs (simplex, permutation) such that, the label of each vertex of the simplex is chosen from a different labeling (so for each simplex, there are n! different pairs). Then there are at least n! fully labeled pairs. This was proved by Ravindra Bapat[13] for any triangulation. A simpler proof, which only works for specific triangulations, was presented later by Su.[14]
Another way to state this lemma is as follows. Suppose there are n people, each of whom produces a different Sperner labeling of the same triangulation. Then, there exists a simplex, and a matching of the people to its vertices, such that each vertex is labeled by its owner differently (one person labels its vertex by 1, another person labels its vertex by 2, etc.). Moreover, there are at least n! such matchings. This can be used to find an envy-free cake-cutting with connected pieces.
Asada, Frick, Pisharody, Polevy, Stoner, Tsang and Wellner[8] extended this theorem to pseudomanifolds with boundary.
More generally, suppose we have m different Sperner labelings, where m may be different from n. Then:[15]: Thm 2.1
1. For any positive integers k1, …, km whose sum is m + n – 1, there exists a baby-simplex on which, for every i ∈ {1, …, m}, labeling number i uses at least ki (out of n) distinct labels. Moreover, each label is used by at least one (out of m) labeling.
2. For any positive integers l1, …, ln whose sum is m + n – 1, there exists a baby-simplex on which, for every j ∈ {1, …, n}, the label j is used by at least lj (out of m) different labelings.
Both versions reduce to Sperner's lemma when m = 1, or when all m labelings are identical.
See [16] for similar generalizations.
Oriented variants
Examples of boundary sequences and their degrees:
• 123: degree 1 (one 1-2 switch and no 2-1 switch)
• 12321: degree 0 (one 1-2 switch minus one 2-1 switch)
• 1232: degree 0 (as above; recall the sequence is cyclic)
• 1231231: degree 2 (two 1-2 switches and no 2-1 switch)
Brown and Cairns[17] strengthened Sperner's lemma by considering the orientation of simplices. Each sub-simplex has an orientation that can be either +1 or -1 (if it is fully-labeled), or 0 (if it is not fully-labeled). They proved that the sum of all orientations of simplices is +1. In particular, this implies that there is an odd number of fully-labeled simplices.
As an example, in the two-dimensional case, suppose a triangle is triangulated and labeled with {1,2,3}. Consider the cyclic sequence of labels on the boundary of the triangle. Define the degree of the labeling as the number of switches from 1 to 2, minus the number of switches from 2 to 1. See the examples listed above. Note that the degree is the same if we count switches from 2 to 3 minus 3 to 2, or from 3 to 1 minus 1 to 3.
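Computing the degree of a boundary labeling is a one-liner over cyclic pairs. The following Python sketch reproduces the examples above and checks the invariance of the degree under the choice of switch pair:

```python
def degree(seq):
    """Number of cyclic 1->2 switches minus 2->1 switches."""
    pairs = list(zip(seq, seq[1:] + seq[:1]))     # consecutive pairs, cyclically
    return pairs.count((1, 2)) - pairs.count((2, 1))

examples = [[1, 2, 3], [1, 2, 3, 2, 1], [1, 2, 3, 2], [1, 2, 3, 1, 2, 3, 1]]
assert [degree(s) for s in examples] == [1, 0, 0, 2]

# the degree is the same when counted with 2->3/3->2 switches instead
def degree23(seq):
    pairs = list(zip(seq, seq[1:] + seq[:1]))
    return pairs.count((2, 3)) - pairs.count((3, 2))

assert all(degree(s) == degree23(s) for s in examples)
```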
Musin proved that the number of fully labeled triangles is at least the degree of the labeling.[18] In particular, if the degree is nonzero, then there exists at least one fully labeled triangle.
If a labeling satisfies the Sperner condition, then its degree is exactly 1: there are 1-2 and 2-1 switches only in the side between vertices 1 and 2, and the number of 1-2 switches must be one more than the number of 2-1 switches (when walking from vertex 1 to vertex 2). Therefore, the original Sperner lemma follows from Musin's theorem.
Trees and cycles
There is a similar lemma about finite and infinite trees and cycles.[19]
Related results
Mirzakhani and Vondrak[20] study a weaker variant of a Sperner labeling, in which the only requirement is that label i is not used on the face opposite to vertex i. They call it Sperner-admissible labeling. They show that there are Sperner-admissible labelings in which every cell contains at most 4 labels. They also prove an optimal lower bound on the number of cells that must have at least two different labels in each Sperner-admissible labeling. They also prove that, for any Sperner-admissible partition of the regular simplex, the total area of the boundary between the parts is minimized by the Voronoi partition.
Applications
Sperner colorings have been used for effective computation of fixed points. A Sperner coloring can be constructed such that fully labeled simplices correspond to fixed points of a given function. By making a triangulation smaller and smaller, one can show that the limit of the fully labeled simplices is exactly the fixed point. Hence, the technique provides a way to approximate fixed points. A related application is the numerical detection of periodic orbits and symbolic dynamics.[21] Sperner's lemma can also be used in root-finding algorithms and fair division algorithms; see Simmons–Su protocols.
Sperner's lemma is one of the key ingredients of the proof of Monsky's theorem, that a square cannot be cut into an odd number of equal-area triangles.[22]
Sperner's lemma can be used to find a competitive equilibrium in an exchange economy, although there are more efficient ways to find it.[23]: 67
Fifty years after first publishing it, Sperner presented a survey on the development, influence and applications of his combinatorial lemma.[24]
Equivalent results
There are several fixed-point theorems which come in three equivalent variants: an algebraic topology variant, a combinatorial variant and a set-covering variant. Each variant can be proved separately using totally different arguments, but each variant can also be reduced to the other variants in its row. Additionally, each result in the top row can be deduced from the one below it in the same column.[25]
Algebraic topologyCombinatoricsSet covering
Brouwer fixed-point theoremSperner's lemmaKnaster–Kuratowski–Mazurkiewicz lemma
Borsuk–Ulam theoremTucker's lemmaLusternik–Schnirelmann theorem
See also
• Topological combinatorics
References
1. Flegg, H. Graham (1974). From Geometry to Topology. London: English University Press. pp. 84–89. ISBN 0-340-05324-0.
2. Chen, Xi; Deng, Xiaotie (2009-10-17). "On the complexity of 2D discrete fixed point problem". Theoretical Computer Science. Automata, Languages and Programming (ICALP 2006). 410 (44): 4448–4456. doi:10.1016/j.tcs.2009.07.052. ISSN 0304-3975. S2CID 2831759.
3. Shapley, L. S. (1973-01-01), Hu, T. C.; Robinson, Stephen M. (eds.), "On Balanced Games without Side Payments", Mathematical Programming, Academic Press, pp. 261–290, ISBN 978-0-12-358350-5, retrieved 2020-06-29
4. Atanassov, K. T. (1996), "On Sperner's lemma", Studia Scientiarum Mathematicarum Hungarica, 32 (1–2): 71–74, MR 1405126
5. De Loera, Jesus A.; Peterson, Elisha; Su, Francis Edward (2002), "A polytopal generalization of Sperner's lemma", Journal of Combinatorial Theory, Series A, 100 (1): 1–26, doi:10.1006/jcta.2002.3274, MR 1932067
6. Meunier, Frédéric (2006-10-01). "Sperner labellings: A combinatorial approach". Journal of Combinatorial Theory. Series A. 113 (7): 1462–1475. doi:10.1016/j.jcta.2006.01.006. ISSN 0097-3165.
7. Musin, Oleg R. (2015-05-01). "Extensions of Sperner and Tucker's lemma for manifolds". Journal of Combinatorial Theory. Series A. 132: 172–187. doi:10.1016/j.jcta.2014.12.001. ISSN 0097-3165. S2CID 5699192.
8. Asada, Megumi; Frick, Florian; Pisharody, Vivek; Polevy, Maxwell; Stoner, David; Tsang, Ling Hei; Wellner, Zoe (2018-01-01). "Fair Division and Generalizations of Sperner- and KKM-type Results". SIAM Journal on Discrete Mathematics. 32 (1): 591–610. arXiv:1701.04955. doi:10.1137/17M1116210. ISSN 0895-4801. S2CID 43932757.
9. Kuhn, H. W. (1960), "Some Combinatorial Lemmas in Topology", IBM Journal of Research and Development, 4 (5): 518–524, doi:10.1147/rd.45.0518
10. Michael Müger (2016), Topology for the working mathematician (PDF), Draft
11. Musin, Oleg R. (2015), "Sperner type lemma for quadrangulations", Moscow Journal of Combinatorics and Number Theory, 5 (1–2): 26–35, arXiv:1406.5082, MR 3476207
12. Wolsey, Laurence A (1977-07-01). "Cubical sperner lemmas as applications of generalized complementary pivoting". Journal of Combinatorial Theory. Series A. 23 (1): 78–87. doi:10.1016/0097-3165(77)90081-4. ISSN 0097-3165.
13. Bapat, R. B. (1989). "A constructive proof of a permutation-based generalization of Sperner's lemma". Mathematical Programming. 44 (1–3): 113–120. doi:10.1007/BF01587081. S2CID 5325605.
14. Su, F. E. (1999). "Rental Harmony: Sperner's Lemma in Fair Division". The American Mathematical Monthly. 106 (10): 930–942. doi:10.2307/2589747. JSTOR 2589747.
15. Meunier, Frédéric; Su, Francis Edward (2019). "Multilabeled versions of Sperner's and Fan's lemmas and applications". SIAM Journal on Applied Algebra and Geometry. 3 (3): 391–411. arXiv:1801.02044. doi:10.1137/18M1192548. S2CID 3762597.
16. Asada, Megumi; Frick, Florian; Pisharody, Vivek; Polevy, Maxwell; Stoner, David; Tsang, Ling Hei; Wellner, Zoe (2018). "Fair Division and Generalizations of Sperner- and KKM-type Results". SIAM Journal on Discrete Mathematics. 32 (1): 591–610. arXiv:1701.04955. doi:10.1137/17m1116210. S2CID 43932757.
17. Brown, A. B.; Cairns, S. S. (1961-01-01). "Strengthening of Sperner's Lemma Applied to Homology Theory". Proceedings of the National Academy of Sciences. 47 (1): 113–114. Bibcode:1961PNAS...47..113B. doi:10.1073/pnas.47.1.113. ISSN 0027-8424. PMC 285253. PMID 16590803.
18. Oleg R Musin (2014). "Around Sperner's lemma". arXiv:1405.7513 [math.CO].
19. Niedermaier, Andrew; Rizzolo, Douglas; Su, Francis Edward (2014), "A tree Sperner lemma", in Barg, Alexander; Musin, Oleg R. (eds.), Discrete Geometry and Algebraic Combinatorics, Contemporary Mathematics, vol. 625, Providence, RI: American Mathematical Society, pp. 77–92, arXiv:0909.0339, doi:10.1090/conm/625/12492, ISBN 9781470409050, MR 3289406, S2CID 115157240
20. Mirzakhani, Maryam; Vondrák, Jan (2017), Loebl, Martin; Nešetřil, Jaroslav; Thomas, Robin (eds.), "Sperner's Colorings and Optimal Partitioning of the Simplex", A Journey Through Discrete Mathematics: A Tribute to Jiří Matoušek, Cham: Springer International Publishing, pp. 615–631, arXiv:1611.08339, doi:10.1007/978-3-319-44479-6_25, ISBN 978-3-319-44479-6, S2CID 38668858, retrieved 2022-04-25
21. Gidea, Marian; Shmalo, Yitzchak (2018). "Combinatorial approach to detection of fixed points, periodic orbits, and symbolic dynamics". Discrete & Continuous Dynamical Systems - A. American Institute of Mathematical Sciences (AIMS). 38 (12): 6123–6148. doi:10.3934/dcds.2018264. ISSN 1553-5231. S2CID 119130905.
22. Aigner, Martin; Ziegler, Günter M. (2010), "One square and an odd number of triangles", Proofs from The Book (4th ed.), Berlin: Springer-Verlag, pp. 131–138, doi:10.1007/978-3-642-00856-6_20, ISBN 978-3-642-00855-9
23. Scarf, Herbert (1967). "The Core of an N Person Game". Econometrica. 35 (1): 50–69. doi:10.2307/1909383. JSTOR 1909383.
24. Sperner, Emanuel (1980), "Fifty years of further development of a combinatorial lemma", Numerical solution of highly nonlinear problems (Sympos. Fixed Point Algorithms and Complementarity Problems, Univ. Southampton, Southampton, 1979), North-Holland, Amsterdam-New York, pp. 183–197, 199–217, MR 0559121
25. Nyman, Kathryn L.; Su, Francis Edward (2013), "A Borsuk–Ulam equivalent that directly implies Sperner's lemma", The American Mathematical Monthly, 120 (4): 346–354, doi:10.4169/amer.math.monthly.120.04.346, JSTOR 10.4169/amer.math.monthly.120.04.346, MR 3035127
External links
• Proof of Sperner's Lemma at cut-the-knot
• Sperner's lemma and the Triangle Game, at the NRICH site.
• Sperner's lemma in 2D, a web-based game at itch.io.
|
Wikipedia
|
Sperner property of a partially ordered set
In order-theoretic mathematics, a graded partially ordered set is said to have the Sperner property (and hence is called a Sperner poset), if no antichain within it is larger than the largest rank level (one of the sets of elements of the same rank) in the poset.[1] Since every rank level is itself an antichain, the Sperner property is equivalently the property that some rank level is a maximum antichain.[2] The Sperner property and Sperner posets are named after Emanuel Sperner, who proved Sperner's theorem stating that the family of all subsets of a finite set (partially ordered by set inclusion) has this property. The lattice of partitions of a finite set typically lacks the Sperner property.[3]
Variations
A k-Sperner poset is a graded poset in which no union of k antichains is larger than the union of the k largest rank levels,[1] or, equivalently, the poset has a maximum k-family consisting of k rank levels.[2]
A strict Sperner poset is a graded poset in which all maximum antichains are rank levels.[2]
A strongly Sperner poset is a graded poset which is k-Sperner for all values of k up to the largest rank value.[2]
References
1. Stanley, Richard (1984), "Quotients of Peck posets", Order, 1 (1): 29–34, doi:10.1007/BF00396271, MR 0745587, S2CID 14857863.
2. Handbook of discrete and combinatorial mathematics, by Kenneth H. Rosen, John G. Michaels
3. Graham, R. L. (June 1978), "Maximum antichains in the partition lattice" (PDF), The Mathematical Intelligencer, 1 (2): 84–86, doi:10.1007/BF03023067, MR 0505555, S2CID 120190991
Sperner's theorem
Sperner's theorem, in discrete mathematics, describes the largest possible families of finite sets none of which contain any other sets in the family. It is one of the central results in extremal set theory. It is named after Emanuel Sperner, who published it in 1928.
For the theorem about simplexes, see Sperner's lemma.
This result is sometimes called Sperner's lemma, but the name "Sperner's lemma" also refers to an unrelated result on coloring triangulations. To differentiate the two results, the result on the size of a Sperner family is now more commonly known as Sperner's theorem.
Statement
A family of sets in which none of the sets is a strict subset of another is called a Sperner family, or an antichain of sets, or a clutter. For example, the family of k-element subsets of an n-element set is a Sperner family. No set in this family can contain any of the others, because a containing set has to be strictly bigger than the set it contains, and in this family all sets have equal size. The value of k that makes this example have as many sets as possible is n/2 if n is even, or either of the nearest integers to n/2 if n is odd. For this choice, the number of sets in the family is ${\tbinom {n}{\lfloor n/2\rfloor }}$.
Sperner's theorem states that these examples are the largest possible Sperner families over an n-element set. Formally, the theorem states that,
1. for every Sperner family S whose union has a total of n elements, $|S|\leq {\binom {n}{\lfloor n/2\rfloor }},$ and
2. equality holds if and only if S consists of all subsets of an n-element set that have size $\lfloor n/2\rfloor $ or all that have size $\lceil n/2\rceil $.
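For small n, the first part of the theorem can be checked exhaustively. A brute-force Python sketch (feasible only for roughly n ≤ 4, since it enumerates families of subsets):

```python
from itertools import combinations
from math import comb

def is_antichain(family):
    # no set strictly contains another (frozenset '<' is proper subset)
    return all(not (a < b or b < a) for a, b in combinations(family, 2))

def width(n):
    """Brute-force width of the inclusion order on subsets of {0, ..., n-1}:
    the size of the largest Sperner family."""
    subsets = [frozenset(s) for k in range(n + 1)
               for s in combinations(range(n), k)]
    for r in range(len(subsets), 0, -1):
        if any(is_antichain(f) for f in combinations(subsets, r)):
            return r

# Sperner's theorem: the width equals the middle binomial coefficient
for n in range(1, 5):
    assert width(n) == comb(n, n // 2)
```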
Partial orders
Sperner's theorem can also be stated in terms of partial order width. The family of all subsets of an n-element set (its power set) can be partially ordered by set inclusion; in this partial order, two distinct elements are said to be incomparable when neither of them contains the other. The width of a partial order is the largest number of elements in an antichain, a set of pairwise incomparable elements. Translating this terminology into the language of sets, an antichain is just a Sperner family, and the width of the partial order is the maximum number of sets in a Sperner family. Thus, another way of stating Sperner's theorem is that the width of the inclusion order on a power set is ${\binom {n}{\lfloor n/2\rfloor }}$.
A graded partially ordered set is said to have the Sperner property when one of its largest antichains is formed by a set of elements that all have the same rank. In this terminology, Sperner's theorem states that the partially ordered set of all subsets of a finite set, partially ordered by set inclusion, has the Sperner property.
Proof
There are many proofs of Sperner's theorem, each leading to different generalizations (see Anderson (1987)).
The following proof is due to Lubell (1966). Let sk denote the number of k-sets in S. For all 0 ≤ k ≤ n,
${n \choose \lfloor {n/2}\rfloor }\geq {n \choose k}$
and, thus,
${s_{k} \over {n \choose \lfloor {n/2}\rfloor }}\leq {s_{k} \over {n \choose k}}.$
Since S is an antichain, we can sum over the above inequality from k = 0 to n and then apply the LYM inequality to obtain
$\sum _{k=0}^{n}{s_{k} \over {n \choose \lfloor {n/2}\rfloor }}\leq \sum _{k=0}^{n}{s_{k} \over {n \choose k}}\leq 1,$
which means
$|S|=\sum _{k=0}^{n}s_{k}\leq {n \choose \lfloor {n/2}\rfloor }.$
This completes the proof of part 1.
To have equality, all the inequalities in the preceding proof must be equalities. Since
${n \choose \lfloor {n/2}\rfloor }={n \choose k}$
if and only if $k=\lfloor {n/2}\rfloor $ or $\lceil {n/2}\rceil ,$ we conclude that equality implies that S consists only of sets of sizes $\lfloor {n/2}\rfloor $ or $\lceil {n/2}\rceil .$ For even n that concludes the proof of part 2.
For odd n there is more work to do, which we omit here because it is complicated. See Anderson (1987), pp. 3–4.
Generalizations
There are several generalizations of Sperner's theorem for subsets of ${\mathcal {P}}(E),$ the poset of all subsets of E.
No long chains
A chain is a subfamily $\{S_{0},S_{1},\dots ,S_{r}\}\subseteq {\mathcal {P}}(E)$ that is totally ordered, i.e., $S_{0}\subset S_{1}\subset \dots \subset S_{r}$ (possibly after renumbering). The chain has r + 1 members and length r. An r-chain-free family (also called an r-family) is a family of subsets of E that contains no chain of length r. Erdős (1945) proved that the largest size of an r-chain-free family is the sum of the r largest binomial coefficients ${\binom {n}{i}}$. The case r = 1 is Sperner's theorem.
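The Erdős bound is simple to compute. A short Python sketch (the function name is chosen here for illustration):

```python
from math import comb

def erdos_bound(n, r):
    """Sum of the r largest binomial coefficients C(n, i): the maximum
    size of an r-chain-free family of subsets of an n-set (Erdős 1945)."""
    return sum(sorted((comb(n, i) for i in range(n + 1)), reverse=True)[:r])

assert erdos_bound(4, 1) == comb(4, 2)   # r = 1 is Sperner's theorem: 6
assert erdos_bound(4, 2) == 6 + 4        # the two middle levels: 10
```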
p-compositions of a set
In the set ${\mathcal {P}}(E)^{p}$ of p-tuples of subsets of E, we say a p-tuple $(S_{1},\dots ,S_{p})$ is ≤ another one, $(T_{1},\dots ,T_{p}),$ if $S_{i}\subseteq T_{i}$ for each i = 1,2,...,p. We call $(S_{1},\dots ,S_{p})$ a p-composition of E if the sets $S_{1},\dots ,S_{p}$ form a partition of E. Meshalkin (1963) proved that the maximum size of an antichain of p-compositions is the largest p-multinomial coefficient ${\binom {n}{n_{1}\ n_{2}\ \dots \ n_{p}}},$ that is, the coefficient in which all ni are as nearly equal as possible (i.e., they differ by at most 1). Meshalkin proved this by proving a generalized LYM inequality.
The case p = 2 is Sperner's theorem, because then $S_{2}=E\setminus S_{1}$ and the assumptions reduce to the sets $S_{1}$ being a Sperner family.
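Meshalkin's bound is the central p-multinomial coefficient, with the parts as equal as possible. A Python sketch (helper names are illustrative):

```python
from math import comb, factorial

def multinomial(n, parts):
    """Multinomial coefficient n! / (n_1! * ... * n_p!)."""
    assert sum(parts) == n
    out = factorial(n)
    for p in parts:
        out //= factorial(p)
    return out

def meshalkin_bound(n, p):
    """Largest p-multinomial coefficient: the n_i differ by at most 1."""
    q, r = divmod(n, p)
    parts = [q + 1] * r + [q] * (p - r)
    return multinomial(n, parts)

assert meshalkin_bound(4, 2) == comb(4, 2)                 # p = 2 recovers Sperner
assert meshalkin_bound(6, 3) == multinomial(6, [2, 2, 2])  # = 90
```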
No long chains in p-compositions of a set
Beck & Zaslavsky (2002) combined the Erdős and Meshalkin theorems by adapting Meshalkin's proof of his generalized LYM inequality. They showed that the largest size of a family of p-compositions such that the sets in the i-th position of the p-tuples, ignoring duplications, are r-chain-free, for every $i=1,2,\dots ,p-1$ (but not necessarily for i = p), is not greater than the sum of the $r^{p-1}$ largest p-multinomial coefficients.
Projective geometry analog
In the finite projective geometry PG(d, Fq) of dimension d over a finite field of order q, let ${\mathcal {L}}(p,F_{q})$ be the family of all subspaces. When partially ordered by set inclusion, this family is a lattice. Rota & Harper (1971) proved that the largest size of an antichain in ${\mathcal {L}}(p,F_{q})$ is the largest Gaussian coefficient ${\begin{bmatrix}d+1\\k\end{bmatrix}};$ this is the projective-geometry analog, or q-analog, of Sperner's theorem.
They further proved that the largest size of an r-chain-free family in ${\mathcal {L}}(p,F_{q})$ is the sum of the r largest Gaussian coefficients. Their proof is by a projective analog of the LYM inequality.
No long chains in p-compositions of a projective space
Beck & Zaslavsky (2003) obtained a Meshalkin-like generalization of the Rota–Harper theorem. In PG(d, Fq), a Meshalkin sequence of length p is a sequence $(A_{1},\ldots ,A_{p})$ of projective subspaces such that no proper subspace of PG(d, Fq) contains them all and their dimensions sum to $d-p+1$. The theorem states that the size of a family of Meshalkin sequences of length p in PG(d, Fq), such that the subspaces appearing in position i of the sequences contain no chain of length r for each $i=1,2,\dots ,p-1,$ is at most the sum of the largest $r^{p-1}$ of the quantities
${\begin{bmatrix}d+1\\n_{1}\ n_{2}\ \dots \ n_{p}\end{bmatrix}}q^{s_{2}(n_{1},\ldots ,n_{p})},$
where ${\begin{bmatrix}d+1\\n_{1}\ n_{2}\ \dots \ n_{p}\end{bmatrix}}$ (in which we assume that $d+1=n_{1}+\cdots +n_{p}$) denotes the p-Gaussian coefficient
${\begin{bmatrix}d+1\\n_{1}\end{bmatrix}}{\begin{bmatrix}d+1-n_{1}\\n_{2}\end{bmatrix}}\cdots {\begin{bmatrix}d+1-(n_{1}+\cdots +n_{p-1})\\n_{p}\end{bmatrix}}$
and
$s_{2}(n_{1},\ldots ,n_{p}):=n_{1}n_{2}+n_{1}n_{3}+n_{2}n_{3}+n_{1}n_{4}+\cdots +n_{p-1}n_{p},$
the second elementary symmetric function of the numbers $n_{1},n_{2},\dots ,n_{p}.$
See also
• Dilworth's theorem
• Erdős–Ko–Rado theorem
References
• Anderson, Ian (1987), Combinatorics of Finite Sets, Oxford University Press.
• Beck, Matthias; Zaslavsky, Thomas (2002), "A shorter, simpler, stronger proof of the Meshalkin-Hochberg-Hirsch bounds on componentwise antichains", Journal of Combinatorial Theory, Series A, 100 (1): 196–199, arXiv:math/0112068, doi:10.1006/jcta.2002.3295, MR 1932078, S2CID 8136773.
• Beck, Matthias; Zaslavsky, Thomas (2003), "A Meshalkin theorem for projective geometries", Journal of Combinatorial Theory, Series A, 102 (2): 433–441, arXiv:math/0112069, doi:10.1016/S0097-3165(03)00049-9, MR 1979545, S2CID 992137.
• Engel, Konrad (1997), Sperner theory, Encyclopedia of Mathematics and its Applications, vol. 65, Cambridge: Cambridge University Press, p. x+417, doi:10.1017/CBO9780511574719, ISBN 0-521-45206-6, MR 1429390.
• Engel, K. (2001) [1994], "Sperner theorem", Encyclopedia of Mathematics, EMS Press
• Erdős, P. (1945), "On a lemma of Littlewood and Offord" (PDF), Bulletin of the American Mathematical Society, 51 (12): 898–902, doi:10.1090/S0002-9904-1945-08454-7, MR 0014608
• Lubell, D. (1966), "A short proof of Sperner's lemma", Journal of Combinatorial Theory, 1 (2): 299, doi:10.1016/S0021-9800(66)80035-2, MR 0194348.
• Meshalkin, L.D. (1963), "Generalization of Sperner's theorem on the number of subsets of a finite set", Theory of Probability and Its Applications (in Russian), 8 (2): 203–204, doi:10.1137/1108023.
• Rota, Gian-Carlo; Harper, L. H. (1971), "Matching theory, an introduction", Advances in Probability and Related Topics, Vol. 1, New York: Dekker, pp. 169–215, MR 0282855.
• Sperner, Emanuel (1928), "Ein Satz über Untermengen einer endlichen Menge", Mathematische Zeitschrift (in German), 27 (1): 544–548, doi:10.1007/BF01171114, hdl:10338.dmlcz/127405, JFM 54.0090.06, S2CID 123451223.
External links
• Sperner's Theorem at cut-the-knot
• Sperner's theorem on the polymath1 wiki
Sphenic number
In number theory, a sphenic number (from Greek: σφήνα, 'wedge') is a positive integer that is the product of three distinct prime numbers. Because there are infinitely many prime numbers, there are also infinitely many sphenic numbers.
Definition
A sphenic number is a product pqr where p, q, and r are three distinct prime numbers. In other words, the sphenic numbers are the square-free 3-almost primes.
Examples
The smallest sphenic number is 30 = 2 × 3 × 5, the product of the smallest three primes. The first few sphenic numbers are
30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154, 165, ... (sequence A007304 in the OEIS)
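Sphenic numbers are straightforward to enumerate by trial-division factorization. A Python sketch:

```python
def factorization(n):
    """Prime factorization by trial division: dict prime -> exponent."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_sphenic(n):
    # exactly three distinct primes, each appearing once
    f = factorization(n)
    return len(f) == 3 and all(e == 1 for e in f.values())

sphenic = [n for n in range(2, 170) if is_sphenic(n)]
print(sphenic)   # 30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154, 165
```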
As of 2020 the largest known sphenic number is
$\left(2^{82,589,933}-1\right)\times \left(2^{77,232,917}-1\right)\times \left(2^{74,207,281}-1\right).$
It is the product of the three largest known primes.
Divisors
All sphenic numbers have exactly eight divisors. If we express the sphenic number as $n=p\cdot q\cdot r$, where p, q, and r are distinct primes, then the set of divisors of n will be:
$\left\{1,\ p,\ q,\ r,\ pq,\ pr,\ qr,\ n\right\}.$
The converse does not hold. For example, 24 is not a sphenic number, but it has exactly eight divisors.
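Both the divisor count and the failure of the converse can be checked directly. A Python sketch:

```python
def divisors(n):
    # naive divisor enumeration, fine for small n
    return sorted(d for d in range(1, n + 1) if n % d == 0)

# 30 = 2*3*5 has exactly the eight divisors {1, p, q, r, pq, pr, qr, n}
assert divisors(30) == [1, 2, 3, 5, 6, 10, 15, 30]
# the converse fails: 24 = 2^3 * 3 also has eight divisors but is not sphenic
assert len(divisors(24)) == 8
```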
Properties
All sphenic numbers are by definition squarefree, because the prime factors must be distinct.
The Möbius function of any sphenic number is −1.
The cyclotomic polynomials $\Phi _{n}(x)$, taken over all sphenic numbers n, may contain arbitrarily large coefficients[1] (for n a product of two primes the coefficients are $\pm 1$ or 0).
No proper multiple of a sphenic number is itself sphenic: multiplying by any integer greater than 1 either introduces a fourth distinct prime factor or repeats an existing one, and in either case the product is no longer a product of exactly three distinct primes.
Consecutive sphenic numbers
The first case of two consecutive sphenic integers is 230 = 2×5×23 and 231 = 3×7×11. The first case of three is 1309 = 7×11×17, 1310 = 2×5×131, and 1311 = 3×19×23. There is no case of more than three, because every fourth consecutive positive integer is divisible by 4 = 2×2 and therefore not squarefree.
The numbers 2013 (3×11×61), 2014 (2×19×53), and 2015 (5×13×31) are all sphenic. The next three consecutive sphenic years will be 2665 (5×13×41), 2666 (2×31×43) and 2667 (3×7×127) (sequence A165936 in the OEIS).
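Runs of consecutive sphenic numbers like these can be found by brute force; a Python sketch using trial division (my own search approach, adequate only for small ranges):

```python
def is_sphenic(n):
    """True if n is a product of three distinct primes (trial division)."""
    distinct, total, d = 0, 0, 2
    while d * d <= n:
        if n % d == 0:
            distinct += 1
            while n % d == 0:
                total += 1
                n //= d
        d += 1
    if n > 1:
        distinct += 1
        total += 1
    return distinct == 3 and total == 3

# first pair of consecutive sphenic integers: 230, 231
first_pair = next(n for n in range(2, 1000)
                  if is_sphenic(n) and is_sphenic(n + 1))

# first run of three: 1309, 1310, 1311
first_triple = next(n for n in range(2, 2000)
                    if all(is_sphenic(n + i) for i in range(3)))
```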
See also
• Semiprimes, products of two prime numbers.
• Almost prime
References
1. Emma Lehmer, "On the magnitude of the coefficients of the cyclotomic polynomial", Bulletin of the American Mathematical Society 42 (1936), no. 6, pp. 389–392.
Divisibility-based sets of integers
Overview
• Integer factorization
• Divisor
• Unitary divisor
• Divisor function
• Prime factor
• Fundamental theorem of arithmetic
Factorization forms
• Prime
• Composite
• Semiprime
• Pronic
• Sphenic
• Square-free
• Powerful
• Perfect power
• Achilles
• Smooth
• Regular
• Rough
• Unusual
Constrained divisor sums
• Perfect
• Almost perfect
• Quasiperfect
• Multiply perfect
• Hemiperfect
• Hyperperfect
• Superperfect
• Unitary perfect
• Semiperfect
• Practical
• Erdős–Nicolas
With many divisors
• Abundant
• Primitive abundant
• Highly abundant
• Superabundant
• Colossally abundant
• Highly composite
• Superior highly composite
• Weird
Aliquot sequence-related
• Untouchable
• Amicable (Triple)
• Sociable
• Betrothed
Base-dependent
• Equidigital
• Extravagant
• Frugal
• Harshad
• Polydivisible
• Smith
Other sets
• Arithmetic
• Deficient
• Friendly
• Solitary
• Sublime
• Harmonic divisor
• Descartes
• Refactorable
• Superperfect
Classes of natural numbers
Powers and related numbers
• Achilles
• Power of 2
• Power of 3
• Power of 10
• Square
• Cube
• Fourth power
• Fifth power
• Sixth power
• Seventh power
• Eighth power
• Perfect power
• Powerful
• Prime power
Of the form a × 2b ± 1
• Cullen
• Double Mersenne
• Fermat
• Mersenne
• Proth
• Thabit
• Woodall
Other polynomial numbers
• Hilbert
• Idoneal
• Leyland
• Loeschian
• Lucky numbers of Euler
Recursively defined numbers
• Fibonacci
• Jacobsthal
• Leonardo
• Lucas
• Padovan
• Pell
• Perrin
Possessing a specific set of other numbers
• Amenable
• Congruent
• Knödel
• Riesel
• Sierpiński
Expressible via specific sums
• Nonhypotenuse
• Polite
• Practical
• Primary pseudoperfect
• Ulam
• Wolstenholme
Figurate numbers
2-dimensional
centered
• Centered triangular
• Centered square
• Centered pentagonal
• Centered hexagonal
• Centered heptagonal
• Centered octagonal
• Centered nonagonal
• Centered decagonal
• Star
non-centered
• Triangular
• Square
• Square triangular
• Pentagonal
• Hexagonal
• Heptagonal
• Octagonal
• Nonagonal
• Decagonal
• Dodecagonal
3-dimensional
centered
• Centered tetrahedral
• Centered cube
• Centered octahedral
• Centered dodecahedral
• Centered icosahedral
non-centered
• Tetrahedral
• Cubic
• Octahedral
• Dodecahedral
• Icosahedral
• Stella octangula
pyramidal
• Square pyramidal
4-dimensional
non-centered
• Pentatope
• Squared triangular
• Tesseractic
Combinatorial numbers
• Bell
• Cake
• Catalan
• Dedekind
• Delannoy
• Euler
• Eulerian
• Fuss–Catalan
• Lah
• Lazy caterer's sequence
• Lobb
• Motzkin
• Narayana
• Ordered Bell
• Schröder
• Schröder–Hipparchus
• Stirling first
• Stirling second
• Telephone number
• Wedderburn–Etherington
Primes
• Wieferich
• Wall–Sun–Sun
• Wolstenholme prime
• Wilson
Pseudoprimes
• Carmichael number
• Catalan pseudoprime
• Elliptic pseudoprime
• Euler pseudoprime
• Euler–Jacobi pseudoprime
• Fermat pseudoprime
• Frobenius pseudoprime
• Lucas pseudoprime
• Lucas–Carmichael number
• Somer–Lucas pseudoprime
• Strong pseudoprime
Arithmetic functions and dynamics
Divisor functions
• Abundant
• Almost perfect
• Arithmetic
• Betrothed
• Colossally abundant
• Deficient
• Descartes
• Hemiperfect
• Highly abundant
• Highly composite
• Hyperperfect
• Multiply perfect
• Perfect
• Practical
• Primitive abundant
• Quasiperfect
• Refactorable
• Semiperfect
• Sublime
• Superabundant
• Superior highly composite
• Superperfect
Prime omega functions
• Almost prime
• Semiprime
Euler's totient function
• Highly cototient
• Highly totient
• Noncototient
• Nontotient
• Perfect totient
• Sparsely totient
Aliquot sequences
• Amicable
• Perfect
• Sociable
• Untouchable
Primorial
• Euclid
• Fortunate
Other prime factor or divisor related numbers
• Blum
• Cyclic
• Erdős–Nicolas
• Erdős–Woods
• Friendly
• Giuga
• Harmonic divisor
• Jordan–Pólya
• Lucas–Carmichael
• Pronic
• Regular
• Rough
• Smooth
• Sphenic
• Størmer
• Super-Poulet
• Zeisel
Numeral system-dependent numbers
Arithmetic functions
and dynamics
• Persistence
• Additive
• Multiplicative
Digit sum
• Digit sum
• Digital root
• Self
• Sum-product
Digit product
• Multiplicative digital root
• Sum-product
Coding-related
• Meertens
Other
• Dudeney
• Factorion
• Kaprekar
• Kaprekar's constant
• Keith
• Lychrel
• Narcissistic
• Perfect digit-to-digit invariant
• Perfect digital invariant
• Happy
P-adic numbers-related
• Automorphic
• Trimorphic
Digit-composition related
• Palindromic
• Pandigital
• Repdigit
• Repunit
• Self-descriptive
• Smarandache–Wellin
• Undulating
Digit-permutation related
• Cyclic
• Digit-reassembly
• Parasitic
• Primeval
• Transposable
Divisor-related
• Equidigital
• Extravagant
• Frugal
• Harshad
• Polydivisible
• Smith
• Vampire
Other
• Friedman
Binary numbers
• Evil
• Odious
• Pernicious
Generated via a sieve
• Lucky
• Prime
Sorting related
• Pancake number
• Sorting number
Natural language related
• Aronson's sequence
• Ban
Graphemics related
• Strobogrammatic
Sphenocorona
In geometry, the sphenocorona is one of the Johnson solids (J86). It is one of the elementary Johnson solids that do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids.
Type: Johnson (J85 – J86 – J87)
Faces: 12 triangles (2×2 + 2×4), 2 squares
Edges: 22
Vertices: 10
Vertex configuration: 4(3^3.4), 2(3^2.4^2), 2×2(3^5)
Symmetry group: C2v
Dual polyhedron: −
Properties: convex
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.[1]
Johnson uses the prefix spheno- to refer to a wedge-like complex formed by two adjacent lunes, a lune being a square with equilateral triangles attached on opposite sides. Likewise, the suffix -corona refers to a crownlike complex of 8 equilateral triangles. Joining both complexes together results in the sphenocorona.[2]
Cartesian coordinates
Let k ≈ 0.85273 be the smallest positive root of the quartic polynomial
$60x^{4}-48x^{3}-100x^{2}+56x+23.$
Then, Cartesian coordinates of a sphenocorona with edge length 2 are given by the union of the orbits of the points
$\left(0,1,2{\sqrt {1-k^{2}}}\right),\,(2k,1,0),\left(0,1+{\frac {\sqrt {3-4k^{2}}}{\sqrt {1-k^{2}}}},{\frac {1-2k^{2}}{\sqrt {1-k^{2}}}}\right),\,\left(1,0,-{\sqrt {2+4k-4k^{2}}}\right)$
under the action of the group generated by reflections about the xz-plane and the yz-plane.[3]
One may then calculate the surface area of a sphenocorona of edge length a as
$A=\left(2+3{\sqrt {3}}\right)a^{2}\approx 7.19615a^{2},$[4]
and its volume as
$\left({\frac {1}{2}}{\sqrt {1+3{\sqrt {\frac {3}{2}}}+{\sqrt {13+3{\sqrt {6}}}}}}\right)a^{3}\approx 1.51535a^{3}.$[5]
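The constant k and the two numerical coefficients can be checked with a short computation; a Python sketch using plain bisection (the bracket [0.8, 0.9] is an assumption verified in the code, chosen because the quartic changes sign there):

```python
import math

def p(x):
    """The quartic whose smallest positive root gives k."""
    return 60*x**4 - 48*x**3 - 100*x**2 + 56*x + 23

# p changes sign on [0.8, 0.9]; bisect that bracket for the root near 0.85273
lo, hi = 0.8, 0.9
assert p(lo) * p(hi) < 0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if p(lo) * p(mid) > 0 else (lo, mid)
k = (lo + hi) / 2                      # k ≈ 0.85273

area_coeff = 2 + 3 * math.sqrt(3)      # ≈ 7.19615
vol_coeff = 0.5 * math.sqrt(1 + 3 * math.sqrt(3 / 2) + math.sqrt(13 + 3 * math.sqrt(6)))
# vol_coeff ≈ 1.51535
```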
Variations
The sphenocorona is also the vertex figure of the isogonal n-gonal double antiprismoid where n is an odd number greater than one, including the grand antiprism with pairs of trapezoids rather than square faces.
See also
• Augmented sphenocorona
References
1. Johnson, Norman W. (1966), "Convex polyhedra with regular faces", Canadian Journal of Mathematics, 18: 169–200, doi:10.4153/cjm-1966-021-8, MR 0185507, Zbl 0132.14603.
2. Johnson, Norman W. (1966), "Convex polyhedra with regular faces", Canadian Journal of Mathematics, 18: 169–200, doi:10.4153/cjm-1966-021-8, MR 0185507, S2CID 122006114, Zbl 0132.14603.
3. Timofeenko, A. V. (2009). "The non-Platonic and non-Archimedean noncomposite polyhedra". Journal of Mathematical Science. 162 (5): 718. doi:10.1007/s10958-009-9655-0. S2CID 120114341.
4. Wolfram Research, Inc. (2020). "Wolfram|Alpha Knowledgebase". Champaign, IL. PolyhedronData[{"Johnson", 86}, "SurfaceArea"] {{cite journal}}: Cite journal requires |journal= (help)
5. Wolfram Research, Inc. (2020). "Wolfram|Alpha Knowledgebase". Champaign, IL. PolyhedronData[{"Johnson", 86}, "Volume"] {{cite journal}}: Cite journal requires |journal= (help)
External links
• Eric W. Weisstein, Sphenocorona (Johnson solid) at MathWorld.
Johnson solids
Pyramids, cupolae and rotundae
• square pyramid
• pentagonal pyramid
• triangular cupola
• square cupola
• pentagonal cupola
• pentagonal rotunda
Modified pyramids
• elongated triangular pyramid
• elongated square pyramid
• elongated pentagonal pyramid
• gyroelongated square pyramid
• gyroelongated pentagonal pyramid
• triangular bipyramid
• pentagonal bipyramid
• elongated triangular bipyramid
• elongated square bipyramid
• elongated pentagonal bipyramid
• gyroelongated square bipyramid
Modified cupolae and rotundae
• elongated triangular cupola
• elongated square cupola
• elongated pentagonal cupola
• elongated pentagonal rotunda
• gyroelongated triangular cupola
• gyroelongated square cupola
• gyroelongated pentagonal cupola
• gyroelongated pentagonal rotunda
• gyrobifastigium
• triangular orthobicupola
• square orthobicupola
• square gyrobicupola
• pentagonal orthobicupola
• pentagonal gyrobicupola
• pentagonal orthocupolarotunda
• pentagonal gyrocupolarotunda
• pentagonal orthobirotunda
• elongated triangular orthobicupola
• elongated triangular gyrobicupola
• elongated square gyrobicupola
• elongated pentagonal orthobicupola
• elongated pentagonal gyrobicupola
• elongated pentagonal orthocupolarotunda
• elongated pentagonal gyrocupolarotunda
• elongated pentagonal orthobirotunda
• elongated pentagonal gyrobirotunda
• gyroelongated triangular bicupola
• gyroelongated square bicupola
• gyroelongated pentagonal bicupola
• gyroelongated pentagonal cupolarotunda
• gyroelongated pentagonal birotunda
Augmented prisms
• augmented triangular prism
• biaugmented triangular prism
• triaugmented triangular prism
• augmented pentagonal prism
• biaugmented pentagonal prism
• augmented hexagonal prism
• parabiaugmented hexagonal prism
• metabiaugmented hexagonal prism
• triaugmented hexagonal prism
Modified Platonic solids
• augmented dodecahedron
• parabiaugmented dodecahedron
• metabiaugmented dodecahedron
• triaugmented dodecahedron
• metabidiminished icosahedron
• tridiminished icosahedron
• augmented tridiminished icosahedron
Modified Archimedean solids
• augmented truncated tetrahedron
• augmented truncated cube
• biaugmented truncated cube
• augmented truncated dodecahedron
• parabiaugmented truncated dodecahedron
• metabiaugmented truncated dodecahedron
• triaugmented truncated dodecahedron
• gyrate rhombicosidodecahedron
• parabigyrate rhombicosidodecahedron
• metabigyrate rhombicosidodecahedron
• trigyrate rhombicosidodecahedron
• diminished rhombicosidodecahedron
• paragyrate diminished rhombicosidodecahedron
• metagyrate diminished rhombicosidodecahedron
• bigyrate diminished rhombicosidodecahedron
• parabidiminished rhombicosidodecahedron
• metabidiminished rhombicosidodecahedron
• gyrate bidiminished rhombicosidodecahedron
• tridiminished rhombicosidodecahedron
Elementary solids
• snub disphenoid
• snub square antiprism
• sphenocorona
• augmented sphenocorona
• sphenomegacorona
• hebesphenomegacorona
• disphenocingulum
• bilunabirotunda
• triangular hebesphenorotunda
(See also List of Johnson solids, a sortable table)
Sphenomegacorona
In geometry, the sphenomegacorona is one of the Johnson solids (J88). It is one of the elementary Johnson solids that do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids.
Type: Johnson (J87 – J88 – J89)
Faces: 16 triangles, 2 squares
Edges: 28
Vertices: 12
Vertex configuration: 2(3^4), 2(3^2.4^2), 2×2(3^5), 4(3^4.4)
Symmetry group: C2v
Dual polyhedron: −
Properties: convex
A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.[1]
Johnson uses the prefix spheno- to refer to a wedge-like complex formed by two adjacent lunes, a lune being a square with equilateral triangles attached on opposite sides. Likewise, the suffix -megacorona refers to a crownlike complex of 12 triangles, contrasted with the smaller triangular complex that makes the sphenocorona. Joining both complexes together results in the sphenomegacorona.[1]
Cartesian coordinates
Let k ≈ 0.59463 be the smallest positive root of the polynomial
${\begin{aligned}&1680x^{16}-4800x^{15}-3712x^{14}+17216x^{13}+1568x^{12}-24576x^{11}+2464x^{10}+17248x^{9}\\&\quad {}-3384x^{8}-5584x^{7}+2000x^{6}+240x^{5}-776x^{4}+304x^{3}+200x^{2}-56x-23.\end{aligned}}$
Then, Cartesian coordinates of a sphenomegacorona with edge length 2 are given by the union of the orbits of the points
${\begin{aligned}&\left(0,1,2{\sqrt {1-k^{2}}}\right),\,(2k,1,0),\,\left(0,{\frac {\sqrt {3-4k^{2}}}{\sqrt {1-k^{2}}}}+1,{\frac {1-2k^{2}}{\sqrt {1-k^{2}}}}\right),\\&\left(1,0,-{\sqrt {2+4k-4k^{2}}}\right),\,\left(0,{\frac {{\sqrt {3-4k^{2}}}\left(2k^{2}-1\right)}{\left(k^{2}-1\right){\sqrt {1-k^{2}}}}}+1,{\frac {2k^{4}-1}{\left(1-k^{2}\right)^{\frac {3}{2}}}}\right)\end{aligned}}$
under the action of the group generated by reflections about the xz-plane and the yz-plane.[2]
We may then calculate the surface area of a sphenomegacorona of edge length a as
$A=\left(2+4{\sqrt {3}}\right)a^{2}\approx 8.92820a^{2},$[3]
and its volume as
$V=\xi a^{3}\approx 1.94811a^{3},$
where the decimal expansion of ξ is given by A334114.[4]
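As for the sphenocorona, the root k and the surface-area coefficient can be checked numerically; a Python sketch using Horner evaluation and bisection (the bracket [0.5, 0.65] is an assumption verified in the code, chosen because the polynomial changes sign there):

```python
import math

# coefficients of the degree-16 polynomial, highest degree first
COEFFS = [1680, -4800, -3712, 17216, 1568, -24576, 2464, 17248,
          -3384, -5584, 2000, 240, -776, 304, 200, -56, -23]

def p(x):
    acc = 0.0
    for c in COEFFS:        # Horner's scheme
        acc = acc * x + c
    return acc

# p changes sign on [0.5, 0.65]; bisect that bracket for the root near 0.59463
lo, hi = 0.5, 0.65
assert p(lo) * p(hi) < 0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if p(lo) * p(mid) > 0 else (lo, mid)
k = (lo + hi) / 2                  # k ≈ 0.59463

area_coeff = 2 + 4 * math.sqrt(3)  # ≈ 8.92820
```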
References
1. Johnson, Norman W. (1966), "Convex polyhedra with regular faces", Canadian Journal of Mathematics, 18: 169–200, doi:10.4153/cjm-1966-021-8, MR 0185507, S2CID 122006114, Zbl 0132.14603.
2. Timofeenko, A. V. (2009). "The non-Platonic and non-Archimedean noncomposite polyhedra". Journal of Mathematical Science. 162 (5): 720. doi:10.1007/s10958-009-9655-0. S2CID 120114341.
3. Wolfram Research, Inc. (2020). "Wolfram|Alpha Knowledgebase". Champaign, IL. PolyhedronData[{"Johnson", 88}, "SurfaceArea"] {{cite journal}}: Cite journal requires |journal= (help)
4. OEIS Foundation Inc. (2020), The On-Line Encyclopedia of Integer Sequences, A334114.
External links
• Eric W. Weisstein, Sphenomegacorona (Johnson solid) at MathWorld.
Spherical circle
In spherical geometry, a spherical circle (often shortened to circle) is the locus of points on a sphere at constant spherical distance (the spherical radius) from a given point on the sphere (the pole or spherical center). It is a curve of constant geodesic curvature relative to the sphere, analogous to a line or circle in the Euclidean plane; the curves analogous to straight lines are called great circles, and the curves analogous to planar circles are called small circles or lesser circles.
Fundamental concepts
Intrinsic characterization
A spherical circle with zero geodesic curvature is called a great circle, and is a geodesic analogous to a straight line in the plane. A great circle separates the sphere into two equal hemispheres, each with the great circle as its boundary. If a great circle passes through a point on the sphere, it also passes through the antipodal point (the unique furthest other point on the sphere). For any pair of distinct non-antipodal points, a unique great circle passes through both. Any two points on a great circle separate it into two arcs analogous to line segments in the plane; the shorter is called the minor arc and is the shortest path between the points, and the longer is called the major arc.
A circle with non-zero geodesic curvature is called a small circle, and is analogous to a circle in the plane. A small circle separates the sphere into two spherical disks or spherical caps, each with the circle as its boundary. For any triple of distinct non-antipodal points a unique small circle passes through all three. Any two points on the small circle separate it into two arcs, analogous to circular arcs in the plane.
Every circle has two antipodal poles (or centers) intrinsic to the sphere. A great circle is equidistant from its two poles, while a small circle is closer to one pole than to the other. Concentric circles are sometimes called parallels, because each lies at a constant distance from the others, and in particular from their common great circle; in this sense they are analogous to parallel lines in the plane.
Extrinsic characterization
If the sphere is isometrically embedded in Euclidean space, the sphere's intersection with a plane is a circle, which can be interpreted extrinsically to the sphere as a Euclidean circle: a locus of points in the plane at a constant Euclidean distance (the extrinsic radius) from a point in the plane (the extrinsic center). A great circle lies on a plane passing through the center of the sphere, so its extrinsic radius is equal to the radius of the sphere itself, and its extrinsic center is the sphere's center. A small circle lies on a plane not passing through the sphere's center, so its extrinsic radius is smaller than that of the sphere and its extrinsic center is an arbitrary point in the interior of the sphere. Parallel planes cut the sphere into parallel (concentric) small circles; the pair of parallel planes tangent to the sphere are tangent at the poles of these circles, and the diameter through these poles, passing through the sphere's center and perpendicular to the parallel planes, is called the axis of the parallel circles.
The sphere's intersection with a second sphere is also a circle, and the sphere's intersection with a concentric right circular cylinder or right circular cone is a pair of antipodal circles.
Applications
Geodesy
In the geographic coordinate system on a globe, the parallels of latitude are small circles, with the Equator the only great circle. By contrast, all meridians of longitude, paired with their opposite meridian in the other hemisphere, form great circles.
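The extrinsic picture gives the Euclidean radius of each parallel directly: the plane of the parallel at latitude φ sits at distance R sin φ from the sphere's center, so the circle itself has extrinsic radius R cos φ. A Python sketch (the Earth radius value is an assumed round figure for illustration):

```python
import math

R = 6371.0  # assumed mean Earth radius in km; an illustrative round value

def parallel_radius(lat_deg):
    """Extrinsic (Euclidean) radius of the parallel at the given latitude."""
    return R * math.cos(math.radians(lat_deg))

# the Equator (latitude 0) is the only parallel that is a great circle:
# its extrinsic radius equals the radius of the sphere itself
equator = parallel_radius(0.0)   # = R
arctic = parallel_radius(66.5)   # a small circle, radius strictly less than R
```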
Inversive geometry
In geometry, inversive geometry is the study of inversion, a transformation of the Euclidean plane that maps circles or lines to other circles or lines and that preserves the angles between crossing curves. Many difficult problems in geometry become much more tractable when an inversion is applied. Inversion seems to have been discovered by a number of people contemporaneously, including Steiner (1824), Quetelet (1825), Bellavitis (1836), Stubbs and Ingram (1842-3) and Kelvin (1845).[1]
For other uses, see Point reflection.
The concept of inversion can be generalized to higher-dimensional spaces.
Inversion in a circle
Inverse of a point
To invert a number in arithmetic usually means to take its reciprocal. A closely related idea in geometry is that of "inverting" a point. In the plane, the inverse of a point P with respect to a reference circle (Ø) with center O and radius r is a point P', lying on the ray from O through P such that
$OP\cdot OP^{\prime }=r^{2}.$
This is called circle inversion or plane inversion. The inversion taking any point P (other than O) to its image P' also takes P' back to P, so the result of applying the same inversion twice is the identity transformation on all the points of the plane other than O (self-inversion).[2][3] To make inversion an involution it is necessary to introduce a point at infinity, a single point placed on all the lines, and extend the inversion, by definition, to interchange the center O and this point at infinity.
It follows from the definition that the inverse of any point inside the reference circle must lie outside it, and vice versa, with the center and the point at infinity exchanging positions, while any point on the circle itself is unaffected (invariant under inversion). In summary: the nearer a point is to the center, the farther away its image lies, and vice versa.
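The defining relation OP · OP' = r² translates directly into coordinates: P' = O + r²(P − O)/|P − O|². A minimal Python sketch illustrating self-inversion (the function name is illustrative):

```python
def invert_point(p, center, r):
    """Inverse of p with respect to the circle of given center and radius r.

    The image lies on the ray from the center through p, with OP * OP' = r^2;
    p must differ from the center.
    """
    dx, dy = p[0] - center[0], p[1] - center[1]
    scale = r * r / (dx * dx + dy * dy)
    return (center[0] + scale * dx, center[1] + scale * dy)

O, r = (0.0, 0.0), 2.0
P = (1.0, 1.0)                  # inside the circle, since |OP| = sqrt(2) < 2
P1 = invert_point(P, O, r)      # image lies outside the circle
P2 = invert_point(P1, O, r)     # inverting twice returns the original point
```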
Compass and straightedge construction
Point outside circle
To construct the inverse P' of a point P outside a circle Ø:
• Draw the segment from O (the center of circle Ø) to P.
• Let M be the midpoint of OP.
• Draw the circle c with center M passing through P.
• Let N and N' be the points where Ø and c intersect.
• Draw segment NN'.
• P' is where OP and NN' intersect.
Point inside circle
To construct the inverse P of a point P' inside a circle Ø:
• Draw ray r from O (the center of circle Ø) through P'.
• Draw line s through P' perpendicular to r.
• Let N be one of the points where Ø and s intersect.
• Draw the segment ON.
• Draw line t through N perpendicular to ON.
• P is where ray r and line t intersect.
Dutta's construction
There is a construction of the inverse point to A with respect to a circle P that is independent of whether A is inside or outside P.[4]
Consider a circle P with center O and a point A which may lie inside or outside the circle P.
• Take the intersection point C of the ray OA with the circle P.
• Connect the point C with an arbitrary point B on the circle P (different from C)
• Let h be the reflection of ray BA in line BC. Then h cuts ray OC in a point A'. A' is the inverse point of A with respect to circle P.[4]: § 3.2
Properties
• The inverse, with respect to a reference circle, of a circle passing through its center O is a line not passing through O, and vice versa.
• The inverse of a circle not passing through O is another circle not passing through O.
• Inversion does not, in general, map the center of a circle to the center of its image.
The inversion of a set of points in the plane with respect to a circle is the set of inverses of these points. The following properties make circle inversion useful.
• A circle that passes through the center O of the reference circle inverts to a line not passing through O, but parallel to the tangent to the original circle at O, and vice versa; whereas a line passing through O is inverted into itself (but not pointwise invariant).[5]
• A circle not passing through O inverts to a circle not passing through O. If the circle meets the reference circle, these invariant points of intersection are also on the inverse circle. A circle (or line) is unchanged by inversion if and only if it is orthogonal to the reference circle at the points of intersection.[6]
Additional properties include:
• If a circle q passes through two distinct points A and A' which are inverses with respect to a circle k, then the circles k and q are orthogonal.
• If the circles k and q are orthogonal, then a straight line passing through the center O of k and intersecting q, does so at inverse points with respect to k.
• Given a triangle OAB in which O is the center of a circle k, and points A' and B' inverses of A and B with respect to k, then
$\angle OAB=\angle OB'A'\ {\text{ and }}\ \angle OBA=\angle OA'B'.$
• The points of intersection of two circles p and q orthogonal to a circle k, are inverses with respect to k.
• If M and M' are inverse points with respect to a circle k on two curves m and m', also inverses with respect to k, then the tangents to m and m' at the points M and M' are either perpendicular to the straight line MM' or form with this line an isosceles triangle with base MM'.
• Inversion leaves the measure of angles unaltered, but reverses the orientation of oriented angles.[7]
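The line–circle correspondence above can be verified numerically. Inverting the vertical line x = 2 (an illustrative choice, not from the text) in the unit circle should give a circle through O; for this particular line the image is the circle with center (1/4, 0) and radius 1/4. A Python sketch:

```python
import math

def invert(p):
    """Inversion in the unit circle centered at the origin."""
    x, y = p
    d2 = x * x + y * y
    return (x / d2, y / d2)

# predicted image of the line x = 2: circle through O with center (1/4, 0)
center, radius = (0.25, 0.0), 0.25
for t in [-10, -3, -1, 0, 1, 3, 10]:
    q = invert((2.0, t))            # image of a point of the line x = 2
    dist = math.hypot(q[0] - center[0], q[1] - center[1])
    assert abs(dist - radius) < 1e-12
```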
Examples in two dimensions
• Inversion of a line is a circle containing the center of inversion; or it is the line itself if it contains the center
• Inversion of a circle is another circle; or it is a line if the original circle contains the center
• Inversion of a parabola, with respect to its focus, is a cardioid
• Inversion of a rectangular hyperbola, with respect to its center, is a lemniscate of Bernoulli
Application
For a circle not passing through the center of inversion, the center of the circle being inverted and the center of its image under inversion are collinear with the center of the reference circle. This fact can be used to prove that the Euler line of the intouch triangle of a triangle coincides with its OI line. The proof runs roughly as follows:
Invert with respect to the incircle of triangle ABC. The medial triangle of the intouch triangle is then inverted into triangle ABC, so the circumcenter of the medial triangle (that is, the nine-point center of the intouch triangle), the incenter, and the circumcenter of triangle ABC are collinear.
Any two non-intersecting circles may be inverted into concentric circles. Then the inversive distance (usually denoted δ) is defined as the natural logarithm of the ratio of the radii of the two concentric circles.
In addition, any two non-intersecting circles may be inverted into congruent circles, using a circle of inversion centered at a point on the circle of antisimilitude.
The Peaucellier–Lipkin linkage is a mechanical implementation of inversion in a circle. It provides an exact solution to the important problem of converting between linear and circular motion.
Pole and polar
Main article: pole and polar
If point R is the inverse of point P, then the line through either of the two points that is perpendicular to the line PR is the polar of the other point (the pole).
Poles and polars have several useful properties:
• If a point P lies on a line l, then the pole L of the line l lies on the polar p of point P.
• If a point P moves along a line l, its polar p rotates about the pole L of the line l.
• If two tangent lines can be drawn from a pole to the circle, then its polar passes through both tangent points.
• If a point lies on the circle, its polar is the tangent through this point.
• If a point P lies on its own polar line, then P is on the circle.
• Each line has exactly one pole.
In three dimensions
Circle inversion is generalizable to sphere inversion in three dimensions. The inversion of a point P in 3D with respect to a reference sphere centered at a point O with radius R is a point P ' on the ray with direction OP such that $OP\cdot OP^{\prime }=||OP||\cdot ||OP^{\prime }||=R^{2}$. As with the 2D version, a sphere inverts to a sphere, except that if a sphere passes through the center O of the reference sphere, then it inverts to a plane not passing through O. A plane not passing through O inverts to a sphere touching at O, while a plane through O inverts to itself. A circle, that is, the intersection of a sphere with a secant plane, inverts into a circle, except that if the circle passes through O it inverts into a line. This reduces to the 2D case when the secant plane passes through O, but is a true 3D phenomenon if the secant plane does not pass through O.
Sphere
The simplest surface (besides a plane) is the sphere. The first picture shows a nontrivial inversion (the center of the sphere is not the center of inversion) of a sphere together with two orthogonal intersecting pencils of circles.
Cylinder, cone, torus
The inversion of a cylinder, cone, or torus results in a Dupin cyclide.
Spheroid
A spheroid is a surface of revolution and contains a pencil of circles which is mapped onto a pencil of circles (see picture). The inverse image of a spheroid is a surface of degree 4.
Hyperboloid of one sheet
A hyperboloid of one sheet, which is a surface of revolution, contains a pencil of circles which is mapped onto a pencil of circles. A hyperboloid of one sheet additionally contains two pencils of lines, which are mapped onto pencils of circles. The picture shows one such line (blue) and its inversion.
Stereographic projection as the inversion of a sphere
A stereographic projection usually projects a sphere from a point $N$ (north pole) of the sphere onto the tangent plane at the opposite point $S$ (south pole). This mapping can be performed by an inversion of the sphere onto its tangent plane. If the sphere (to be projected) has the equation $x^{2}+y^{2}+z^{2}=-z$ (equivalently written $x^{2}+y^{2}+(z+{\tfrac {1}{2}})^{2}={\tfrac {1}{4}}$; center $(0,0,-0.5)$, radius $0.5$, green in the picture), then it will be mapped by the inversion at the unit sphere (red) onto the tangent plane at point $S=(0,0,-1)$. The lines through the center of inversion (point $N$) are mapped onto themselves. They are the projection lines of the stereographic projection.
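This can be sketched numerically: points of the green sphere (other than N) should land on the plane z = −1 under inversion in the unit sphere. The sampling code below is our own illustration:

```python
import numpy as np

center, r = np.array([0.0, 0.0, -0.5]), 0.5   # the green sphere x^2+y^2+z^2 = -z
rng = np.random.default_rng(1)

for _ in range(10):
    u = rng.normal(size=3); u /= np.linalg.norm(u)
    p = center + r * u                # a point of the sphere to be projected
    if np.dot(p, p) < 1e-12:
        continue                      # N = (0,0,0) itself has no finite image
    q = p / np.dot(p, p)              # inversion in the unit sphere (R = 1)
    assert abs(q[2] - (-1.0)) < 1e-9  # image lies in the tangent plane z = -1
```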
6-sphere coordinates
The 6-sphere coordinates are a coordinate system for three-dimensional space obtained by inverting the Cartesian coordinates.
Axiomatics and generalization
One of the first to consider foundations of inversive geometry was Mario Pieri in 1911 and 1912.[8] Edward Kasner wrote his thesis on "Invariant theory of the inversion group".[9]
More recently the mathematical structure of inversive geometry has been interpreted as an incidence structure where the generalized circles are called "blocks": In incidence geometry, any affine plane together with a single point at infinity forms a Möbius plane, also known as an inversive plane. The point at infinity is added to all the lines. These Möbius planes can be described axiomatically and exist in both finite and infinite versions.
A model for the Möbius plane that comes from the Euclidean plane is the Riemann sphere.
Invariant
The cross-ratio between 4 points $x,y,z,w$ is invariant under an inversion. In particular, if O is the centre of an inversion of radius 1 and $r_{1}$ and $r_{2}$ are the distances from O to the ends of a segment of length $d$, then the segment's length becomes $d/(r_{1}r_{2})$ under the inversion. The invariant is:
$I={\frac {|x-y||w-z|}{|x-w||y-z|}}$
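A numerical sketch of both claims (the helper names `inv` and `I` are our own):

```python
import random

def inv(z):
    """Inversion in the unit circle centred at O = 0: z -> 1/conj(z)."""
    return 1 / z.conjugate()

def I(x, y, z, w):
    """The invariant quoted in the text."""
    return abs(x - y) * abs(w - z) / (abs(x - w) * abs(y - z))

random.seed(0)
x, y, z, w = (complex(random.uniform(-3, 3), random.uniform(-3, 3))
              for _ in range(4))

# The invariant is unchanged by the inversion.
assert abs(I(x, y, z, w) - I(inv(x), inv(y), inv(z), inv(w))) < 1e-9

# A segment of length d with endpoints at distances r1, r2 from O
# acquires length d/(r1*r2) under the unit-radius inversion.
assert abs(abs(inv(x) - inv(y)) - abs(x - y) / (abs(x) * abs(y))) < 1e-9
```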
Relation to Erlangen program
According to Coxeter,[10] the transformation by inversion in a circle was invented by L. I. Magnus in 1831. Since then this mapping has become an avenue to higher mathematics. Through some steps of application of the circle inversion map, a student of transformation geometry soon appreciates the significance of Felix Klein's Erlangen program, an outgrowth of certain models of hyperbolic geometry.
Dilation
The combination of two inversions in concentric circles results in a similarity, homothetic transformation, or dilation characterized by the ratio of the circle radii.
$x\mapsto y=R^{2}{\frac {x}{|x|^{2}}},\qquad y\mapsto T^{2}{\frac {y}{|y|^{2}}}=\left({\frac {T}{R}}\right)^{2}x.$
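The composition can be sketched numerically (the `invert` helper and sample values are our own illustration):

```python
import numpy as np

def invert(x, R):
    """Inversion in the circle of radius R centred at the origin."""
    return R**2 * x / np.dot(x, x)

R, T = 2.0, 3.0
x = np.array([0.7, -1.2])
y = invert(x, R)                           # inversion in the first circle
z = invert(y, T)                           # inversion in the second circle
assert np.allclose(z, (T / R) ** 2 * x)    # net effect: dilation by (T/R)^2
```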
Reciprocation
When a point in the plane is interpreted as a complex number $z=x+iy,$ with complex conjugate ${\bar {z}}=x-iy,$ then the reciprocal of z is
${\frac {1}{z}}={\frac {\bar {z}}{|z|^{2}}}.$
Consequently, the algebraic form of the inversion in a unit circle is given by $z\mapsto w$ where:
$w={\frac {1}{\bar {z}}}={\overline {\left({\frac {1}{z}}\right)}}$.
Reciprocation is key in transformation theory as a generator of the Möbius group. The other generators are translation and rotation, both familiar through physical manipulations in the ambient 3-space. Introduction of reciprocation (dependent upon circle inversion) is what produces the peculiar nature of Möbius geometry, which is sometimes identified with inversive geometry (of the Euclidean plane). However, inversive geometry is the larger study since it includes the raw inversion in a circle (not yet made, with conjugation, into reciprocation). Inversive geometry also includes the conjugation mapping. Neither conjugation nor inversion-in-a-circle is in the Möbius group since they are non-conformal (see below). Möbius group elements are analytic functions of the whole plane and so are necessarily conformal.
Transforming circles into circles
Consider, in the complex plane, the circle of radius $r$ around the point $a$
$(z-a)(z-a)^{*}=r^{2}$
where without loss of generality, $a\in \mathbb {R} .$ Using the definition of inversion
$w={\frac {1}{z^{*}}}$
it is straightforward to show that $w$ obeys the equation
$ww^{*}-{\frac {a}{(a^{2}-r^{2})}}(w+w^{*})+{\frac {a^{2}}{(a^{2}-r^{2})^{2}}}={\frac {r^{2}}{(a^{2}-r^{2})^{2}}}$
and hence that $w$ describes the circle of center $ {\frac {a}{a^{2}-r^{2}}}$ and radius $ {\frac {r}{|a^{2}-r^{2}|}}.$
When $a\to r,$ the circle transforms into the line parallel to the imaginary axis $w+w^{*}={\tfrac {1}{a}}.$
For $a\not \in \mathbb {R} $ and $aa^{*}\neq r^{2}$ the result for $w$ is
${\begin{aligned}&ww^{*}-{\frac {a^{*}w+aw^{*}}{(a^{*}a-r^{2})}}+{\frac {aa^{*}}{(aa^{*}-r^{2})^{2}}}={\frac {r^{2}}{(aa^{*}-r^{2})^{2}}}\\[4pt]\Longleftrightarrow {}&\left(w-{\frac {a}{aa^{*}-r^{2}}}\right)\left(w^{*}-{\frac {a^{*}}{a^{*}a-r^{2}}}\right)=\left({\frac {r}{\left|aa^{*}-r^{2}\right|}}\right)^{2}\end{aligned}}$
showing that $w$ describes the circle of center $ {\frac {a}{aa^{*}-r^{2}}}$ and radius $ {\frac {r}{\left|a^{*}a-r^{2}\right|}}$.
When $a^{*}a\to r^{2},$ the equation for $w$ becomes
${\begin{aligned}&a^{*}w+aw^{*}=1\Longleftrightarrow 2\operatorname {Re} \{a^{*}w\}=1\Longleftrightarrow \operatorname {Re} \{a\}\operatorname {Re} \{w\}+\operatorname {Im} \{a\}\operatorname {Im} \{w\}={\frac {1}{2}}\\[4pt]\Longleftrightarrow {}&\operatorname {Im} \{w\}=-{\frac {\operatorname {Re} \{a\}}{\operatorname {Im} \{a\}}}\cdot \operatorname {Re} \{w\}+{\frac {1}{2\cdot \operatorname {Im} \{a\}}}.\end{aligned}}$
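The centre and radius of the image circle can be sketched numerically (the sample values are our own; the check assumes the inversion w = 1/z* as defined above):

```python
import cmath, math

a, r = 1 + 2j, 1.5                       # a not real, a*conj(a) != r^2
d = (a * a.conjugate()).real - r**2
c_img, r_img = a / d, r / abs(d)         # predicted image center and radius

for k in range(12):
    z = a + r * cmath.exp(2j * math.pi * k / 12)   # point on |z - a| = r
    w = 1 / z.conjugate()                          # the inversion w = 1/z*
    assert abs(abs(w - c_img) - r_img) < 1e-9
```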
Higher geometry
As mentioned above, zero, the origin, requires special consideration in the circle inversion mapping. The approach is to adjoin a point at infinity designated ∞ or 1/0 . In the complex number approach, where reciprocation is the apparent operation, this procedure leads to the complex projective line, often called the Riemann sphere. It was subspaces and subgroups of this space and group of mappings that were applied to produce early models of hyperbolic geometry by Beltrami, Cayley, and Klein. Thus inversive geometry includes the ideas originated by Lobachevsky and Bolyai in their plane geometry. Furthermore, Felix Klein was so overcome by this facility of mappings to identify geometrical phenomena that he delivered a manifesto, the Erlangen program, in 1872. Since then many mathematicians reserve the term geometry for a space together with a group of mappings of that space. The significant properties of figures in the geometry are those that are invariant under this group.
For example, Smogorzhevsky[11] develops several theorems of inversive geometry before beginning Lobachevskian geometry.
In higher dimensions
In a real n-dimensional Euclidean space, an inversion in the sphere of radius r centered at the point $O=(o_{1},...,o_{n})$ is a map of an arbitrary point $P=(p_{1},...,p_{n})$ found by inverting the length of the displacement vector $P-O$ and multiplying by $r^{2}$:
${\begin{aligned}P&\mapsto P'=O+{\frac {r^{2}(P-O)}{\|P-O\|^{2}}},\\[5mu]p_{j}&\mapsto p_{j}'=o_{j}+{\frac {r^{2}(p_{j}-o_{j})}{\sum _{k}(p_{k}-o_{k})^{2}}}.\end{aligned}}$
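A minimal numerical sketch of this map (our own illustration), checking that inversion is an involution and satisfies the defining relation:

```python
import numpy as np

def invert(P, O, r):
    """Inversion in the sphere of radius r centred at O (any dimension)."""
    v = P - O
    return O + r**2 * v / np.dot(v, v)

O = np.array([1.0, -2.0, 0.5, 3.0])      # n = 4, just to show it is generic
P = np.array([2.0, 0.0, 1.0, -1.0])
r = 1.7
Q = invert(P, O, r)
assert np.allclose(invert(Q, O, r), P)   # inversion is an involution
# The defining relation |P - O| * |P' - O| = r^2:
assert abs(np.linalg.norm(P - O) * np.linalg.norm(Q - O) - r**2) < 1e-9
```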
The transformation by inversion in hyperplanes or hyperspheres in En can be used to generate dilations, translations, or rotations. Indeed, two concentric hyperspheres, used to produce successive inversions, result in a dilation or homothety about the hyperspheres' center.
When two parallel hyperplanes are used to produce successive reflections, the result is a translation. When two hyperplanes intersect in an (n–2)-flat, successive reflections produce a rotation where every point of the (n–2)-flat is a fixed point of each reflection and thus of the composition.
Any combination of reflections, translations, and rotations is called an isometry. Any combination of reflections, dilations, translations, and rotations is a similarity.
All of these are conformal maps, and in fact, where the space has three or more dimensions, the mappings generated by inversion are the only conformal mappings. Liouville's theorem is a classical theorem of conformal geometry.
The addition of a point at infinity to the space obviates the distinction between hyperplane and hypersphere; higher dimensional inversive geometry is frequently studied then in the presumed context of an n-sphere as the base space. The transformations of inversive geometry are often referred to as Möbius transformations. Inversive geometry has been applied to the study of colorings, or partitionings, of an n-sphere.[12]
Anticonformal mapping property
The circle inversion map is anticonformal, which means that at every point it preserves angles and reverses orientation (a map is called conformal if it preserves oriented angles). Algebraically, a map is anticonformal if at every point the Jacobian is a scalar times an orthogonal matrix with negative determinant: in two dimensions the Jacobian must be a scalar times a reflection at every point. This means that if J is the Jacobian, then $J\cdot J^{T}=kI$ and $\det(J)=-{\sqrt {k}}.$ Computing the Jacobian in the case $z_{i}=x_{i}/\|x\|^{2}$, where $\|x\|^{2}=x_{1}^{2}+\cdots +x_{n}^{2}$, gives $JJ^{T}=kI$ with $k=1/\|x\|^{4}$, and additionally $\det(J)$ is negative; hence the inversive map is anticonformal.
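A finite-difference sketch (our own illustration) of the anticonformality claim for the unit-circle inversion:

```python
import numpy as np

def f(x):
    """Unit-circle inversion in the plane: x -> x/|x|^2."""
    return x / np.dot(x, x)

x = np.array([0.8, -0.5])
h = 1e-6
J = np.empty((2, 2))
for j in range(2):                       # forward-difference Jacobian
    e = np.zeros(2); e[j] = h
    J[:, j] = (f(x + e) - f(x)) / h

k = 1 / np.dot(x, x) ** 2                # predicted scalar k = 1/||x||^4
assert np.allclose(J @ J.T, k * np.eye(2), atol=1e-4)   # J J^T = k I
assert np.linalg.det(J) < 0              # orientation-reversing: anticonformal
```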
In the complex plane, the most obvious circle inversion map (i.e., using the unit circle centered at the origin) is the complex conjugate of the complex inverse map taking z to 1/z. The complex analytic inverse map is conformal and its conjugate, circle inversion, is anticonformal. In this case a homography is conformal while an anti-homography is anticonformal.
Inversive geometry and hyperbolic geometry
The (n − 1)-sphere with equation
$x_{1}^{2}+\cdots +x_{n}^{2}+2a_{1}x_{1}+\cdots +2a_{n}x_{n}+c=0$
will have a positive radius if $a_{1}^{2}+\cdots +a_{n}^{2}$ is greater than c, and on inversion gives the sphere
$x_{1}^{2}+\cdots +x_{n}^{2}+2{\frac {a_{1}}{c}}x_{1}+\cdots +2{\frac {a_{n}}{c}}x_{n}+{\frac {1}{c}}=0.$
Hence, it will be invariant under inversion if and only if c = 1. But this is the condition of being orthogonal to the unit sphere. Hence we are led to consider the (n − 1)-spheres with equation
$x_{1}^{2}+\cdots +x_{n}^{2}+2a_{1}x_{1}+\cdots +2a_{n}x_{n}+1=0,$
which are invariant under inversion, orthogonal to the unit sphere, and have centers outside of the sphere. These together with the subspace hyperplanes separating hemispheres are the hypersurfaces of the Poincaré disc model of hyperbolic geometry.
Since inversion in the unit sphere leaves the spheres orthogonal to it invariant, the inversion maps the points inside the unit sphere to the outside and vice versa. This is therefore true in general of orthogonal spheres, and in particular inversion in one of the spheres orthogonal to the unit sphere maps the unit sphere to itself. It also maps the interior of the unit sphere to itself, with points outside the orthogonal sphere mapping inside, and vice versa; this defines the reflections of the Poincaré disc model if we also include with them the reflections through the diameters separating hemispheres of the unit sphere. These reflections generate the group of isometries of the model, which tells us that the isometries are conformal. Hence, the angle between two curves in the model is the same as the angle between two curves in the hyperbolic space.
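The invariance of the spheres orthogonal to the unit sphere can be sketched numerically; the two-dimensional example below is our own illustration:

```python
import numpy as np

a = np.array([1.5, 0.8])                  # |a|^2 > 1 gives a positive radius
center, rad = -a, np.sqrt(np.dot(a, a) - 1)

def on_circle(x):
    """Does x satisfy x.x + 2 a.x + 1 = 0?"""
    return abs(np.dot(x, x) + 2 * np.dot(a, x) + 1) < 1e-9

rng = np.random.default_rng(2)
for _ in range(5):
    t = rng.uniform(0, 2 * np.pi)
    x = center + rad * np.array([np.cos(t), np.sin(t)])
    assert on_circle(x)
    y = x / np.dot(x, x)                  # inversion in the unit circle
    assert on_circle(y)                   # the circle is mapped to itself
```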
See also
• Circle of antisimilitude
• Duality (projective geometry)
• Inverse curve
• Limiting point (geometry)
• Möbius transformation
• Projective geometry
• Soddy's hexlet
• Inversion of curves and surfaces (German)
Notes
1. Curves and Their Properties by Robert C. Yates, National Council of Teachers of Mathematics, Inc.,Washington, D.C., p. 127: "Geometrical inversion seems to be due to Jakob Steiner who indicated a knowledge of the subject in 1824. He was closely followed by Adolphe Quetelet (1825) who gave some examples. Apparently independently discovered by Giusto Bellavitis in 1836, by Stubbs and Ingram in 1842-3, and by Lord Kelvin in 1845.)"
2. Altshiller-Court (1952, p. 230)
3. Kay (1969, p. 264)
4. Dutta, Surajit (2014) A simple property of isosceles triangles with applications, Forum Geometricorum 14: 237–240
5. Kay (1969, p. 265)
6. Kay (1969, p. 265)
7. Kay (1969, p. 269)
8. M. Pieri (1911,12) "Nuovi principii di geometria delle inversioni", Giornale di Matematiche di Battaglini 49:49–96 & 50:106–140
9. Kasner, E. (1900). "The Invariant Theory of the Inversion Group: Geometry Upon a Quadric Surface". Transactions of the American Mathematical Society. 1 (4): 430–498. doi:10.1090/S0002-9947-1900-1500550-1. hdl:2027/miun.abv0510.0001.001. JSTOR 1986367.
10. Coxeter 1969, pp. 77–95
11. A.S. Smogorzhevsky (1982) Lobachevskian Geometry, Mir Publishers, Moscow
12. Joel C. Gibbons & Yushen Luo (2013) Colorings of the n-sphere and inversive geometry
References
• Altshiller-Court, Nathan (1952), College Geometry: An Introduction to the Modern Geometry of the Triangle and the Circle (2nd ed.), New York: Barnes & Noble, LCCN 52-13504
• Blair, David E. (2000), Inversion Theory and Conformal Mapping, American Mathematical Society, ISBN 0-8218-2636-0
• Brannan, David A.; Esplen, Matthew F.; Gray, Jeremy J. (1998), "Chapter 5: Inversive Geometry", Geometry, Cambridge: Cambridge University Press, pp. 199–260, ISBN 0-521-59787-0
• Coxeter, H.S.M. (1969) [1961], Introduction to Geometry (2nd ed.), John Wiley & Sons, ISBN 0-471-18283-4
• Hartshorne, Robin (2000), "Chapter 7: Non-Euclidean Geometry, Section 37: Circular Inversion", Geometry: Euclid and Beyond, Springer, ISBN 0-387-98650-2
• Kay, David C. (1969), College Geometry, New York: Holt, Rinehart and Winston, LCCN 69-12075
External links
• Inversion: Reflection in a Circle at cut-the-knot
• Wilson Stother's inversive geometry page
• IMO Compendium Training Materials practice problems on how to use inversion for math olympiad problems
• Weisstein, Eric W. "Inversion". MathWorld.
• Visual Dictionary of Special Plane Curves Xah Lee
Sphere bundle
In the mathematical field of topology, a sphere bundle is a fiber bundle in which the fibers are spheres $S^{n}$ of some dimension n.[1] Similarly, in a disk bundle, the fibers are disks $D^{n}$. From a topological perspective, there is no difference between sphere bundles and disk bundles: this is a consequence of the Alexander trick, which implies $\operatorname {BTop} (D^{n+1})\simeq \operatorname {BTop} (S^{n}).$
An example of a sphere bundle is the torus, which is orientable and has $S^{1}$ fibers over an $S^{1}$ base space. The non-orientable Klein bottle also has $S^{1}$ fibers over an $S^{1}$ base space, but has a twist that produces a reversal of orientation as one follows the loop around the base space.[1]
A circle bundle is a special case of a sphere bundle.
Orientation of a sphere bundle
A sphere bundle that is a product space is orientable, as is any sphere bundle over a simply connected space.[1]
If E is a real vector bundle on a space X and E is given an orientation, then the sphere bundle formed from E, Sph(E), inherits the orientation of E.
Spherical fibration
A spherical fibration, a generalization of the concept of a sphere bundle, is a fibration whose fibers are homotopy equivalent to spheres. For example, the fibration
$\operatorname {BTop} (\mathbb {R} ^{n})\to \operatorname {BTop} (S^{n})$
has fibers homotopy equivalent to Sn.[2]
See also
• Smale conjecture
Notes
1. Hatcher, Allen (2002). Algebraic Topology. Cambridge University Press. p. 442. ISBN 9780521795401. Retrieved 28 February 2018.
2. Since, writing $X^{+}$ for the one-point compactification of $X$, the homotopy fiber of $\operatorname {BTop} (X)\to \operatorname {BTop} (X^{+})$ is $\operatorname {Top} (X^{+})/\operatorname {Top} (X)\simeq X^{+}$.
References
• Dennis Sullivan, Geometric Topology, the 1970 MIT notes
Further reading
• The Adams conjecture I
• Johannes Ebert, The Adams Conjecture, after Edgar Brown
• Strunk, Florian. On motivic spherical bundles
External links
• Is it true that all sphere bundles are boundaries of disk bundles?
• https://ncatlab.org/nlab/show/spherical+fibration
Sphere eversion
In differential topology, sphere eversion is the process of turning a sphere inside out in a three-dimensional space (the word eversion means "turning inside out"). Remarkably, it is possible to smoothly and continuously turn a sphere inside out in this way (allowing self-intersections of the sphere's surface) without cutting or tearing it or creating any crease. This is surprising, both to non-mathematicians and to those who understand regular homotopy, and can be regarded as a veridical paradox: that is, something that, while being true, at first glance seems false.
Not to be confused with Sphere inversion.
More precisely, let
$f\colon S^{2}\to \mathbb {R} ^{3}$
be the standard embedding; then there is a regular homotopy of immersions
$f_{t}\colon S^{2}\to \mathbb {R} ^{3}$
such that $f_{0}=f$ and $f_{1}=-f$.
History
An existence proof for crease-free sphere eversion was first created by Stephen Smale (1957). It is difficult to visualize a particular example of such a turning, although some digital animations have been produced that make it somewhat easier. The first example was exhibited through the efforts of several mathematicians, including Arnold S. Shapiro and Bernard Morin, who was blind. On the other hand, it is much easier to prove that such a "turning" exists, and that is what Smale did.
Smale's graduate adviser Raoul Bott at first told Smale that the result was obviously wrong (Levy 1995). His reasoning was that the degree of the Gauss map must be preserved in such "turning"—in particular it follows that there is no such turning of $S^{1}$ in $\mathbb {R} ^{2}$. But the degrees of the Gauss map for the embeddings $f$ and $-f$ in $\mathbb {R} ^{3}$ are both equal to 1, and do not have opposite sign as one might incorrectly guess. The degree of the Gauss map of all immersions of $S^{2}$ in $\mathbb {R} ^{3}$ is 1, so there is no obstacle. The term "veridical paradox" applies perhaps more appropriately at this level: until Smale's work, there was no documented attempt to argue for or against the eversion of $S^{2}$, and later efforts are in hindsight, so there never was a historical paradox associated with sphere eversion, only an appreciation of the subtleties in visualizing it by those confronting the idea for the first time.
See h-principle for further generalizations.
Proof
Smale's original proof was indirect: he identified (regular homotopy) classes of immersions of spheres with a homotopy group of the Stiefel manifold. Since the homotopy group that corresponds to immersions of $S^{2}$ in $\mathbb {R} ^{3}$ vanishes, the standard embedding and the inside-out one must be regular homotopic. In principle the proof can be unwound to produce an explicit regular homotopy, but this is not easy to do.
There are several ways of producing explicit examples and mathematical visualization:
• Half-way models: these consist of very special homotopies. This is the original method, first done by Shapiro and Phillips via Boy's surface, later refined by many others. The original half-way model homotopies were constructed by hand, and worked topologically but weren't minimal. The movie created by Nelson Max, over a seven-year period, and based on Charles Pugh's chicken-wire models (subsequently stolen from the Mathematics Department at Berkeley), was a computer-graphics 'tour de force' for its time, and set the benchmark for computer animation for many years. A more recent and definitive graphics refinement (1980s) is minimax eversions, a variational method consisting of special homotopies (shortest paths with respect to Willmore energy). In turn, understanding the behavior of Willmore energy requires understanding the solutions of fourth-order partial differential equations, and so the visually beautiful and evocative images belie some very deep mathematics beyond Smale's original abstract proof.
• Thurston's corrugations: this is a topological method and generic; it takes a homotopy and perturbs it so that it becomes a regular homotopy. This is illustrated in the computer-graphics animation Outside In developed at the Geometry Center under the direction of Silvio Levy, Delle Maxwell and Tamara Munzner.[2]
• Combining the above methods, the complete sphere eversion can be described by a set of closed equations giving minimal topological complexity.[1]
Variations
• A six-dimensional sphere $S^{6}$ in seven-dimensional Euclidean space $\mathbb {R} ^{7}$ admits eversion. Together with the evident case of the 0-dimensional sphere $S^{0}$ (two distinct points) in the real line $\mathbb {R} $ and the case of the two-dimensional sphere in $\mathbb {R} ^{3}$ described above, these are the only three cases in which the sphere $S^{n}$ embedded in Euclidean space $\mathbb {R} ^{n+1}$ admits eversion.
Gallery of eversion steps
[Gallery of figures: surface plots of ruled models of the eversion (halfway model with quadruple point; closed halfway model; death of the triple points; end of the central intersection loop; last stage) and a nylon-string open model (halfway model, triple-point death, end of the intersection loop), each shown from top, diagonal, and side views.]
See also
• Whitney–Graustein theorem
References
1. Bednorz, Adam; Bednorz, Witold (2019). "Analytic sphere eversion using ruled surfaces". Differential Geometry and Its Applications. 64: 59–79. arXiv:1711.10466. doi:10.1016/j.difgeo.2019.02.004. S2CID 119687494.
2. "Outside In: Introduction". The Geometry Center. Retrieved 21 June 2017.
Bibliography
• Iain R. Aitchison (2010) The `Holiverse': holistic eversion of the 2-sphere in R^3, preprint. arXiv:1008.0916.
• John B. Etnyre (2004) Review of "h-principles and flexibility in geometry", MR1982875.
• Francis, George K. (2007), A topological picturebook, Berlin, New York: Springer-Verlag, ISBN 978-0-387-34542-0, MR 2265679
• George K. Francis & Bernard Morin (1980) "Arnold Shapiro's Eversion of the Sphere", Mathematical Intelligencer 2(4):200–3.
• Levy, Silvio (1995), "A brief history of sphere eversions", Making waves, Wellesley, MA: A K Peters Ltd., ISBN 978-1-56881-049-2, MR 1357900
• Max, Nelson (1977) "Turning a Sphere Inside Out", https://www.crcpress.com/Turning-a-Sphere-Inside-Out-DVD/Max/9781466553941
• Anthony Phillips (May 1966) "Turning a surface inside out", Scientific American, pp. 112–120.
• Smale, Stephen (1958), "A classification of immersions of the two-sphere", Transactions of the American Mathematical Society, 90 (2): 281–290, doi:10.2307/1993205, ISSN 0002-9947, JSTOR 1993205, MR 0104227
External links
• A History of Sphere Eversions
• "Turning a Sphere Inside Out"
• Software for visualizing sphere eversion
• Mathematics visualization: topology. The holiverse sphere eversion (Povray animation)
• The deNeve/Hills sphere eversion: video and interactive model
• Patrick Massot's project to formalise the proof in the Lean Theorem Prover
• An interactive exploration of Adam Bednorz and Witold Bednorz method of sphere eversion
• Outside In: A video exploration of sphere eversion, created by The Geometry Center of The University of Minnesota.
Kissing number
In geometry, the kissing number of a mathematical space is defined as the greatest number of non-overlapping unit spheres that can be arranged in that space such that they each touch a common unit sphere. For a given sphere packing (arrangement of spheres) in a given space, a kissing number can also be defined for each individual sphere as the number of spheres it touches. For a lattice packing the kissing number is the same for every sphere, but for an arbitrary sphere packing the kissing number may vary from one sphere to another.
Other names for kissing number that have been used are Newton number (after the originator of the problem), and contact number.
In general, the kissing number problem seeks the maximum possible kissing number for n-dimensional spheres in (n + 1)-dimensional Euclidean space. Ordinary spheres correspond to two-dimensional closed surfaces in three-dimensional space.
Finding the kissing number when centers of spheres are confined to a line (the one-dimensional case) or a plane (two-dimensional case) is trivial. Proving a solution to the three-dimensional case, despite being easy to conceptualise and model in the physical world, eluded mathematicians until the mid-20th century.[1][2] Solutions in higher dimensions are considerably more challenging, and only a handful of cases have been solved exactly. For others investigations have determined upper and lower bounds, but not exact solutions.[3]
Known greatest kissing numbers
One dimension
In one dimension,[4] the kissing number is 2:
Two dimensions
In two dimensions, the kissing number is 6:
Proof: Consider a circle with center C that is touched by circles with centers C1, C2, .... Consider the rays C Ci. These rays all emanate from the same center C, so the sum of angles between adjacent rays is 360°.
Assume by contradiction that there are more than six touching circles. Then at least two adjacent rays, say C C1 and C C2, are separated by an angle of less than 60°. The segments C Ci have the same length – 2r – for all i. Therefore, the triangle C C1 C2 is isosceles, and its third side – C1 C2 – has a side length of less than 2r. Therefore, the circles 1 and 2 intersect – a contradiction.[5]
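The hexagonal configuration achieving the bound can be sketched numerically (our own illustration, with r = 1):

```python
import math

# Six unit circles with centers at distance 2 from the origin, 60 deg apart.
centers = [(2 * math.cos(k * math.pi / 3), 2 * math.sin(k * math.pi / 3))
           for k in range(6)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

for c in centers:                 # each circle touches the central unit circle
    assert abs(dist(c, (0.0, 0.0)) - 2) < 1e-9
for i in range(6):                # adjacent circles touch but never overlap
    for j in range(i + 1, 6):
        assert dist(centers[i], centers[j]) >= 2 - 1e-9
```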
Three dimensions
In three dimensions, the kissing number is 12, but the correct value was much more difficult to establish than in dimensions one and two. It is easy to arrange 12 spheres so that each touches a central sphere, with a lot of space left over, and it is not obvious that there is no way to pack in a 13th sphere. (In fact, there is so much extra space that any two of the 12 outer spheres can exchange places through a continuous movement without any of the outer spheres losing contact with the center one.) This was the subject of a famous disagreement between mathematicians Isaac Newton and David Gregory. Newton correctly thought that the limit was 12; Gregory thought that a 13th could fit. Some incomplete proofs that Newton was correct were offered in the nineteenth century, most notably one by Reinhold Hoppe, but the first correct proof (according to Brass, Moser, and Pach) did not appear until 1953.[1][2][6]
The twelve neighbors of the central sphere correspond to the maximum bulk coordination number of an atom in a crystal lattice in which all atoms have the same size (as in a chemical element). A coordination number of 12 is found in a cubic close-packed or a hexagonal close-packed structure.
Larger dimensions
In four dimensions, it was known for some time that the answer was either 24 or 25. It is straightforward to produce a packing of 24 spheres around a central sphere (one can place the spheres at the vertices of a suitably scaled 24-cell centered at the origin). As in the three-dimensional case, there is a lot of space left over — even more, in fact, than for n = 3 — so the situation was even less clear. In 2003, Oleg Musin proved the kissing number for n = 4 to be 24.[7][8]
The kissing number in n dimensions is unknown for n > 4, except for n = 8 (where the kissing number is 240), and n = 24 (where it is 196,560).[9][10] The results in these dimensions stem from the existence of highly symmetrical lattices: the E8 lattice and the Leech lattice.
If arrangements are restricted to lattice arrangements, in which the centres of the spheres all lie on points in a lattice, then this restricted kissing number is known for n = 1 to 9 and n = 24 dimensions.[11] For 5, 6, and 7 dimensions the arrangement with the highest known kissing number found so far is the optimal lattice arrangement, but the existence of a non-lattice arrangement with a higher kissing number has not been excluded.
Some known bounds
The following list gives some known bounds on the kissing number in various dimensions.[12] Dimensions in which the kissing number is known exactly are marked "(exact)".
• Dimension 1: 2 (exact)
• Dimension 2: 6 (exact)
• Dimension 3: 12 (exact)
• Dimension 4: 24 (exact)[7]
• Dimension 5: between 40 and 44
• Dimension 6: between 72 and 78
• Dimension 7: between 126 and 134
• Dimension 8: 240 (exact)
• Dimension 9: between 306 and 363
• Dimension 10: between 500 and 553
• Dimension 11: between 582 and 869
• Dimension 12: between 840 and 1,356
• Dimension 13: between 1,154[13] and 2,066
• Dimension 14: between 1,606[13] and 3,177
• Dimension 15: between 2,564 and 4,858
• Dimension 16: between 4,320 and 7,332
• Dimension 17: between 5,346 and 11,014
• Dimension 18: between 7,398 and 16,469
• Dimension 19: between 10,668 and 24,575
• Dimension 20: between 17,400 and 36,402
• Dimension 21: between 27,720 and 53,878
• Dimension 22: between 49,896 and 81,376
• Dimension 23: between 93,150 and 123,328
• Dimension 24: 196,560 (exact)
Generalization
The kissing number problem can be generalized to the problem of finding the maximum number of non-overlapping congruent copies of any convex body that touch a given copy of the body. There are different versions of the problem depending on whether the copies are only required to be congruent to the original body, translates of the original body, or translated by a lattice. For the regular tetrahedron, for example, it is known that both the lattice kissing number and the translative kissing number are equal to 18, whereas the congruent kissing number is at least 56.[14]
Algorithms
There are several approximation algorithms on intersection graphs where the approximation ratio depends on the kissing number.[15] For example, there is a polynomial-time 10-approximation algorithm to find a maximum non-intersecting subset of a set of rotated unit squares.
Mathematical statement
The kissing number problem can be stated as the existence of a solution to a set of inequalities. Let $x_{1},\ldots ,x_{N}$ be D-dimensional position vectors of the centres of the spheres. The condition that this set of spheres can lie around the centre sphere without overlapping is:[16]
$\exists x\ \left\{\forall _{n}\{x_{n}^{\textsf {T}}x_{n}=1\}\land \forall _{m,n:m\neq n}\{(x_{n}-x_{m})^{\textsf {T}}(x_{n}-x_{m})\geq 1\}\right\}$
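The inequalities can be instantiated for the known two-dimensional solution (our own illustration):

```python
import math

# D = 2, N = 6: six unit vectors 60 degrees apart satisfy the system.
xs = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]

for x in xs:                                   # x_n^T x_n = 1
    assert abs(x[0]**2 + x[1]**2 - 1) < 1e-9
for i in range(6):                             # (x_n - x_m)^T (x_n - x_m) >= 1
    for j in range(i + 1, 6):
        d2 = (xs[i][0] - xs[j][0])**2 + (xs[i][1] - xs[j][1])**2
        assert d2 >= 1 - 1e-9
```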
Thus the problem for each dimension can be expressed in the existential theory of the reals. However, general methods of solving problems in this form take at least exponential time, which is why this problem has only been solved up to four dimensions. By adding additional variables $y_{nm}$, this can be converted to a single quartic equation in N(N − 1)/2 + DN variables:[17]
$\exists xy\ \left\{\sum _{n}\left(x_{n}^{\textsf {T}}x_{n}-1\right)^{2}+\sum _{m,n:m<n}{\Big (}(x_{n}-x_{m})^{\textsf {T}}(x_{n}-x_{m})-1-(y_{nm})^{2}{\Big )}^{2}=0\right\}$
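The number of variables in this quartic is N(N − 1)/2 entries of y plus DN coordinates of x; a quick sketch evaluating the count for the two cases discussed in the text:

```python
def quartic_variables(D, N):
    """Variables in the single quartic: N(N-1)/2 entries of y plus D*N coordinates of x."""
    return N * (N - 1) // 2 + D * N

print(quartic_variables(5, 40 + 1))        # 1025
print(quartic_variables(24, 196560 + 1))   # 19322732544
```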
Therefore, to solve the case in D = 5 dimensions and N = 40 + 1 vectors would be equivalent to determining the existence of real solutions to a quartic polynomial in 1025 variables. For the D = 24 dimensions and N = 196560 + 1, the quartic would have 19,322,732,544 variables. An alternative statement in terms of distance geometry is given by the distances squared $R_{mn}$ between the mth and nth sphere:
$\exists R\ \{\forall _{n}\{R_{0n}=1\}\land \forall _{m,n:m<n}\{R_{mn}\geq 1\}\}$
This must be supplemented with the condition that the Cayley–Menger determinant is zero for any set of points forming a (D + 1)-simplex in D dimensions, since that volume must be zero. Setting $R_{mn}=1+{y_{mn}}^{2}$ gives a set of simultaneous polynomial equations in just y which must be solved for real values only. The two methods, being entirely equivalent, have different uses. For example, in the second case one can randomly alter the values of the y by small amounts to try to minimise the polynomial in terms of the y.
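The Cayley–Menger condition is easy to test directly. The sketch below (illustrative code; the function names are ours, not from the cited sources) computes the bordered determinant with exact rational arithmetic and confirms that it vanishes for five points in three dimensions, i.e. for any degenerate 4-simplex:

```python
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row (fine for the small matrices used here).
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j, entry in enumerate(M[0]):
        if entry:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * entry * det(minor)
    return total

def cayley_menger_det(points):
    """Bordered Cayley-Menger matrix: first row/column (0, 1, ..., 1),
    then entry (i+1, j+1) is the squared distance |p_i - p_j|^2."""
    n = len(points)
    M = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        M[0][i] = M[i][0] = Fraction(1)
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            M[i + 1][j + 1] = Fraction(sum((a - b) ** 2 for a, b in zip(p, q)))
    return det(M)

# Any five points in 3 dimensions span a degenerate 4-simplex, so the determinant is 0:
degenerate = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
print(cayley_menger_det(degenerate))   # 0

# For four points the determinant equals 288 * volume^2; the unit right
# tetrahedron has volume 1/6, giving 288/36 = 8:
tetra = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(cayley_menger_det(tetra))        # 8
```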
See also
• Equilateral dimension
• Spherical code
• Soddy's hexlet
• Cylinder sphere packing
Notes
1. Conway, John H.; Neil J.A. Sloane (1999). Sphere Packings, Lattices and Groups (3rd ed.). New York: Springer-Verlag. p. 21. ISBN 0-387-98585-9.
2. Brass, Peter; Moser, W. O. J.; Pach, János (2005). Research problems in discrete geometry. Springer. p. 93. ISBN 978-0-387-23815-9.
3. Mittelmann, Hans D.; Vallentin, Frank (2010). "High accuracy semidefinite programming bounds for kissing numbers". Experimental Mathematics. 19 (2): 174–178. arXiv:0902.1105. Bibcode:2009arXiv0902.1105M. doi:10.1080/10586458.2010.10129070. S2CID 218279.
4. Note that in one dimension, "spheres" are just pairs of points separated by the unit distance. (The vertical dimension of one-dimensional illustration is merely evocative.) Unlike in higher dimensions, it is necessary to specify that the interior of the spheres (the unit-length open intervals) do not overlap in order for there to be a finite packing density.
5. See also Lemma 3.1 in Marathe, M. V.; Breu, H.; Hunt, H. B.; Ravi, S. S.; Rosenkrantz, D. J. (1995). "Simple heuristics for unit disk graphs". Networks. 25 (2): 59. arXiv:math/9409226. doi:10.1002/net.3230250205.
6. Zong, Chuanming (2008). "The kissing number, blocking number and covering number of a convex body". In Goodman, Jacob E.; Pach, János; Pollack, Richard (eds.). Surveys on Discrete and Computational Geometry: Twenty Years Later (AMS-IMS-SIAM Joint Summer Research Conference, June 18–22, 2006, Snowbird, Utah). Contemporary Mathematics. Vol. 453. Providence, RI: American Mathematical Society. pp. 529–548. doi:10.1090/conm/453/08812. ISBN 9780821842393. MR 2405694.
7. O. R. Musin (2003). "The problem of the twenty-five spheres". Russ. Math. Surv. 58 (4): 794–795. Bibcode:2003RuMaS..58..794M. doi:10.1070/RM2003v058n04ABEH000651. S2CID 250839515.
8. Pfender, Florian; Ziegler, Günter M. (September 2004). "Kissing numbers, sphere packings, and some unexpected proofs" (PDF). Notices of the American Mathematical Society: 873–883.
9. Levenshtein, Vladimir I. (1979). "О границах для упаковок в n-мерном евклидовом пространстве" [On bounds for packings in n-dimensional Euclidean space]. Doklady Akademii Nauk SSSR (in Russian). 245 (6): 1299–1303.
10. Odlyzko, A. M.; Sloane, N. J. A. (1979). "New bounds on the number of unit spheres that can touch a unit sphere in n dimensions". Journal of Combinatorial Theory. Series A. 26 (2): 210–214. doi:10.1016/0097-3165(79)90074-8.
11. Weisstein, Eric W. "Kissing Number". MathWorld.
12. Machado, Fabrício C.; Oliveira, Fernando M. (2018). "Improving the Semidefinite Programming Bound for the Kissing Number by Exploiting Polynomial Symmetry". Experimental Mathematics. 27 (3): 362–369. arXiv:1609.05167. doi:10.1080/10586458.2017.1286273. S2CID 52903026.
13. В. А. Зиновьев, Т. Эриксон (1999). Новые нижние оценки на контактное число для небольших размерностей. Пробл. Передачи Информ. (in Russian). 35 (4): 3–11. English translation: V. A. Zinov'ev, T. Ericson (1999). "New Lower Bounds for Contact Numbers in Small Dimensions". Problems of Information Transmission. 35 (4): 287–294. MR 1737742.
14. Lagarias, Jeffrey C.; Zong, Chuanming (December 2012). "Mysteries in packing regular tetrahedra" (PDF). Notices of the American Mathematical Society: 1540–1549.
15. Kammer, Frank; Tholey, Torsten (July 2012). "Approximation Algorithms for Intersection Graphs". Algorithmica. 68 (2): 312–336. doi:10.1007/s00453-012-9671-1. S2CID 3065780.
16. Numbers m and n run from 1 to N. $x=(x_{n})_{N}$ is the sequence of the N position vectors. As the condition behind the second universal quantifier ($\forall $) does not change if m and n are exchanged, it is sufficient to let this quantifier extend just over $m,n:m<n$. For simplification the sphere radii are assumed to be 1/2.
17. Concerning the matrix $y=(y_{mn})_{N\times {N}}$, only the entries having m < n are needed. Equivalently, the matrix can be assumed to be antisymmetric. Either way, the matrix has just N(N − 1)/2 free scalar variables. In addition, there are N D-dimensional vectors xn, which correspond to a matrix $x=(x_{nd})_{N\times D}$ of N column vectors.
References
• T. Aste and D. Weaire, The Pursuit of Perfect Packing (Institute of Physics Publishing, London, 2000) ISBN 0-7503-0648-3
• Table of the Highest Kissing Numbers Presently Known maintained by Gabriele Nebe and Neil Sloane (lower bounds)
• Bachoc, Christine; Vallentin, Frank (2008). "New upper bounds for kissing numbers from semidefinite programming". Journal of the American Mathematical Society. 21 (3): 909–924. arXiv:math.MG/0608426. Bibcode:2008JAMS...21..909B. doi:10.1090/S0894-0347-07-00589-9. MR 2393433. S2CID 204096.
External links
• Grime, James. "Kissing Numbers" (video). youtube. Brady Haran. Archived from the original on 2021-12-12. Retrieved 11 October 2018.
Sir Isaac Newton
Publications
• Fluxions (1671)
• De Motu (1684)
• Principia (1687)
• Opticks (1704)
• Queries (1704)
• Arithmetica (1707)
• De Analysi (1711)
Other writings
• Quaestiones (1661–1665)
• "standing on the shoulders of giants" (1675)
• Notes on the Jewish Temple (c. 1680)
• "General Scholium" (1713; "hypotheses non fingo" )
• Ancient Kingdoms Amended (1728)
• Corruptions of Scripture (1754)
Contributions
• Calculus
• fluxion
• Impact depth
• Inertia
• Newton disc
• Newton polygon
• Newton–Okounkov body
• Newton's reflector
• Newtonian telescope
• Newton scale
• Newton's metal
• Spectrum
• Structural coloration
Newtonianism
• Bucket argument
• Newton's inequalities
• Newton's law of cooling
• Newton's law of universal gravitation
• post-Newtonian expansion
• parameterized
• gravitational constant
• Newton–Cartan theory
• Schrödinger–Newton equation
• Newton's laws of motion
• Kepler's laws
• Newtonian dynamics
• Newton's method in optimization
• Apollonius's problem
• truncated Newton method
• Gauss–Newton algorithm
• Newton's rings
• Newton's theorem about ovals
• Newton–Pepys problem
• Newtonian potential
• Newtonian fluid
• Classical mechanics
• Corpuscular theory of light
• Leibniz–Newton calculus controversy
• Newton's notation
• Rotating spheres
• Newton's cannonball
• Newton–Cotes formulas
• Newton's method
• generalized Gauss–Newton method
• Newton fractal
• Newton's identities
• Newton polynomial
• Newton's theorem of revolving orbits
• Newton–Euler equations
• Newton number
• kissing number problem
• Newton's quotient
• Parallelogram of force
• Newton–Puiseux theorem
• Absolute space and time
• Luminiferous aether
• Newtonian series
• table
Personal life
• Woolsthorpe Manor (birthplace)
• Cranbury Park (home)
• Early life
• Later life
• Apple tree
• Religious views
• Occult studies
• Scientific Revolution
• Copernican Revolution
Relations
• Catherine Barton (niece)
• John Conduitt (nephew-in-law)
• Isaac Barrow (professor)
• William Clarke (mentor)
• Benjamin Pulleyn (tutor)
• John Keill (disciple)
• William Stukeley (friend)
• William Jones (friend)
• Abraham de Moivre (friend)
Depictions
• Newton by Blake (monotype)
• Newton by Paolozzi (sculpture)
• Isaac Newton Gargoyle
• Astronomers Monument
Namesake
• Newton (unit)
• Newton's cradle
• Isaac Newton Institute
• Isaac Newton Medal
• Isaac Newton Telescope
• Isaac Newton Group of Telescopes
• XMM-Newton
• Sir Isaac Newton Sixth Form
• Statal Institute of Higher Education Isaac Newton
• Newton International Fellowship
Packing problems
Abstract packing
• Bin
• Set
Circle packing
• In a circle / equilateral triangle / isosceles right triangle / square
• Apollonian gasket
• Circle packing theorem
• Tammes problem (on sphere)
Sphere packing
• Apollonian
• Finite
• In a sphere
• In a cube
• In a cylinder
• Close-packing
• Kissing number
• Sphere-packing (Hamming) bound
Other 2-D packing
• Square packing
Other 3-D packing
• Tetrahedron
• Ellipsoid
Puzzles
• Conway
• Slothouber–Graatsma
Sphere packing
In geometry, a sphere packing is an arrangement of non-overlapping spheres within a containing space. The spheres considered are usually all of identical size, and the space is usually three-dimensional Euclidean space. However, sphere packing problems can be generalised to consider unequal spheres, spaces of other dimensions (where the problem becomes circle packing in two dimensions, or hypersphere packing in higher dimensions) or to non-Euclidean spaces such as hyperbolic space.
A typical sphere packing problem is to find an arrangement in which the spheres fill as much of the space as possible. The proportion of space filled by the spheres is called the packing density of the arrangement. As the local density of a packing in an infinite space can vary depending on the volume over which it is measured, the problem is usually to maximise the average or asymptotic density, measured over a large enough volume.
For equal spheres in three dimensions, the densest packing uses approximately 74% of the volume. A random packing of equal spheres generally has a density around 63.5%.[1]
Classification and terminology
A lattice arrangement (commonly called a regular arrangement) is one in which the centers of the spheres form a highly symmetric pattern that is uniquely defined by only n vectors (in n-dimensional Euclidean space). Lattice arrangements are periodic. Arrangements in which the spheres do not form a lattice (often referred to as irregular) can still be periodic, but may also be aperiodic (properly speaking, non-periodic) or random. Because of their high degree of symmetry, lattice packings are easier to classify than non-lattice ones. Periodic lattices always have well-defined densities.
Regular packing
Dense packing
Main article: Close-packing of equal spheres
In three-dimensional Euclidean space, the densest packing of equal spheres is achieved by a family of structures called close-packed structures. One method for generating such a structure is as follows. Consider a plane with a compact arrangement of spheres on it. Call it A. For any three neighbouring spheres, a fourth sphere can be placed on top in the hollow between the three bottom spheres. If we do this for half of the holes in a second plane above the first, we create a new compact layer. There are two possible choices for doing this, call them B and C. Suppose that we chose B. Then one half of the hollows of B lies above the centers of the balls in A and one half lies above the hollows of A which were not used for B. Thus the balls of a third layer can be placed either directly above the balls of the first one, yielding a layer of type A, or above the holes of the first layer which were not occupied by the second layer, yielding a layer of type C. Combining layers of types A, B, and C produces various close-packed structures.
Two simple arrangements within the close-packed family correspond to regular lattices. One is called cubic close packing (or face-centred cubic, "FCC")—where the layers are alternated in the ABCABC... sequence. The other is called hexagonal close packing ("HCP")—where the layers are alternated in the ABAB... sequence. But many layer stacking sequences are possible (ABAC, ABCBA, ABCBAC, etc.), and still generate a close-packed structure. In all of these arrangements each sphere touches 12 neighboring spheres,[2] and the average density is:
${\frac {\pi }{3{\sqrt {2}}}}\simeq 0.74048$
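This value can be recovered from the FCC unit cell: a cube of side a = 2√2·r contains 4 spheres of radius r (an illustrative sketch, not taken from a source):

```python
import math

r = 1.0
a = 2 * math.sqrt(2) * r                              # FCC cube edge for spheres of radius r
density = 4 * (4 / 3) * math.pi * r ** 3 / a ** 3     # 4 spheres per cubic cell
print(round(density, 5))                              # 0.74048

# Agrees with the closed form pi / (3 * sqrt(2)):
assert abs(density - math.pi / (3 * math.sqrt(2))) < 1e-12
```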
In 1611, Johannes Kepler conjectured that this is the maximum possible density amongst both regular and irregular arrangements—this became known as the Kepler conjecture. Carl Friedrich Gauss proved in 1831 that these packings have the highest density amongst all possible lattice packings.[3] In 1998, Thomas Callister Hales, following the approach suggested by László Fejes Tóth in 1953, announced a proof of the Kepler conjecture. Hales' proof is a proof by exhaustion involving checking of many individual cases using complex computer calculations. Referees said that they were "99% certain" of the correctness of Hales' proof. On 10 August 2014, Hales announced the completion of a formal proof using automated proof checking, removing any doubt.[4]
Other common lattice packings
Some other lattice packings are often found in physical systems. These include the cubic lattice with a density of ${\frac {\pi }{6}}\approx 0.5236$, the hexagonal lattice with a density of ${\frac {\pi }{3{\sqrt {3}}}}\approx 0.6046$, and the tetrahedral lattice with a density of ${\frac {\pi {\sqrt {3}}}{16}}\approx 0.3401$; the loosest possible lattice packing has a density of 0.0555.[5]
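These densities follow from the corresponding unit cells; for the cubic lattice, for instance, one sphere of radius a/2 occupies a cube of side a. A quick numeric check of the closed forms (illustrative code):

```python
import math

# Cubic lattice from its unit cell: one sphere of radius a/2 per cube of side a.
a = 1.0
cubic_from_cell = (4 / 3) * math.pi * (a / 2) ** 3 / a ** 3
assert abs(cubic_from_cell - math.pi / 6) < 1e-12

# Evaluate the three quoted closed forms.
for name, d in [("cubic", math.pi / 6),
                ("hexagonal", math.pi / (3 * math.sqrt(3))),
                ("tetrahedral", math.pi * math.sqrt(3) / 16)]:
    print(f"{name}: {d:.4f}")   # 0.5236, 0.6046, 0.3401
```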
Jammed packings with a low density
Packings where all spheres are constrained by their neighbours to stay in one location are called rigid or jammed. The strictly jammed sphere packing with the lowest density is a diluted ("tunneled") fcc crystal with a density of only 0.49365.[6]
Irregular packing
If we attempt to build a densely packed collection of spheres, we will be tempted to always place the next sphere in a hollow between three packed spheres. If five spheres are assembled in this way, they will be consistent with one of the regularly packed arrangements described above. However, the sixth sphere placed in this way will render the structure inconsistent with any regular arrangement. This results in the possibility of a random close packing of spheres which is stable against compression.[7] Vibration of a random loose packing can result in the arrangement of spherical particles into regular packings, a process known as granular crystallisation. Such processes depend on the geometry of the container holding the spherical grains.[2]
When spheres are randomly added to a container and then compressed, they will generally form what is known as an "irregular" or "jammed" packing configuration when they can be compressed no more. This irregular packing will generally have a density of about 64%. Recent research predicts analytically that it cannot exceed a density limit of 63.4%.[8] This situation is unlike the case of one or two dimensions, where compressing a collection of 1-dimensional or 2-dimensional spheres (that is, line segments or circles) will yield a regular packing.
Hypersphere packing
The sphere packing problem is the three-dimensional version of a class of ball-packing problems in arbitrary dimensions. In two dimensions, the equivalent problem is packing circles on a plane. In one dimension it is packing line segments into a linear universe.[9]
In dimensions higher than three, the densest regular packings of hyperspheres are known up to 8 dimensions.[10] Very little is known about irregular hypersphere packings; it is possible that in some dimensions the densest packing may be irregular. Some support for this conjecture comes from the fact that in certain dimensions (e.g. 10) the densest known irregular packing is denser than the densest known regular packing.[11]
In 2016, Maryna Viazovska announced a proof that the E8 lattice provides the optimal packing (regardless of regularity) in eight-dimensional space,[12] and soon afterwards she and a group of collaborators announced a similar proof that the Leech lattice is optimal in 24 dimensions.[13] This result built on and improved previous methods which showed that these two lattices are very close to optimal.[14] The new proofs involve using the Laplace transform of a carefully chosen modular function to construct a radially symmetric function f such that f and its Fourier transform f̂ both equal one at the origin, and both vanish at all other points of the optimal lattice, with f negative outside the central sphere of the packing and f̂ positive. Then, the Poisson summation formula for f is used to compare the density of the optimal lattice with that of any other packing.[15] Before the proof had been formally refereed and published, mathematician Peter Sarnak called the proof "stunningly simple" and wrote that "You just start reading the paper and you know this is correct."[16]
Another line of research in high dimensions is trying to find asymptotic bounds for the density of the densest packings. As of 2017, it is known that for large n, the densest lattice in dimension n has density between $cn\cdot 2^{-n}$ (for some constant c) and $2^{-0.599n}$.[17] Conjectural bounds lie in between.[18]
Unequal sphere packing
See also: Unequal circle packing
Many problems in the chemical and physical sciences can be related to packing problems where more than one size of sphere is available. Here there is a choice between separating the spheres into regions of close-packed equal spheres, or combining the multiple sizes of spheres into a compound or interstitial packing. When many sizes of spheres (or a distribution) are available, the problem quickly becomes intractable, but some studies of binary hard spheres (two sizes) are available.
When the second sphere is much smaller than the first, it is possible to arrange the large spheres in a close-packed arrangement, and then arrange the small spheres within the octahedral and tetrahedral gaps. The density of this interstitial packing depends sensitively on the radius ratio, but in the limit of extreme size ratios, the smaller spheres can fill the gaps with the same density as the larger spheres filled space.[20] Even if the large spheres are not in a close-packed arrangement, it is always possible to insert some smaller spheres of up to 0.29099 of the radius of the larger sphere.[21]
When the smaller sphere has a radius greater than 0.41421 of the radius of the larger sphere, it is no longer possible to fit into even the octahedral holes of the close-packed structure. Thus, beyond this point, either the host structure must expand to accommodate the interstitials (which compromises the overall density), or rearrange into a more complex crystalline compound structure. Structures are known which exceed the close packing density for radius ratios up to 0.659786.[19][22]
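The threshold 0.41421 quoted above is the octahedral-hole radius ratio √2 − 1 of the close-packed structure; the smaller tetrahedral holes admit spheres only up to the ratio √(3/2) − 1 ≈ 0.2247. A quick sketch of both values (illustrative code):

```python
import math

# Largest sphere (relative to the host radius) fitting each kind of hole
# in a close-packing of equal spheres.
octahedral_ratio = math.sqrt(2) - 1        # octahedral hole
tetrahedral_ratio = math.sqrt(3 / 2) - 1   # tetrahedral hole

print(round(octahedral_ratio, 5))          # 0.41421
print(round(tetrahedral_ratio, 4))         # 0.2247
```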
Upper bounds for the density that can be obtained in such binary packings have also been obtained.[23]
In many chemical situations such as ionic crystals, the stoichiometry is constrained by the charges of the constituent ions. This additional constraint on the packing, together with the need to minimize the Coulomb energy of interacting charges leads to a diversity of optimal packing arrangements.
Hyperbolic space
Although the concept of circles and spheres can be extended to hyperbolic space, finding the densest packing becomes much more difficult. In a hyperbolic space there is no limit to the number of spheres that can surround another sphere (for example, Ford circles can be thought of as an arrangement of identical hyperbolic circles in which each circle is surrounded by an infinite number of other circles). The concept of average density also becomes much more difficult to define accurately. The densest packings in any hyperbolic space are almost always irregular.[24]
Despite this difficulty, K. Böröczky gives a universal upper bound for the density of sphere packings of hyperbolic n-space where n ≥ 2.[25] In three dimensions the Böröczky bound is approximately 85.327613%, and is realized by the horosphere packing of the order-6 tetrahedral honeycomb with Schläfli symbol {3,3,6}.[26] In addition to this configuration at least three other horosphere packings are known to exist in hyperbolic 3-space that realize the density upper bound.[27]
Touching pairs, triplets, and quadruples
The contact graph of an arbitrary finite packing of unit balls is the graph whose vertices correspond to the packing elements and whose two vertices are connected by an edge if the corresponding two packing elements touch each other. The cardinality of the edge set of the contact graph gives the number of touching pairs, the number of 3-cycles in the contact graph gives the number of touching triplets, and the number of tetrahedrons in the contact graph gives the number of touching quadruples (in general, for a contact graph associated with a sphere packing in n dimensions, the cardinality of the set of n-simplices in the contact graph gives the number of touching (n + 1)-tuples in the sphere packing). In the case of 3-dimensional Euclidean space, non-trivial upper bounds on the number of touching pairs, triplets, and quadruples[28] were proved by Karoly Bezdek and Samuel Reid at the University of Calgary.
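Counting touching pairs, triplets, and quadruples amounts to counting edges, 3-cliques, and 4-cliques of the contact graph. A minimal sketch (illustrative code; the function and names are ours, not the cited paper's):

```python
from itertools import combinations
import math

def contact_counts(centers, tol=1e-9):
    """Count touching pairs, triplets and quadruples (edges, 3-cliques and
    4-cliques of the contact graph) for unit-diameter spheres at the given centers."""
    n = len(centers)
    touch = lambda i, j: abs(math.dist(centers[i], centers[j]) - 1) < tol
    edges = {frozenset(p) for p in combinations(range(n), 2) if touch(*p)}
    clique = lambda s: all(frozenset(p) in edges for p in combinations(s, 2))
    triplets = sum(clique(t) for t in combinations(range(n), 3))
    quadruples = sum(clique(q) for q in combinations(range(n), 4))
    return len(edges), triplets, quadruples

# Four unit-diameter spheres centred on a regular tetrahedron with edge length 1:
s = 1 / math.sqrt(2)
tetra = [(s, 0, 0), (0, s, 0), (0, 0, s), (s, s, s)]
print(contact_counts(tetra))   # (6, 4, 1)
```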
The problem of finding the arrangement of n identical spheres that maximizes the number of contact points between the spheres is known as the "sticky-sphere problem". The maximum is known for n ≤ 11, and only conjectural values are known for larger n.[29]
Other spaces
Sphere packing on the corners of a hypercube (with the spheres defined by Hamming distance) corresponds to designing error-correcting codes: if the spheres have radius t, then their centers are codewords of a (2t + 1)-error-correcting code. Lattice packings correspond to linear codes. There are other, subtler relationships between Euclidean sphere packing and error-correcting codes. For example, the binary Golay code is closely related to the 24-dimensional Leech lattice.
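The [7,4] Hamming code illustrates this correspondence exactly: its 16 codewords have minimum distance 3 = 2t + 1 with t = 1, and the radius-1 Hamming spheres tile {0,1}^7 perfectly. A sketch using one standard choice of generator matrix (illustrative code):

```python
from itertools import product, combinations

# Generator matrix of the [7,4] Hamming code (one standard choice).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

# Codewords are all GF(2) linear combinations of the rows of G.
codewords = set()
for msg in product((0, 1), repeat=4):
    cw = tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
    codewords.add(cw)
assert len(codewords) == 16

# Minimum distance 3 = 2t + 1 with t = 1: radius-1 Hamming spheres don't overlap...
dmin = min(sum(a != b for a, b in zip(u, v)) for u, v in combinations(codewords, 2))
assert dmin == 3

# ...and they tile {0,1}^7 exactly: 16 * (1 + 7) = 2^7, i.e. a perfect code.
assert len(codewords) * (1 + 7) == 2 ** 7
```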
For further details on these connections, see the book Sphere Packings, Lattices and Groups by Conway and Sloane.[30]
See also
• Close-packing of equal spheres
• Apollonian sphere packing
• Finite sphere packing
• Hermite constant
• Inscribed sphere
• Kissing number
• Sphere-packing bound
• Random close pack
• Cylinder sphere packing
References
1. Wu, Yugong; Fan, Zhigang; Lu, Yuzhu (1 May 2003). "Bulk and interior packing densities of random close packing of hard spheres". Journal of Materials Science. 38 (9): 2019–2025. doi:10.1023/A:1023597707363. ISSN 1573-4803. S2CID 137583828.
2. Granular crystallisation in vibrated packings Granular Matter (2019), 21(2), 26 HAL Archives Ouvertes
3. Gauß, C. F. (1831). "Besprechung des Buchs von L. A. Seeber: Untersuchungen über die Eigenschaften der positiven ternären quadratischen Formen usw" [Discussion of L. A. Seeber's book: Studies on the characteristics of positive ternary quadratic forms etc]. Göttingsche Gelehrte Anzeigen.
4. "Long-term storage for Google Code Project Hosting". Google Code Archive.
5. "Wolfram Math World, Sphere packing".
6. Torquato, S.; Stillinger, F. H. (2007). "Toward the jamming threshold of sphere packings: Tunneled crystals". Journal of Applied Physics. 102 (9): 093511–093511–8. arXiv:0707.4263. Bibcode:2007JAP...102i3511T. doi:10.1063/1.2802184. S2CID 5704550.
7. Chaikin, Paul (June 2007). "Random thoughts". Physics Today. American Institute of Physics. 60 (6): 8. Bibcode:2007PhT....60f...8C. doi:10.1063/1.2754580. ISSN 0031-9228.
8. Song, C.; Wang, P.; Makse, H. A. (29 May 2008). "A phase diagram for jammed matter". Nature. 453 (7195): 629–632. arXiv:0808.2196. Bibcode:2008Natur.453..629S. doi:10.1038/nature06981. PMID 18509438. S2CID 4420652.
9. Griffith, J.S. (1962). "Packing of equal 0-spheres". Nature. 196 (4856): 764–765. Bibcode:1962Natur.196..764G. doi:10.1038/196764a0. S2CID 4262056.
10. Weisstein, Eric W. "Hypersphere Packing". MathWorld.
11. Sloane, N. J. A. (1998). "The Sphere-Packing Problem". Documenta Mathematica. 3: 387–396. arXiv:math/0207256. Bibcode:2002math......7256S.
12. Viazovska, Maryna (1 January 2017). "The sphere packing problem in dimension 8". Annals of Mathematics. 185 (3): 991–1015. arXiv:1603.04246. doi:10.4007/annals.2017.185.3.7. ISSN 0003-486X. S2CID 119286185.
13. Cohn, Henry; Kumar, Abhinav; Miller, Stephen; Radchenko, Danylo; Viazovska, Maryna (1 January 2017). "The sphere packing problem in dimension 24". Annals of Mathematics. 185 (3): 1017–1033. arXiv:1603.06518. doi:10.4007/annals.2017.185.3.8. ISSN 0003-486X. S2CID 119281758.
14. Cohn, Henry; Kumar, Abhinav (2009), "Optimality and uniqueness of the Leech lattice among lattices", Annals of Mathematics, 170 (3): 1003–1050, arXiv:math.MG/0403263, doi:10.4007/annals.2009.170.1003, ISSN 1939-8980, MR 2600869, S2CID 10696627, Zbl 1213.11144 Cohn, Henry; Kumar, Abhinav (2004), "The densest lattice in twenty-four dimensions", Electronic Research Announcements of the American Mathematical Society, 10 (7): 58–67, arXiv:math.MG/0408174, Bibcode:2004math......8174C, doi:10.1090/S1079-6762-04-00130-1, ISSN 1079-6762, MR 2075897, S2CID 15874595
15. Miller, Stephen D. (4 April 2016), The solution to the sphere packing problem in 24 dimensions via modular forms, Institute for Advanced Study, archived from the original on 21 December 2021. Video of an hour-long talk by one of Viazovska's co-authors explaining the new proofs.
16. Klarreich, Erica (30 March 2016), "Sphere Packing Solved in Higher Dimensions", Quanta Magazine
17. Cohn, Henry (2017), "A conceptual breakthrough in sphere packing" (PDF), Notices of the American Mathematical Society, 64 (2): 102–115, arXiv:1611.01685, doi:10.1090/noti1474, ISSN 0002-9920, MR 3587715, S2CID 16124591
18. Torquato, S.; Stillinger, F. H. (2006), "New conjectural lower bounds on the optimal density of sphere packings", Experimental Mathematics, 15 (3): 307–331, arXiv:math/0508381, doi:10.1080/10586458.2006.10128964, MR 2264469, S2CID 9921359
19. O'Toole, P. I.; Hudson, T. S. (2011). "New High-Density Packings of Similarly Sized Binary Spheres". The Journal of Physical Chemistry C. 115 (39): 19037. doi:10.1021/jp206115p.
20. Hudson, D. R. (1949). "Density and Packing in an Aggregate of Mixed Spheres". Journal of Applied Physics. 20 (2): 154–162. Bibcode:1949JAP....20..154H. doi:10.1063/1.1698327.
21. Zong, C. (2002). "From deep holes to free planes". Bulletin of the American Mathematical Society. 39 (4): 533–555. doi:10.1090/S0273-0979-02-00950-3.
22. Marshall, G. W.; Hudson, T. S. (2010). "Dense binary sphere packings". Contributions to Algebra and Geometry. 51 (2): 337–344.
23. de Laat, David; de Oliveira Filho, Fernando Mário; Vallentin, Frank (12 June 2012). "Upper bounds for packings of spheres of several radii". Forum of Mathematics, Sigma. 2. arXiv:1206.2608. doi:10.1017/fms.2014.24. S2CID 11082628.
24. Bowen, L.; Radin, C. (2002). "Densest Packing of Equal Spheres in Hyperbolic Space". Discrete and Computational Geometry. 29: 23–39. doi:10.1007/s00454-002-2791-7.
25. Böröczky, K. (1978). "Packing of spheres in spaces of constant curvature". Acta Mathematica Academiae Scientiarum Hungaricae. 32 (3–4): 243–261. doi:10.1007/BF01902361. S2CID 122561092.
26. Böröczky, K.; Florian, A. (1964). "Über die dichteste Kugelpackung im hyperbolischen Raum". Acta Mathematica Academiae Scientiarum Hungaricae. 15 (1–2): 237–245. doi:10.1007/BF01897041. S2CID 122081239.
27. Kozma, R. T.; Szirmai, J. (2012). "Optimally dense packings for fully asymptotic Coxeter tilings by horoballs of different types". Monatshefte für Mathematik. 168: 27–47. arXiv:1007.0722. doi:10.1007/s00605-012-0393-x. S2CID 119713174.
28. Bezdek, Karoly; Reid, Samuel (2013). "Contact Graphs of Sphere Packings Revisited". Journal of Geometry. 104 (1): 57–83. arXiv:1210.5756. doi:10.1007/s00022-013-0156-4. S2CID 14428585.
29. "The Science of Sticky Spheres". American Scientist. 6 February 2017. Retrieved 14 July 2020.
30. Conway, John H.; Sloane, Neil J. A. (1998). Sphere Packings, Lattices and Groups (3rd ed.). Springer Science & Business Media. ISBN 0-387-98585-9.
Bibliography
• Aste, T.; Weaire, D. (2000). The Pursuit of Perfect Packing. London: Institute of Physics Publishing. ISBN 0-7503-0648-3.
• Conway, J. H.; Sloane, N. J. H. (1998). Sphere Packings, Lattices and Groups (3rd ed.). ISBN 0-387-98585-9.
• Sloane, N. J. A. (1984). "The Packing of Spheres". Scientific American. 250: 116–125. Bibcode:1984SciAm.250e.116G. doi:10.1038/scientificamerican0584-116.
External links
• Dana Mackenzie (May 2002) "A fine mess" (New Scientist)
A non-technical overview of packing in hyperbolic space.
• Weisstein, Eric W. "Circle Packing". MathWorld.
• "Kugelpackungen (Sphere Packing)" (T. E. Dorozinski)
• "3D Sphere Packing Applet" Archived 26 April 2009 at the Wayback Machine Sphere Packing java applet
• "Densest Packing of spheres into a sphere" java applet
• "Database of sphere packings" (Erik Agrell)
Sphere packing in a cube
In geometry, sphere packing in a cube is a three-dimensional sphere packing problem with the objective of packing spheres inside a cube. It is the three-dimensional equivalent of the circle packing in a square problem in two dimensions. The problem consists of determining the optimal packing of a given number of spheres inside the cube.
See also
• Packing problem
• Sphere packing in a cylinder
References
• Gensane, T. (2004). "Dense Packings of Equal Spheres in a Cube". The Electronic Journal of Combinatorics. 11 (1). ISSN 1077-8926.
Sphere spectrum
In stable homotopy theory, a branch of mathematics, the sphere spectrum S is the monoidal unit in the category of spectra. It is the suspension spectrum of S0, i.e., a set of two points. Explicitly, the nth space in the sphere spectrum is the n-dimensional sphere Sn, and the structure maps from the suspension of Sn to Sn+1 are the canonical homeomorphisms. The k-th homotopy group of a sphere spectrum is the k-th stable homotopy group of spheres.
The localization of the sphere spectrum at a prime number p is called the local sphere at p and is denoted by $S_{(p)}$.
See also
• Chromatic homotopy theory
• Adams–Novikov spectral sequence
• Framed cobordism
References
• Adams, J. Frank (1974), Stable homotopy and generalised homology, Chicago Lectures in Mathematics, University of Chicago Press, MR 0402720
Sphere theorem
In Riemannian geometry, the sphere theorem, also known as the quarter-pinched sphere theorem, strongly restricts the topology of manifolds admitting metrics with a particular curvature bound. The precise statement of the theorem is as follows. If M is a complete, simply-connected, n-dimensional Riemannian manifold with sectional curvature taking values in the interval $(1,4]$ then M is homeomorphic to the n-sphere. (To be precise, we mean the sectional curvature of every tangent 2-plane at each point must lie in $(1,4]$.) Another way of stating the result is that if M is not homeomorphic to the sphere, then it is impossible to put a metric on M with quarter-pinched curvature.
Note that the conclusion is false if the sectional curvatures are allowed to take values in the closed interval $[1,4]$. The standard counterexample is complex projective space with the Fubini–Study metric; sectional curvatures of this metric take on values between 1 and 4, with endpoints included. Other counterexamples may be found among the rank one symmetric spaces.
Differentiable sphere theorem
The original proof of the sphere theorem did not conclude that M was necessarily diffeomorphic to the n-sphere. This complication arises because spheres in higher dimensions admit smooth structures that are not diffeomorphic to the standard one. (For more information, see the article on exotic spheres.) However, in 2007 Simon Brendle and Richard Schoen used Ricci flow to prove that under the above hypotheses, M is necessarily diffeomorphic to the n-sphere with its standard smooth structure. Moreover, the proof of Brendle and Schoen uses only the weaker assumption of pointwise rather than global pinching. This result is known as the differentiable sphere theorem.
History of the sphere theorem
Heinz Hopf conjectured that a simply connected manifold with pinched sectional curvature is a sphere. In 1951, Harry Rauch showed that a simply connected manifold with curvature in [3/4,1] is homeomorphic to a sphere. In 1960, Marcel Berger and Wilhelm Klingenberg proved the topological version of the sphere theorem with the optimal pinching constant.
References
• Brendle, Simon (2010). Ricci Flow and the Sphere Theorem. Graduate Studies in Mathematics. Vol. 111. Providence, RI: American Mathematical Society. doi:10.1090/gsm/111. ISBN 0-8218-4938-7. MR 2583938.
• Brendle, Simon; Schoen, Richard (2009). "Manifolds with 1/4-pinched curvature are space forms". Journal of the American Mathematical Society. 22 (1): 287–307. arXiv:0705.0766. Bibcode:2009JAMS...22..287B. doi:10.1090/s0894-0347-08-00613-9. MR 2449060.
• Brendle, Simon; Schoen, Richard (2011). "Curvature, Sphere Theorems, and the Ricci Flow". Bulletin of the American Mathematical Society. 48 (1): 1–32. arXiv:1001.2278. doi:10.1090/s0273-0979-2010-01312-4. MR 2738904.
Sphere–cylinder intersection
In the theory of analytic geometry for real three-dimensional space, the curve formed from the intersection between a sphere and a cylinder can be a circle, a point, the empty set, or a special type of curve.
For the analysis of this situation, assume (without loss of generality) that the axis of the cylinder coincides with the z-axis; points on the cylinder (with radius $r$) satisfy
$x^{2}+y^{2}=r^{2}.$
We also assume that the sphere, with radius $R$ is centered at a point on the positive x-axis, at point $(a,0,0)$. Its points satisfy
$(x-a)^{2}+y^{2}+z^{2}=R^{2}.$
The intersection is the collection of points satisfying both equations.
Trivial cases
Sphere lies entirely within cylinder
If $a+R<r$, the sphere lies entirely in the interior of the cylinder. The intersection is the empty set.
Sphere touches cylinder in one point
If the sphere is smaller than the cylinder ($R<r$) and $a+R=r$, the sphere lies in the interior of the cylinder except for one point. The intersection is the single point $(r,0,0)$.
Sphere centered on cylinder axis
If the center of the sphere lies on the axis of the cylinder, then $a=0$. In that case, provided $R>r$, the intersection consists of two circles of radius $r$. These circles lie in the planes
$z=\pm {\sqrt {R^{2}-r^{2}}};$
If $r=R$, the intersection is a single circle in the plane $z=0$; if $R<r$, the sphere lies inside the cylinder and the intersection is empty.
Non-trivial cases
Subtracting the two equations given above gives
$z^{2}+(r^{2}-R^{2}+a^{2})=2ax.$
Since $x$ is a quadratic function of $z$, the projection of the intersection onto the xz-plane is an arc of a parabola; it is only an arc because $-r\leq x\leq r$. The vertex of the parabola lies at point $(-b,0,0)$, where
$b={\frac {R^{2}-r^{2}-a^{2}}{2a}}.$
Intersection consists of two closed curves
If $R>r+a$, the condition $-r\leq x\leq r$ cuts the parabola into two arcs. In this case, the intersection of sphere and cylinder consists of two closed curves, which are mirror images of each other. Each projects onto the circle of radius $r$ in the xy-plane.
Each part of the intersection can be parametrized by an angle $\phi $:
$(x,y,z)=\left(r\cos \phi ,r\sin \phi ,\pm {\sqrt {2a(b+r\cos \phi )}}\right).$
The curves contain the following extreme points:
$\left(-r,0,\pm {\sqrt {R^{2}-(r+a)^{2}}}\right);\quad \left(0,\pm r,\pm {\sqrt {R^{2}-r^{2}-a^{2}}}\right);\quad \left(+r,0,\pm {\sqrt {R^{2}-(r-a)^{2}}}\right).$
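The parametrization above can be checked numerically by substituting it into both surface equations. The following sketch is illustrative only; the radii r = 2, a = 1, R = 3.5 are arbitrary values satisfying R > r + a:

```python
import math

# Arbitrary radii satisfying R > r + a (the two-closed-curve case);
# b is defined as in the text.
r, a, R = 2.0, 1.0, 3.5
b = (R**2 - r**2 - a**2) / (2 * a)

for k in range(360):
    phi = 2 * math.pi * k / 360
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(2 * a * (b + r * math.cos(phi)))  # the + branch of the curve
    # the point must lie on the cylinder x^2 + y^2 = r^2 ...
    assert abs(x**2 + y**2 - r**2) < 1e-9
    # ... and on the sphere (x - a)^2 + y^2 + z^2 = R^2
    assert abs((x - a)**2 + y**2 + z**2 - R**2) < 1e-9

print("parametrization lies on both surfaces")
```

Since R > r + a here, b > r and the square root is defined for every angle, so each branch is a full closed loop over the cylinder.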
Intersection is a single closed curve
If $R<r+a$, the intersection of sphere and cylinder consists of a single closed curve. It can be described by the same parameter equation as in the previous section, but the angle $\phi $ must be restricted to $-\phi _{0}<\phi <+\phi _{0}$, where $\cos \phi _{0}=-b/r$.
The curve contains the following extreme points:
$\left(-b,\pm {\sqrt {r^{2}-b^{2}}},0\right);\quad \left(0,\pm r,\pm {\sqrt {R^{2}-r^{2}-a^{2}}}\right);\quad \left(+r,0,\pm {\sqrt {R^{2}-(r-a)^{2}}}\right).$
Limiting case
In the case $R=r+a$, the cylinder and sphere are tangential to each other at point $(r,0,0)$. The intersection resembles a figure eight: it is a closed curve which intersects itself. The above parametrization becomes
$(x,y,z)=\left(r\cos \phi ,r\sin \phi ,2{\sqrt {ar}}\cos {\frac {\phi }{2}}\right),$
where $\phi $ now goes through two full revolutions.
In the special case $a=r,R=2r$, the intersection is known as Viviani's curve. Its parameter representation is
$(x,y,z)=\left(r\cos \phi ,r\sin \phi ,R\cos {\frac {\phi }{2}}\right).$
The volume of the intersection of the two bodies, sometimes called Viviani's volume, is[1] [2] [3]
$V=2\int \int _{(x-a)^{2}+y^{2}\leq r^{2},x^{2}+y^{2}\leq R^{2}}{\sqrt {R^{2}-x^{2}-y^{2}}}dxdy=\left({\frac {2\pi }{3}}-{\frac {8}{9}}\right)R^{3}.$
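This closed form can be compared against a direct numerical integration. The sketch below (illustrative; the grid resolution n is an arbitrary accuracy choice) evaluates the double integral by a midpoint Riemann sum with r = 1, R = 2r, and the sphere centered at the origin as in the integral above:

```python
import math

# Midpoint Riemann sum for Viviani's volume with a = r, R = 2r (here r = 1):
# integrate 2*sqrt(R^2 - x^2 - y^2) over the offset disk (x - a)^2 + y^2 <= r^2,
# intersected with x^2 + y^2 <= R^2, matching the integral above.
r, a = 1.0, 1.0
R = 2 * r
n = 600                      # grid resolution; an arbitrary accuracy choice
h = 2 * r / n                # the offset disk fits in [0, 2r] x [-r, r]
vol = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = -r + (j + 0.5) * h
        if (x - a)**2 + y**2 <= r**2 and x**2 + y**2 <= R**2:
            vol += 2 * math.sqrt(max(R**2 - x**2 - y**2, 0.0)) * h * h

exact = (2 * math.pi / 3 - 8 / 9) * R**3
print(vol, exact)            # the two values agree to about 1%
```

The residual discrepancy comes from cells straddling the circular boundary of the domain and shrinks as n grows.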
See also
• Viviani's curve
Further reading
• Graefe, Eva-Maria; Korsch, Hans J.; Strzys, Martin P. (2014). "Bose-Hubbard dimers, Viviani's windows and pendulum dynamics". J. Phys. A: Math. Theor. 47 (8): 085304. arXiv:1308.3569. Bibcode:2014JPhA...47h5304G. doi:10.1088/1751-8113/47/8/085304. S2CID 55754306.
References
1. Lamarche, F.; Leroy, Claude (1990). "Evaluation of the volume of intersection of a sphere with a cylinder by elliptic integrals". Comput. Phys. Commun. 59 (2): 359–369. Bibcode:1990CoPhC..59..359L. doi:10.1016/0010-4655(90)90184-3.
2. Viviani, V. (1692), Acta Eruditorum: 273–279
3. Woodhouse, Robert (1801). "VII. Demonstration of a theorem, by which such portions of the solidity of a sphere are assigned as admit an algebraic expression". Philos. Trans. R. Soc. Lond. 91: 153. doi:10.1098/rstl.1801.0009. S2CID 122654753.
Spherical 3-manifold
In mathematics, a spherical 3-manifold M is a 3-manifold of the form
$M=S^{3}/\Gamma $
where $\Gamma $ is a finite subgroup of SO(4) acting freely by rotations on the 3-sphere $S^{3}$. All such manifolds are prime, orientable, and closed. Spherical 3-manifolds are sometimes called elliptic 3-manifolds or Clifford-Klein manifolds.
Properties
A spherical 3-manifold $S^{3}/\Gamma $ has a finite fundamental group isomorphic to Γ itself. The elliptization conjecture, proved by Grigori Perelman, states that conversely all compact 3-manifolds with finite fundamental group are spherical manifolds.
The fundamental group is either cyclic, or is a central extension of a dihedral, tetrahedral, octahedral, or icosahedral group by a cyclic group of even order. This divides the set of such manifolds into 5 classes, described in the following sections.
The spherical manifolds are exactly the manifolds with spherical geometry, one of the 8 geometries of Thurston's geometrization conjecture.
Cyclic case (lens spaces)
The manifolds $S^{3}/\Gamma $ with Γ cyclic are precisely the 3-dimensional lens spaces. A lens space is not determined by its fundamental group (there are non-homeomorphic lens spaces with isomorphic fundamental groups); but any other spherical manifold is.
Three-dimensional lens spaces arise as quotients of $S^{3}\subset \mathbb {C} ^{2}$ by the action of the group that is generated by elements of the form
${\begin{pmatrix}\omega &0\\0&\omega ^{q}\end{pmatrix}}.$
where $\omega =e^{2\pi i/p}$. Such a lens space $L(p;q)$ has fundamental group $\mathbb {Z} /p\mathbb {Z} $ for all $q$, so spaces with different $p$ are not homotopy equivalent. Moreover, classifications up to homeomorphism and homotopy equivalence are known, as follows. The three-dimensional spaces $L(p;q_{1})$ and $L(p;q_{2})$ are:
1. homotopy equivalent if and only if $q_{1}q_{2}\equiv \pm n^{2}{\pmod {p}}$ for some $n\in \mathbb {N} $;
2. homeomorphic if and only if $q_{1}\equiv \pm q_{2}^{\pm 1}{\pmod {p}}.$
In particular, the lens spaces L(7,1) and L(7,2) give examples of two 3-manifolds that are homotopy equivalent but not homeomorphic.
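The two classification criteria can be checked mechanically for this example. A short illustrative script (the helper names are ad hoc, not standard terminology):

```python
# Illustrative check of the classification criteria for L(7,1) and L(7,2).
def homotopy_equivalent(p, q1, q2):
    # q1*q2 must be congruent to plus or minus a square mod p
    squares = {(n * n) % p for n in range(1, p)}
    return (q1 * q2) % p in squares or (-q1 * q2) % p in squares

def homeomorphic(p, q1, q2):
    # q2 must be congruent to +/- q1 or +/- q1^(-1) mod p
    candidates = {q1 % p, (-q1) % p}
    for inv in range(1, p):
        if (q1 * inv) % p == 1:
            candidates |= {inv, (-inv) % p}
    return q2 % p in candidates

print(homotopy_equivalent(7, 1, 2))  # True: 1*2 = 2 = 3^2 mod 7
print(homeomorphic(7, 1, 2))         # False: 2 is not congruent to +/-1 mod 7
```

By contrast, L(7,1) and L(7,6) pass the homeomorphism test, since 6 ≡ −1 (mod 7).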
The lens space L(1,0) is the 3-sphere, and the lens space L(2,1) is 3-dimensional real projective space.
Lens spaces can be represented as Seifert fiber spaces in many ways, usually as fiber spaces over the 2-sphere with at most two exceptional fibers, though the lens space with fundamental group of order 4 also has a representation as a Seifert fiber space over the projective plane with no exceptional fibers.
Dihedral case (prism manifolds)
A prism manifold is a closed 3-dimensional manifold M whose fundamental group is a central extension of a dihedral group.
The fundamental group π1(M) of M is a product of a cyclic group of order m with a group having presentation
$\langle x,y\mid xyx^{-1}=y^{-1},x^{2^{k}}=y^{n}\rangle $
for integers k, m, n with k ≥ 1, m ≥ 1, n ≥ 2 and m coprime to 2n.
Alternatively, the fundamental group has presentation
$\langle x,y\mid xyx^{-1}=y^{-1},x^{2m}=y^{n}\rangle $
for coprime integers m, n with m ≥ 1, n ≥ 2. (The n here equals the previous n, and the m here is $2^{k-1}$ times the previous m.)
We continue with the latter presentation. This group is a metacyclic group of order 4mn with abelianization of order 4m (so m and n are both determined by this group). The element y generates a cyclic normal subgroup of order 2n, and the element x has order 4m. The center is cyclic of order 2m and is generated by x², and the quotient by the center is the dihedral group of order 2n.
When m = 1 this group is a binary dihedral or dicyclic group. The simplest example is m = 1, n = 2, when π1(M) is the quaternion group of order 8.
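That the m = 1, n = 2 presentation yields a group of order 8 can be illustrated by realizing x and y as the quaternion units j and i, written as 2×2 complex matrices, and enumerating the group they generate; this is a sketch of the special case, not a proof about the presentation in general:

```python
# Realize x and y as the quaternion units j and i (2x2 complex matrices)
# and enumerate the group they generate by closing under products.
# (An illustrative check that the group has order 4mn = 8 for m = 1, n = 2.)
X = ((0, 1), (-1, 0))        # the quaternion j
Y = ((1j, 0), (0, -1j))      # the quaternion i

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

assert mul(X, X) == mul(Y, Y)          # the relation x^2 = y^2

group = {X, Y}
while True:
    new = {mul(A, B) for A in group for B in group} - group
    if not new:
        break
    group |= new

print(len(group))  # 8: the quaternion group
```

The eight elements found are ±1, ±i, ±j, ±k in matrix form.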
Prism manifolds are uniquely determined by their fundamental groups: if a closed 3-manifold has the same fundamental group as a prism manifold M, it is homeomorphic to M.
Prism manifolds can be represented as Seifert fiber spaces in two ways.
Tetrahedral case
The fundamental group is a product of a cyclic group of order m with a group having presentation
$\langle x,y,z\mid (xy)^{2}=x^{2}=y^{2},zxz^{-1}=y,zyz^{-1}=xy,z^{3^{k}}=1\rangle $
for integers k, m with k ≥ 1, m ≥ 1 and m coprime to 6.
Alternatively, the fundamental group has presentation
$\langle x,y,z\mid (xy)^{2}=x^{2}=y^{2},zxz^{-1}=y,zyz^{-1}=xy,z^{3m}=1\rangle $
for an odd integer m ≥ 1. (The m here is $3^{k-1}$ times the previous m.)
We continue with the latter presentation. This group has order 24m. The elements x and y generate a normal subgroup isomorphic to the quaternion group of order 8. The center is cyclic of order 2m. It is generated by the elements z³ and x² = y², and the quotient by the center is the tetrahedral group, equivalently, the alternating group A4.
When m = 1 this group is the binary tetrahedral group.
These manifolds are uniquely determined by their fundamental groups. They can all be represented in an essentially unique way as Seifert fiber spaces: the quotient manifold is a sphere and there are 3 exceptional fibers of orders 2, 3, and 3.
Octahedral case
The fundamental group is a product of a cyclic group of order m coprime to 6 with the binary octahedral group (of order 48) which has the presentation
$\langle x,y\mid (xy)^{2}=x^{3}=y^{4}\rangle .$
These manifolds are uniquely determined by their fundamental groups. They can all be represented in an essentially unique way as Seifert fiber spaces: the quotient manifold is a sphere and there are 3 exceptional fibers of orders 2, 3, and 4.
Icosahedral case
The fundamental group is a product of a cyclic group of order m coprime to 30 with the binary icosahedral group (order 120) which has the presentation
$\langle x,y\mid (xy)^{2}=x^{3}=y^{5}\rangle .$
When m is 1, the manifold is the Poincaré homology sphere.
These manifolds are uniquely determined by their fundamental groups. They can all be represented in an essentially unique way as Seifert fiber spaces: the quotient manifold is a sphere and there are 3 exceptional fibers of orders 2, 3, and 5.
References
• Peter Orlik, Seifert manifolds, Lecture Notes in Mathematics, vol. 291, Springer-Verlag (1972). ISBN 0-387-06014-6
• William Jaco, Lectures on 3-manifold topology ISBN 0-8218-1693-4
• William Thurston, Three-dimensional geometry and topology. Vol. 1. Edited by Silvio Levy. Princeton Mathematical Series, 35. Princeton University Press, Princeton, New Jersey, 1997. ISBN 0-691-08304-5
Spherical Bernstein's problem
The spherical Bernstein's problem is a possible generalization of the original Bernstein's problem in the field of global differential geometry. It was first proposed by Shiing-Shen Chern in 1969, and again in 1970 during his plenary address at the International Congress of Mathematicians in Nice.
The problem
Are the equators in $\mathbb {S} ^{n+1}$ the only smooth embedded minimal hypersurfaces which are topological $n$-dimensional spheres?
The spherical Bernstein's problem, while itself a generalization of the original Bernstein's problem, can be generalized further by replacing the ambient space $\mathbb {S} ^{n+1}$ by a simply-connected, compact symmetric space. Some results in this direction are due to the work of Wu-Chung Hsiang and Wu-Yi Hsiang.
Alternative formulations
Below are two alternative ways to express the problem:
The second formulation
Let the (n − 1)-sphere be embedded as a minimal hypersurface in $S^{n}$(1). Is it necessarily an equator?
By the Almgren–Calabi theorem, it's true when n = 3 (or n = 2 for the 1st formulation).
Wu-Chung Hsiang proved it for n ∈ {4, 5, 6, 7, 8, 10, 12, 14} (or n ∈ {3, 4, 5, 6, 7, 9, 11, 13}, respectively).
In 1987, Per Tomter proved it for all even n (or all odd n, respectively).
Thus, it remains unknown only for odd n ≥ 9 (or even n ≥ 8, respectively).
The third formulation
Is it true that an embedded, minimal hypersphere inside the Euclidean $n$-sphere is necessarily an equator?
Geometrically, the problem is analogous to the following problem:
Is the local topology at an isolated singular point of a minimal hypersurface necessarily different from that of a disc?
For example, the affirmative answer for spherical Bernstein problem when n = 3 is equivalent to the fact that the local topology at an isolated singular point of any minimal hypersurface in an arbitrary Riemannian 4-manifold must be different from that of a disc.
Further reading
• F.J. Almgren, Jr., Some interior regularity theorems for minimal surfaces and an extension of the Bernstein's theorem, Annals of Mathematics, volume 85, number 1 (1966), pp. 277–292
• E. Calabi, Minimal immersions of surfaces in euclidean spaces, Journal of Differential Geometry, volume 1 (1967), pp. 111–125
• P. Tomter, The spherical Bernstein problem in even dimensions and related problems, Acta Mathematica, volume 158 (1987), pp. 189–212
• S.S. Chern, Brief survey of minimal submanifolds, Tagungsbericht (1969), Mathematisches Forschungsinstitut Oberwolfach
• S.S. Chern, Differential geometry, its past and its future, Actes du Congrès international des mathématiciens (Nice, 1970), volume 1, pp. 41–53, Gauthier-Villars, (1971)
• W.Y. Hsiang, W.T. Hsiang, P. Tomter, On the existence of minimal hyperspheres in compact symmetric spaces, Annales Scientifiques de l'École Normale Supérieure, volume 21 (1988), pp. 287–305
Spherical trigonometry
Spherical trigonometry is the branch of spherical geometry that deals with the metrical relationships between the sides and angles of spherical triangles, traditionally expressed using trigonometric functions. On the sphere, geodesics are great circles. Spherical trigonometry is of great importance for calculations in astronomy, geodesy, and navigation.
The origins of spherical trigonometry in Greek mathematics and the major developments in Islamic mathematics are discussed fully in History of trigonometry and Mathematics in medieval Islam. The subject came to fruition in Early Modern times with important developments by John Napier, Delambre and others, and attained an essentially complete form by the end of the nineteenth century with the publication of Todhunter's textbook Spherical Trigonometry for the Use of Colleges and Schools.[1] Since then, significant developments have been the application of vector methods, quaternion methods, and the use of numerical methods.
Preliminaries
Spherical polygons
A spherical polygon is a polygon on the surface of the sphere. Its sides are arcs of great circles—the spherical geometry equivalent of line segments in plane geometry.
Such polygons may have any number of sides greater than 1. Two-sided spherical polygons—lunes, also called digons or bi-angles—are bounded by two great-circle arcs: a familiar example is the curved outward-facing surface of a segment of an orange. Three arcs serve to define a spherical triangle, the principal subject of this article. Polygons with higher numbers of sides (4-sided spherical quadrilaterals, 5-sided spherical pentagons, etc.) are defined in similar manner. Analogously to their plane counterparts, spherical polygons with more than 3 sides can always be treated as the composition of spherical triangles.
One spherical polygon with interesting properties is the pentagramma mirificum, a 5-sided spherical star polygon with a right angle at every vertex.
From this point in the article, discussion will be restricted to spherical triangles, referred to simply as triangles.
Notation
• Both vertices and angles at the vertices of a triangle are denoted by the same upper case letters: A, B, and C.
• Sides are denoted by lower-case letters: a, b, and c.
• The angle A (respectively, B, C) may be regarded either as the angle between the two planes that intersect the sphere at the vertex A, or, equivalently, as the angle between the tangents of the great circle arcs where they meet at the vertex.
• Angles are expressed in radians. The angles of proper spherical triangles are (by convention) less than π, so that π < A + B + C < 3π (Todhunter,[1] Art.22,32). In particular, the sum of the angles of a spherical triangle is strictly greater than the sum of the angles of a triangle defined on the Euclidean plane, which is always exactly π radians.
• Sides are also expressed in radians. A side (regarded as a great circle arc) is measured by the angle that it subtends at the centre. On the unit sphere, this radian measure is numerically equal to the arc length. By convention, the sides of proper spherical triangles are less than π, so that 0 < a + b + c < 2π (Todhunter,[1] Art.22,32).
• The sphere's radius is taken as unity. For specific practical problems on a sphere of radius R the measured lengths of the sides must be divided by R before using the identities given below. Likewise, after a calculation on the unit sphere the sides a, b, c must be multiplied by R.
Polar triangles
The polar triangle associated with a triangle ABC is defined as follows. Consider the great circle that contains the side BC. This great circle is defined by the intersection of a diametral plane with the surface. Draw the normal to that plane at the centre: it intersects the surface at two points and the point that is on the same side of the plane as A is (conventionally) termed the pole of A and it is denoted by A′. The points B′ and C′ are defined similarly.
The triangle A′B′C′ is the polar triangle corresponding to triangle ABC. A very important theorem (Todhunter,[1] Art.27) proves that the angles and sides of the polar triangle are given by
${\begin{alignedat}{3}A'&=\pi -a,&\qquad B'&=\pi -b,&\qquad C'&=\pi -c,\\a'&=\pi -A,&b'&=\pi -B,&c'&=\pi -C.\end{alignedat}}$
Therefore, if any identity is proved for the triangle ABC then we can immediately derive a second identity by applying the first identity to the polar triangle by making the above substitutions. This is how the supplemental cosine equations are derived from the cosine equations. Similarly, the identities for a quadrantal triangle can be derived from those for a right-angled triangle. The polar triangle of a polar triangle is the original triangle.
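The polar-triangle relations can be verified numerically by computing the poles as normalized cross products, oriented toward the opposite vertex as in the definition above. A sketch with three arbitrarily chosen unit vectors:

```python
import math

def dot(u, v): return sum(p * q for p, q in zip(u, v))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def angle(P, Q, R):
    # vertex angle at P, computed from the spherical cosine rule
    a_ = math.acos(dot(Q, R)); b_ = math.acos(dot(P, R)); c_ = math.acos(dot(P, Q))
    return math.acos((math.cos(a_) - math.cos(b_)*math.cos(c_))
                     / (math.sin(b_)*math.sin(c_)))

def pole(P, Q, same_side_as):
    # pole of the great circle through P and Q, on the given side
    n = normalize(cross(P, Q))
    return n if dot(n, same_side_as) > 0 else tuple(-c for c in n)

# three arbitrarily chosen unit vectors as vertices
A = normalize((1.0, 0.2, 0.1))
B = normalize((0.1, 1.0, 0.3))
C = normalize((0.2, 0.1, 1.0))
A1, B1, C1 = pole(B, C, A), pole(C, A, B), pole(A, B, C)

a = math.acos(dot(B, C))           # side a of ABC
A_ang = angle(A, B, C)             # angle A of ABC
a1 = math.acos(dot(B1, C1))        # side a' of the polar triangle
A1_ang = angle(A1, B1, C1)         # angle A' of the polar triangle

assert abs(A1_ang - (math.pi - a)) < 1e-9
assert abs(a1 - (math.pi - A_ang)) < 1e-9
print("A' = pi - a and a' = pi - A verified")
```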
Cosine rules and sine rules
Cosine rules
Main article: Spherical law of cosines
The cosine rule is the fundamental identity of spherical trigonometry: all other identities, including the sine rule, may be derived from the cosine rule:
$\cos a=\cos b\cos c+\sin b\sin c\cos A,\!$
$\cos b=\cos c\cos a+\sin c\sin a\cos B,\!$
$\cos c=\cos a\cos b+\sin a\sin b\cos C,\!$
These identities generalize the cosine rule of plane trigonometry, to which they are asymptotically equivalent in the limit of small sides. (On the unit sphere, as $a,b,c\rightarrow 0$, set $\sin a\approx a$ and $\cos a\approx 1-a^{2}/2$, etc.; see Spherical law of cosines.)
Sine rules
Main article: Spherical law of sines
The spherical law of sines is given by the formula
${\frac {\sin A}{\sin a}}={\frac {\sin B}{\sin b}}={\frac {\sin C}{\sin c}}.$
These identities approximate the sine rule of plane trigonometry when the sides are much smaller than the radius of the sphere.
Derivation of the cosine rule
Main article: Spherical law of cosines
The spherical cosine formulae were originally proved by elementary geometry and the planar cosine rule (Todhunter,[1] Art.37). He also gives a derivation using simple coordinate geometry and the planar cosine rule (Art.60). The approach outlined here uses simpler vector methods. (These methods are also discussed at Spherical law of cosines.)
Consider three unit vectors OA, OB and OC drawn from the origin to the vertices of the triangle (on the unit sphere). The arc BC subtends an angle of magnitude a at the centre and therefore OB·OC = cos a. Introduce a Cartesian basis with OA along the z-axis and OB in the xz-plane making an angle c with the z-axis. The vector OC projects to ON in the xy-plane and the angle between ON and the x-axis is A. Therefore, the three vectors have components:
OA $=(0,\,0,\,1)$, OB $=(\sin c,\,0,\,\cos c)$, OC $=(\sin b\cos A,\,\sin b\sin A,\,\cos b)$.
The scalar product OB·OC in terms of the components is
OB·OC${}=\sin c\sin b\cos A+\cos c\cos b$.
Equating the two expressions for the scalar product gives
$\cos a=\cos b\cos c+\sin b\sin c\cos A.$
This equation can be re-arranged to give explicit expressions for the angle in terms of the sides:
$\cos A={\frac {\cos a-\cos b\cos c}{\sin b\sin c}}.$
The other cosine rules are obtained by cyclic permutations.
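The vector derivation can be spot-checked numerically: compute the vertex angle A directly from tangent vectors and compare both sides of the cosine rule. An illustrative sketch, with A placed at the pole for convenience:

```python
import math

def dot(u, v): return sum(p * q for p, q in zip(u, v))
def norm(v): return math.sqrt(dot(v, v))

def vertex_angle(P, Q, R):
    # angle at P between the tangents of the arcs PQ and PR
    tq = [Q[i] - dot(P, Q) * P[i] for i in range(3)]
    tr = [R[i] - dot(P, R) * P[i] for i in range(3)]
    return math.acos(dot(tq, tr) / (norm(tq) * norm(tr)))

# an arbitrary spherical triangle: b = 0.7, c = 1.1, A = 2.0 by construction
A = (0.0, 0.0, 1.0)
B = (math.sin(1.1), 0.0, math.cos(1.1))
C = (math.sin(0.7) * math.cos(2.0), math.sin(0.7) * math.sin(2.0), math.cos(0.7))

a = math.acos(dot(B, C))
b = math.acos(dot(A, C))
c = math.acos(dot(A, B))
Aang = vertex_angle(A, B, C)

lhs = math.cos(a)
rhs = math.cos(b) * math.cos(c) + math.sin(b) * math.sin(c) * math.cos(Aang)
assert abs(lhs - rhs) < 1e-12
print("spherical cosine rule verified")
```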
Derivation of the sine rule
Main article: Spherical law of sines
This derivation is given in Todhunter,[1] (Art.40). From the identity $\sin ^{2}A=1-\cos ^{2}A$ and the explicit expression for $\cos A$ given immediately above
${\begin{aligned}\sin ^{2}A&=1-\left({\frac {\cos a-\cos b\cos c}{\sin b\sin c}}\right)^{2}\\[5pt]&={\frac {(1-\cos ^{2}b)(1-\cos ^{2}c)-(\cos a-\cos b\cos c)^{2}}{\sin ^{2}\!b\,\sin ^{2}\!c}}\\[5pt]{\frac {\sin A}{\sin a}}&={\frac {[1-\cos ^{2}\!a-\cos ^{2}\!b-\cos ^{2}\!c+2\cos a\cos b\cos c]^{1/2}}{\sin a\sin b\sin c}}.\end{aligned}}$
Since the right hand side is invariant under a cyclic permutation of $a,\;b,\;c$ the spherical sine rule follows immediately.
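Numerically, the three ratios can be computed for an arbitrary triangle with sides a, b, c, deriving each angle from the cosine rule; they agree to machine precision. An illustrative sketch:

```python
import math

def angle(a, b, c):
    # angle opposite side a, from the spherical cosine rule
    return math.acos((math.cos(a) - math.cos(b) * math.cos(c))
                     / (math.sin(b) * math.sin(c)))

# arbitrary sides of a valid spherical triangle
a, b, c = 1.2, 0.9, 0.7
A, B, C = angle(a, b, c), angle(b, c, a), angle(c, a, b)

r1 = math.sin(A) / math.sin(a)
r2 = math.sin(B) / math.sin(b)
r3 = math.sin(C) / math.sin(c)
print(r1, r2, r3)  # the three ratios agree
assert abs(r1 - r2) < 1e-12 and abs(r2 - r3) < 1e-12
```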
Alternative derivations
There are many ways of deriving the fundamental cosine and sine rules and the other rules developed in the following sections. For example, Todhunter[1] gives two proofs of the cosine rule (Articles 37 and 60) and two proofs of the sine rule (Articles 40 and 42). The page on Spherical law of cosines gives four different proofs of the cosine rule. Text books on geodesy[2] and spherical astronomy[3] give different proofs and the online resources of MathWorld provide yet more.[4] There are even more exotic derivations, such as that of Banerjee[5] who derives the formulae using the linear algebra of projection matrices and also quotes methods in differential geometry and the group theory of rotations.
The derivation of the cosine rule presented above has the merits of simplicity and directness and the derivation of the sine rule emphasises the fact that no separate proof is required other than the cosine rule. However, the above geometry may be used to give an independent proof of the sine rule. The scalar triple product, OA·(OB × OC) evaluates to $\sin b\sin c\sin A$ in the basis shown. Similarly, in a basis oriented with the z-axis along OB, the triple product OB·(OC × OA) evaluates to $\sin c\sin a\sin B$. Therefore, the invariance of the triple product under cyclic permutations gives $\sin b\sin A=\sin a\sin B$ which is the first of the sine rules. See curved variations of the law of sines for details of this derivation.
Identities
Supplemental cosine rules
Applying the cosine rules to the polar triangle gives (Todhunter,[1] Art.47), i.e. replacing A by π – a, a by π – A etc.,
${\begin{aligned}\cos A&=-\cos B\,\cos C+\sin B\,\sin C\,\cos a,\\\cos B&=-\cos C\,\cos A+\sin C\,\sin A\,\cos b,\\\cos C&=-\cos A\,\cos B+\sin A\,\sin B\,\cos c.\end{aligned}}$
Cotangent four-part formulae
The six parts of a triangle may be written in cyclic order as (aCbAcB). The cotangent, or four-part, formulae relate two sides and two angles forming four consecutive parts around the triangle, for example (aCbA) or (BaCb). In such a set there are inner and outer parts: for example in the set (BaCb) the inner angle is C, the inner side is a, the outer angle is B, the outer side is b. The cotangent rule may be written as (Todhunter,[1] Art.44)
$\cos({\mathsf {inner}}\ {\mathsf {side}})\cos({\mathsf {inner}}\ {\mathsf {angle}})=\cot({\mathsf {outer}}\ {\mathsf {side}})\sin({\mathsf {inner}}\ {\mathsf {side}})-\cot({\mathsf {outer}}\ {\mathsf {angle}})\sin({\mathsf {inner}}\ {\mathsf {angle}}),$
and the six possible equations are (with the relevant set shown at right):
${\begin{array}{lll}{\text{(CT1)}}\quad &\cos b\,\cos C=\cot a\,\sin b-\cot A\,\sin C,\qquad &(aCbA)\\[0ex]{\text{(CT2)}}&\cos b\,\cos A=\cot c\,\sin b-\cot C\,\sin A,&(CbAc)\\[0ex]{\text{(CT3)}}&\cos c\,\cos A=\cot b\,\sin c-\cot B\,\sin A,&(bAcB)\\[0ex]{\text{(CT4)}}&\cos c\,\cos B=\cot a\,\sin c-\cot A\,\sin B,&(AcBa)\\[0ex]{\text{(CT5)}}&\cos a\,\cos B=\cot c\,\sin a-\cot C\,\sin B,&(cBaC)\\[0ex]{\text{(CT6)}}&\cos a\,\cos C=\cot b\,\sin a-\cot B\,\sin C,&(BaCb).\end{array}}$
To prove the first formula start from the first cosine rule and on the right-hand side substitute for $\cos c$ from the third cosine rule:
${\begin{aligned}\cos a&=\cos b\cos c+\sin b\sin c\cos A\\&=\cos b\ (\cos a\cos b+\sin a\sin b\cos C)+\sin b\sin C\sin a\cot A\\\cos a\sin ^{2}b&=\cos b\sin a\sin b\cos C+\sin b\sin C\sin a\cot A.\end{aligned}}$
The result follows on dividing by $\sin a\sin b$. Similar techniques with the other two cosine rules give CT3 and CT5. The other three equations follow by applying rules 1, 3 and 5 to the polar triangle.
Half-angle and half-side formulae
With $2s=(a+b+c)$ and $2S=(A+B+C),$
${\begin{aligned}&\sin {\tfrac {1}{2}}A=\left[{\frac {\sin(s-b)\sin(s-c)}{\sin b\sin c}}\right]^{1/2}&\qquad &\sin {\tfrac {1}{2}}a=\left[{\frac {-\cos S\cos(S-A)}{\sin B\sin C}}\right]^{1/2}\\[2ex]&\cos {\tfrac {1}{2}}A=\left[{\frac {\sin s\sin(s-a)}{\sin b\sin c}}\right]^{1/2}&\qquad &\cos {\tfrac {1}{2}}a=\left[{\frac {\cos(S-B)\cos(S-C)}{\sin B\sin C}}\right]^{1/2}\\[2ex]&\tan {\tfrac {1}{2}}A=\left[{\frac {\sin(s-b)\sin(s-c)}{\sin s\sin(s-a)}}\right]^{1/2}&\qquad &\tan {\tfrac {1}{2}}a=\left[{\frac {-\cos S\cos(S-A)}{\cos(S-B)\cos(S-C)}}\right]^{1/2}\end{aligned}}$
Another twelve identities follow by cyclic permutation.
The proof (Todhunter,[1] Art.49) of the first formula starts from the identity 2sin²(A/2) = 1 − cos A, using the cosine rule to express A in terms of the sides and replacing the sum of two cosines by a product. (See sum-to-product identities.) The second formula starts from the identity 2cos²(A/2) = 1 + cos A, the third is a quotient and the remainder follow by applying the results to the polar triangle.
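The first half-angle formula can be spot-checked numerically for an arbitrary triangle (the sides below are arbitrary choices):

```python
import math

# arbitrary sides; A from the cosine rule, s = (a + b + c)/2 as in the text
a, b, c = 1.2, 0.9, 0.7
s = (a + b + c) / 2
A = math.acos((math.cos(a) - math.cos(b) * math.cos(c))
              / (math.sin(b) * math.sin(c)))

lhs = math.sin(A / 2)
rhs = math.sqrt(math.sin(s - b) * math.sin(s - c) / (math.sin(b) * math.sin(c)))
assert abs(lhs - rhs) < 1e-12
print("sin(A/2) half-angle formula verified")
```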
Delambre analogies
The Delambre analogies (also called Gauss analogies) were published independently by Delambre, Gauss, and Mollweide in 1807–1809.[6]
${\begin{aligned}{\frac {\sin {\tfrac {1}{2}}(A+B)}{\cos {\tfrac {1}{2}}C}}={\frac {\cos {\tfrac {1}{2}}(a-b)}{\cos {\tfrac {1}{2}}c}}&\qquad \qquad &{\frac {\sin {\tfrac {1}{2}}(A-B)}{\cos {\tfrac {1}{2}}C}}={\frac {\sin {\tfrac {1}{2}}(a-b)}{\sin {\tfrac {1}{2}}c}}\\[2ex]{\frac {\cos {\tfrac {1}{2}}(A+B)}{\sin {\tfrac {1}{2}}C}}={\frac {\cos {\tfrac {1}{2}}(a+b)}{\cos {\tfrac {1}{2}}c}}&\qquad &{\frac {\cos {\tfrac {1}{2}}(A-B)}{\sin {\tfrac {1}{2}}C}}={\frac {\sin {\tfrac {1}{2}}(a+b)}{\sin {\tfrac {1}{2}}c}}\end{aligned}}$
Another eight identities follow by cyclic permutation.
Proved by expanding the numerators and using the half angle formulae. (Todhunter,[1] Art.54 and Delambre[7])
Napier's analogies
${\begin{aligned}&&\\[-2ex]\displaystyle {\tan {\tfrac {1}{2}}(A+B)}={\frac {\cos {\tfrac {1}{2}}(a-b)}{\cos {\tfrac {1}{2}}(a+b)}}\cot {{\tfrac {1}{2}}C}&\qquad &{\tan {\tfrac {1}{2}}(a+b)}={\frac {\cos {\tfrac {1}{2}}(A-B)}{\cos {\tfrac {1}{2}}(A+B)}}\tan {{\tfrac {1}{2}}c}\\[2ex]{\tan {\tfrac {1}{2}}(A-B)}={\frac {\sin {\tfrac {1}{2}}(a-b)}{\sin {\tfrac {1}{2}}(a+b)}}\cot {{\tfrac {1}{2}}C}&\qquad &{\tan {\tfrac {1}{2}}(a-b)}={\frac {\sin {\tfrac {1}{2}}(A-B)}{\sin {\tfrac {1}{2}}(A+B)}}\tan {{\tfrac {1}{2}}c}\end{aligned}}$
Another eight identities follow by cyclic permutation.
These identities follow by division of the Delambre formulae. (Todhunter,[1] Art.52)
Taking quotients of these yields the law of tangents, first stated by Persian mathematician Nasir al-Din al-Tusi (1201–1274),
${\frac {\tan {\tfrac {1}{2}}(A-B)}{\tan {\tfrac {1}{2}}(A+B)}}={\frac {\tan {\tfrac {1}{2}}(a-b)}{\tan {\tfrac {1}{2}}(a+b)}}$
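Both the first Napier analogy and the law of tangents can be spot-checked numerically (arbitrary sides, each angle derived from the cosine rule):

```python
import math

def angle(a, b, c):
    # angle opposite side a, from the spherical cosine rule
    return math.acos((math.cos(a) - math.cos(b) * math.cos(c))
                     / (math.sin(b) * math.sin(c)))

a, b, c = 1.2, 0.9, 0.7          # arbitrary sides
A, B, C = angle(a, b, c), angle(b, c, a), angle(c, a, b)

# first Napier analogy: tan((A+B)/2) = cos((a-b)/2)/cos((a+b)/2) * cot(C/2)
lhs1 = math.tan((A + B) / 2)
rhs1 = math.cos((a - b) / 2) / math.cos((a + b) / 2) / math.tan(C / 2)
assert abs(lhs1 - rhs1) < 1e-9

# law of tangents
lhs2 = math.tan((A - B) / 2) / math.tan((A + B) / 2)
rhs2 = math.tan((a - b) / 2) / math.tan((a + b) / 2)
assert abs(lhs2 - rhs2) < 1e-9
print("Napier analogy and law of tangents verified")
```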
Napier's rules for right spherical triangles
When one of the angles, say C, of a spherical triangle is equal to π/2 the various identities given above are considerably simplified. There are ten identities relating three elements chosen from the set a, b, c, A, B.
Napier[8] provided an elegant mnemonic aid for the ten independent equations: the mnemonic is called Napier's circle or Napier's pentagon (when the circle in the above figure, right, is replaced by a pentagon).
First, write the six parts of the triangle (three vertex angles, three arc angles for the sides) in the order they occur around any circuit of the triangle: for the triangle shown above left, going clockwise starting with a gives aCbAcB. Next replace the parts that are not adjacent to C (that is A, c, B) by their complements and then delete the angle C from the list. The remaining parts can then be drawn as five ordered, equal slices of a pentagram, or circle, as shown in the above figure (right). For any choice of three contiguous parts, one (the middle part) will be adjacent to two parts and opposite the other two parts. The ten Napier's Rules are given by
• sine of the middle part = the product of the tangents of the adjacent parts
• sine of the middle part = the product of the cosines of the opposite parts
For an example, starting with the sector containing $a$ we have:
$\sin a=\tan(\pi /2-B)\,\tan b=\cos(\pi /2-c)\,\cos(\pi /2-A)=\cot B\,\tan b=\sin c\,\sin A.$
The full set of rules for the right spherical triangle is (Todhunter,[1] Art.62)
${\begin{alignedat}{4}&{\text{(R1)}}&\qquad \cos c&=\cos a\,\cos b,&\qquad \qquad &{\text{(R6)}}&\qquad \tan b&=\cos A\,\tan c,\\&{\text{(R2)}}&\sin a&=\sin A\,\sin c,&&{\text{(R7)}}&\tan a&=\cos B\,\tan c,\\&{\text{(R3)}}&\sin b&=\sin B\,\sin c,&&{\text{(R8)}}&\cos A&=\sin B\,\cos a,\\&{\text{(R4)}}&\tan a&=\tan A\,\sin b,&&{\text{(R9)}}&\cos B&=\sin A\,\cos b,\\&{\text{(R5)}}&\tan b&=\tan B\,\sin a,&&{\text{(R10)}}&\cos c&=\cot A\,\cot B.\end{alignedat}}$
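All ten rules can be verified at once for a right triangle constructed from two arbitrary legs a and b, with c obtained from R1 and the angles from the cosine rule (an illustrative sketch):

```python
import math

def angle(a, b, c):
    # angle opposite side a, from the spherical cosine rule
    return math.acos((math.cos(a) - math.cos(b) * math.cos(c))
                     / (math.sin(b) * math.sin(c)))

a, b = 0.8, 0.6                            # arbitrary legs
c = math.acos(math.cos(a) * math.cos(b))   # hypotenuse from R1
A, B, C = angle(a, b, c), angle(b, c, a), angle(c, a, b)
assert abs(C - math.pi / 2) < 1e-10        # the construction forces C = pi/2

checks = [
    abs(math.sin(a) - math.sin(A) * math.sin(c)),        # R2
    abs(math.sin(b) - math.sin(B) * math.sin(c)),        # R3
    abs(math.tan(a) - math.tan(A) * math.sin(b)),        # R4
    abs(math.tan(b) - math.tan(B) * math.sin(a)),        # R5
    abs(math.tan(b) - math.cos(A) * math.tan(c)),        # R6
    abs(math.tan(a) - math.cos(B) * math.tan(c)),        # R7
    abs(math.cos(A) - math.sin(B) * math.cos(a)),        # R8
    abs(math.cos(B) - math.sin(A) * math.cos(b)),        # R9
    abs(math.cos(c) - 1 / (math.tan(A) * math.tan(B))),  # R10
]
assert max(checks) < 1e-10
print("R1 holds by construction; R2-R10 verified")
```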
Napier's rules for quadrantal triangles
A quadrantal spherical triangle is defined to be a spherical triangle in which one of the sides subtends an angle of π/2 radians at the centre of the sphere: on the unit sphere the side has length π/2. In the case that the side c has length π/2 on the unit sphere the equations governing the remaining sides and angles may be obtained by applying the rules for the right spherical triangle of the previous section to the polar triangle A'B'C' with sides a',b',c' such that A' = π − a, a' = π − A etc. The results are:
${\begin{alignedat}{4}&{\text{(Q1)}}&\qquad \cos C&=-\cos A\,\cos B,&\qquad \qquad &{\text{(Q6)}}&\qquad \tan B&=-\cos a\,\tan C,\\&{\text{(Q2)}}&\sin A&=\sin a\,\sin C,&&{\text{(Q7)}}&\tan A&=-\cos b\,\tan C,\\&{\text{(Q3)}}&\sin B&=\sin b\,\sin C,&&{\text{(Q8)}}&\cos a&=\sin b\,\cos A,\\&{\text{(Q4)}}&\tan A&=\tan a\,\sin B,&&{\text{(Q9)}}&\cos b&=\sin a\,\cos B,\\&{\text{(Q5)}}&\tan B&=\tan b\,\sin A,&&{\text{(Q10)}}&\cos C&=-\cot a\,\cot b.\end{alignedat}}$
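These rules can likewise be verified in code by constructing a quadrantal triangle as the polar triangle of a right spherical triangle, exactly as in the derivation above (the leg lengths are arbitrary choices):

```python
import math

# Build a quadrantal triangle (side c = pi/2) as the polar triangle of a
# right spherical triangle with legs a0, b0 and right angle C0 = pi/2.
a0, b0 = 0.7, 0.5
c0 = math.acos(math.cos(a0) * math.cos(b0))   # R1
A0 = math.atan2(math.tan(a0), math.sin(b0))   # from R4
B0 = math.atan2(math.tan(b0), math.sin(a0))   # from R5

# Polar triangle: sides are supplements of the angles, and vice versa.
a, b, c = math.pi - A0, math.pi - B0, math.pi / 2
A, B, C = math.pi - a0, math.pi - b0, math.pi - c0

checks = [
    math.isclose(math.cos(C), -math.cos(A) * math.cos(B)),        # Q1
    math.isclose(math.sin(A), math.sin(a) * math.sin(C)),         # Q2
    math.isclose(math.tan(A), math.tan(a) * math.sin(B)),         # Q4
    math.isclose(math.tan(B), -math.cos(a) * math.tan(C)),        # Q6
    math.isclose(math.cos(a), math.sin(b) * math.cos(A)),         # Q8
    math.isclose(math.cos(C), -1 / (math.tan(a) * math.tan(b))),  # Q10
]
print(all(checks))
```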
Five-part rules
Substituting the second cosine rule into the first and simplifying gives:
$\cos a=(\cos a\,\cos c+\sin a\,\sin c\,\cos B)\cos c+\sin b\,\sin c\,\cos A$
$\cos a\,\sin ^{2}c=\sin a\,\cos c\,\sin c\,\cos B+\sin b\,\sin c\,\cos A$
Cancelling the factor of $\sin c$ gives
$\cos a\sin c=\sin a\,\cos c\,\cos B+\sin b\,\cos A$
Similar substitutions in the other cosine and supplementary cosine formulae give a large variety of 5-part rules. They are rarely used.
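The five-part rule derived above can be confirmed numerically, computing the angles from the sides with the cosine rule (the side lengths are arbitrary valid choices):

```python
import math

def angle(x, y, z):
    """Cosine rule: the angle opposite side x in a triangle with sides x, y, z."""
    return math.acos((math.cos(x) - math.cos(y) * math.cos(z))
                     / (math.sin(y) * math.sin(z)))

a, b, c = 0.8, 0.6, 0.9       # arbitrary sides of a valid spherical triangle
A = angle(a, b, c)
B = angle(b, c, a)

# Five-part rule: cos(a) sin(c) = sin(a) cos(c) cos(B) + sin(b) cos(A)
lhs = math.cos(a) * math.sin(c)
rhs = math.sin(a) * math.cos(c) * math.cos(B) + math.sin(b) * math.cos(A)
print(math.isclose(lhs, rhs))
```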
Cagnoli's Equation
Multiplying the first cosine rule by $\cos A$ gives
$\cos a\cos A=\cos b\,\cos c\,\cos A+\sin b\,\sin c-\sin b\,\sin c\,\sin ^{2}A.$
Similarly multiplying the first supplementary cosine rule by $\cos a$ yields
$\cos a\cos A=-\cos B\,\cos C\,\cos a+\sin B\,\sin C-\sin B\,\sin C\,\sin ^{2}a.$
Subtracting the two and noting that it follows from the sine rules that $\sin b\,\sin c\,\sin ^{2}A=\sin B\,\sin C\,\sin ^{2}a$ produces Cagnoli's equation
$\sin b\,\sin c+\cos b\,\cos c\,\cos A=\sin B\,\sin C-\cos B\,\cos C\,\cos a$
which is a relation between the six parts of the spherical triangle.[9]
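Cagnoli's equation can be checked on a concrete triangle; the angles are derived from arbitrarily chosen sides via the cosine rule:

```python
import math

def angle(x, y, z):
    """Cosine rule: the angle opposite side x in a triangle with sides x, y, z."""
    return math.acos((math.cos(x) - math.cos(y) * math.cos(z))
                     / (math.sin(y) * math.sin(z)))

a, b, c = 0.8, 0.6, 0.9       # arbitrary sides of a valid spherical triangle
A, B, C = angle(a, b, c), angle(b, c, a), angle(c, a, b)

# Cagnoli's equation relating all six parts
lhs = math.sin(b) * math.sin(c) + math.cos(b) * math.cos(c) * math.cos(A)
rhs = math.sin(B) * math.sin(C) - math.cos(B) * math.cos(C) * math.cos(a)
print(math.isclose(lhs, rhs))
```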
Solution of triangles
Main article: Solution of triangles § Solving spherical triangles
Oblique triangles
The solution of triangles is the principal purpose of spherical trigonometry: given three, four or five elements of the triangle, determine the others. The case of five given elements is trivial, requiring only a single application of the sine rule. For four given elements there is one non-trivial case, which is discussed below. For three given elements there are six cases: three sides, two sides and an included or opposite angle, two angles and an included or opposite side, or three angles. (The last case has no analogue in planar trigonometry.) No single method solves all cases. The figure below shows the seven non-trivial cases: in each case the given sides are marked with a cross-bar and the given angles with an arc. (The given elements are also listed below the triangle). In the summary notation here such as ASA, A refers to a given angle and S refers to a given side, and the sequence of A's and S's in the notation refers to the corresponding sequence in the triangle.
• Case 1: three sides given (SSS). The cosine rule may be used to give the angles A, B, and C but, to avoid ambiguities, the half angle formulae are preferred.
• Case 2: two sides and an included angle given (SAS). The cosine rule gives a and then we are back to Case 1.
• Case 3: two sides and an opposite angle given (SSA). The sine rule gives C and then we have Case 7. There are either one or two solutions.
• Case 4: two angles and an included side given (ASA). The four-part cotangent formulae for sets (cBaC) and (BaCb) give c and b, then A follows from the sine rule.
• Case 5: two angles and an opposite side given (AAS). The sine rule gives b and then we have Case 7 (rotated). There are either one or two solutions.
• Case 6: three angles given (AAA). The supplemental cosine rule may be used to give the sides a, b, and c but, to avoid ambiguities, the half-side formulae are preferred.
• Case 7: two angles and two opposite sides given (SSAA). Use Napier's analogies for a and A; or, use Case 3 (SSA) or case 5 (AAS).
The solution methods listed here are not the only possible choices: many others are possible. In general it is better to choose methods that avoid taking an inverse sine because of the possible ambiguity between an angle and its supplement. The use of half-angle formulae is often advisable because half-angles will be less than π/2 and therefore free from ambiguity. The article Solution of triangles#Solving spherical triangles presents variants on these methods with a slightly different notation.
There is a full discussion of the solution of oblique triangles in Todhunter[1] (Chap. VI). See also the discussion in Ross.[10]
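Case 1 (SSS) above can be sketched in code. The following Python fragment (the side lengths are arbitrary choices) uses the half-angle formula tan(A/2) = √(sin(s−b) sin(s−c) / (sin s sin(s−a))), s = (a+b+c)/2, and cross-checks one angle against the cosine rule:

```python
import math

def solve_sss(a, b, c):
    """Angles of a spherical triangle from its three sides,
    via the half-angle formulae (preferred for numerical work)."""
    s = (a + b + c) / 2
    def ang(x, y, z):   # angle opposite side x
        t = math.sqrt(math.sin(s - y) * math.sin(s - z)
                      / (math.sin(s) * math.sin(s - x)))
        return 2 * math.atan(t)
    return ang(a, b, c), ang(b, c, a), ang(c, a, b)

a, b, c = 0.8, 0.6, 0.9
A, B, C = solve_sss(a, b, c)

# Cross-check one angle against the cosine rule
A_cos = math.acos((math.cos(a) - math.cos(b) * math.cos(c))
                  / (math.sin(b) * math.sin(c)))
print(math.isclose(A, A_cos))
```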
Solution by right-angled triangles
Another approach is to split the triangle into two right-angled triangles. For example, take the Case 3 example where b, c, B are given. Construct the great circle from A that is normal to the side BC at the point D. Use Napier's rules to solve the triangle ABD: use c and B to find the sides AD, BD and the angle BAD. Then use Napier's rules to solve the triangle ACD: that is use AD and b to find the side DC and the angles C and DAC. The angle A and side a follow by addition.
Numerical considerations
Not all of the rules obtained are numerically robust in extreme examples, for example when an angle approaches zero or π. Problems and solutions may have to be examined carefully, particularly when writing code to solve an arbitrary triangle.
Area and spherical excess
See also: Solid angle and Geodesic polygon
Consider an N-sided spherical polygon and let An denote the n-th interior angle. The area of such a polygon is given by (Todhunter,[1] Art.99)
${\text{Area of polygon (on the unit sphere)}}\equiv E_{N}=\left(\sum _{n=1}^{N}A_{n}\right)-(N-2)\pi .$
For the case of a triangle (N = 3) this reduces to Girard's theorem
${\text{Area of triangle (on the unit sphere)}}\equiv E=E_{3}=A+B+C-\pi ,$
where E is the amount by which the sum of the angles exceeds π radians. The quantity E is called the spherical excess of the triangle. This theorem is named after its author, Albert Girard.[11] An earlier proof was derived, but not published, by the English mathematician Thomas Harriot. On a sphere of radius R both of the above area expressions are multiplied by R2. The definition of the excess is independent of the radius of the sphere.
The converse result may be written as
$A+B+C=\pi +{\frac {4\pi \times {\text{Area of triangle}}}{\text{Area of the sphere}}}.$
Since the area of a triangle cannot be negative, the spherical excess is always positive. It is not necessarily small, because the sum of the angles may attain 5π (3π for proper angles). For example, an octant of a sphere is a spherical triangle with three right angles, so that the excess is π/2. In practical applications it is often small: for example, the triangles of geodetic survey typically have a spherical excess much less than 1' of arc. (See Rapp,[12] Clarke,[13] and Legendre's theorem on spherical triangles.) On the Earth the excess of an equilateral triangle with sides 21.3 km (and area 393 km²) is approximately 1 arc second.
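The polygon area formula and Girard's theorem can be illustrated with the octant example:

```python
import math

def spherical_polygon_area(angles):
    """Area on the unit sphere of an N-gon with the given interior angles:
    E_N = sum(A_n) - (N - 2) * pi."""
    return sum(angles) - (len(angles) - 2) * math.pi

# Octant of the sphere: a triangle with three right angles.
# Its excess is pi/2, one eighth of the sphere's total area 4*pi.
E = spherical_polygon_area([math.pi / 2] * 3)
print(math.isclose(E, math.pi / 2))
```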
There are many formulae for the excess. For example, Todhunter[1] (Art. 101–103) gives ten examples including that of L'Huilier:
$\tan {\tfrac {1}{4}}E={\sqrt {\tan {\tfrac {1}{2}}s\,\tan {\tfrac {1}{2}}(s-a)\,\tan {\tfrac {1}{2}}(s-b)\,\tan {\tfrac {1}{2}}(s-c)}}$
where $s=(a+b+c)/2$.
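L'Huilier's formula, which uses only the sides, can be checked against Girard's theorem, with the angles obtained from the cosine rule (side lengths arbitrary):

```python
import math

a, b, c = 0.8, 0.6, 0.9       # arbitrary sides of a valid spherical triangle
s = (a + b + c) / 2

# L'Huilier's formula: excess from the sides alone
t = math.sqrt(math.tan(s / 2) * math.tan((s - a) / 2)
              * math.tan((s - b) / 2) * math.tan((s - c) / 2))
E_lhuilier = 4 * math.atan(t)

# Girard's theorem: excess from the angles (cosine rule for each angle)
def angle(x, y, z):
    return math.acos((math.cos(x) - math.cos(y) * math.cos(z))
                     / (math.sin(y) * math.sin(z)))

E_girard = angle(a, b, c) + angle(b, c, a) + angle(c, a, b) - math.pi
print(math.isclose(E_lhuilier, E_girard))
```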
Because some triangles are badly characterized by their edges (e.g., if $ a=b\approx {\frac {1}{2}}c$), it is often better to use the formula for the excess in terms of two edges and their included angle
$\tan {\tfrac {1}{2}}E={\frac {\tan {\frac {1}{2}}a\tan {\frac {1}{2}}b\sin C}{1+\tan {\frac {1}{2}}a\tan {\frac {1}{2}}b\cos C}}.$
When triangle $ABC$ is a right triangle with right angle at $C,$ then $\cos C=0$ and $\sin C=1,$ so this reduces to
$\tan {\tfrac {1}{2}}E=\tan {\tfrac {1}{2}}a\tan {\tfrac {1}{2}}b.$
Angle deficit is defined similarly for hyperbolic geometry.
From latitude and longitude
The spherical excess of a spherical quadrangle bounded by the equator, the two meridians of longitudes $\lambda _{1}$ and $\lambda _{2},$ and the great-circle arc between two points with longitude and latitude $(\lambda _{1},\varphi _{1})$ and $(\lambda _{2},\varphi _{2})$ is
$\tan {\tfrac {1}{2}}E_{4}={\frac {\sin {\tfrac {1}{2}}(\varphi _{2}+\varphi _{1})}{\cos {\tfrac {1}{2}}(\varphi _{2}-\varphi _{1})}}\tan {\tfrac {1}{2}}(\lambda _{2}-\lambda _{1}).$
This result is obtained from one of Napier's analogies. In the limit where $\varphi _{1},\varphi _{2},\lambda _{2}-\lambda _{1}$ are all small, this reduces to the familiar trapezoidal area, $ E_{4}\approx {\frac {1}{2}}(\varphi _{2}+\varphi _{1})(\lambda _{2}-\lambda _{1})$.
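For a small quadrangle the exact excess and the trapezoidal approximation agree closely, as a quick numerical check shows (the coordinates are arbitrary small values in radians):

```python
import math

def quad_excess(lam1, phi1, lam2, phi2):
    """Excess (area on the unit sphere) of the quadrangle bounded by the
    equator, the meridians lam1 and lam2, and the great-circle arc through
    (lam1, phi1) and (lam2, phi2)."""
    t = (math.sin((phi2 + phi1) / 2) / math.cos((phi2 - phi1) / 2)
         * math.tan((lam2 - lam1) / 2))
    return 2 * math.atan(t)

# Small quadrangle: compare with the trapezoidal approximation
lam1, phi1, lam2, phi2 = 0.0, 0.010, 0.02, 0.014
E4 = quad_excess(lam1, phi1, lam2, phi2)
approx = (phi2 + phi1) / 2 * (lam2 - lam1)
print(E4, approx)
```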
The area of a polygon can be calculated from individual quadrangles of the above type, from (analogously) individual triangles bounded by a segment of the polygon and two meridians,[14] by a line integral with Green's theorem,[15] or via an equal-area projection as commonly done in GIS. The other algorithms can still be used with the side lengths calculated using a great-circle distance formula.
See also
• Air navigation
• Celestial navigation
• Ellipsoidal trigonometry
• Great-circle distance or spherical distance
• Lenart sphere
• Schwarz triangle
• Spherical geometry
• Spherical polyhedron
• Triangulation (surveying)
References
1. Todhunter, I. (1886). Spherical Trigonometry (5th ed.). MacMillan. Archived from the original on 2020-04-14. Retrieved 2013-07-28.
2. Clarke, Alexander Ross (1880). Geodesy. Oxford: Clarendon Press. OCLC 2484948 – via the Internet Archive.
3. Smart, W.M. (1977). Text-Book on Spherical Astronomy (6th ed.). Cambridge University Press. Chapter 1 – via the Internet Archive.
4. Weisstein, Eric W. "Spherical Trigonometry". MathWorld. Retrieved 8 April 2018.
5. Banerjee, Sudipto (2004), "Revisiting Spherical Trigonometry with Orthogonal Projectors", The College Mathematics Journal, Mathematical Association of America, 35 (5): 375–381, doi:10.1080/07468342.2004.11922099, JSTOR 4146847, S2CID 122277398, archived from the original on 2020-07-22, retrieved 2016-01-10
6. Todhunter, Isaac (1873). "Note on the history of certain formulæ in spherical trigonometry". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 45 (298): 98–100. doi:10.1080/14786447308640820.
7. Delambre, J. B. J. (1807). Connaissance des Tems 1809. p. 445. Archived from the original on 2020-07-22. Retrieved 2016-05-14.
8. Napier, J (1614). Mirifici Logarithmorum Canonis Constructio. p. 50. Archived from the original on 2013-04-30. Retrieved 2016-05-14. An 1889 translation The Construction of the Wonderful Canon of Logarithms is available as an e-book from Abe Books Archived 2020-03-03 at the Wayback Machine
9. Chauvenet, William (1867). A Treatise on Plane and Spherical Trigonometry. Philadelphia: J. B. Lippincott & Co. p. 165. Archived from the original on 2021-07-11. Retrieved 2021-07-11.
10. Ross, Debra Anne. Master Math: Trigonometry, Career Press, 2002.
11. Another proof of Girard's theorem may be found in an archived page at the Wayback Machine (Archived 2012-10-31).
12. Rapp, Richard H. (1991). Geometric Geodesy Part I (PDF). p. 89 (pdf page 99).
13. Clarke, Alexander Ross (1880). Geodesy. Clarendon Press. (Chapters 2 and 9). Recently republished at Forgotten Books Archived 2020-10-03 at the Wayback Machine
14. Chamberlain, Robert G.; Duquette, William H. (17 April 2007). Some algorithms for polygons on a sphere. Association of American Geographers Annual Meeting. NASA JPL. Archived from the original on 22 July 2020. Retrieved 7 August 2020.
15. "Surface area of polygon on sphere or ellipsoid – MATLAB areaint". www.mathworks.com. Archived from the original on 2021-05-01. Retrieved 2021-05-01.
External links
• Weisstein, Eric W. "Spherical Trigonometry". MathWorld. a more thorough list of identities, with some derivation
• Weisstein, Eric W. "Spherical Triangle". MathWorld. a more thorough list of identities, with some derivation
• TriSph A free software to solve the spherical triangles, configurable to different practical applications and configured for gnomonic
• "Revisiting Spherical Trigonometry with Orthogonal Projectors" by Sudipto Banerjee. The paper derives the spherical law of cosines and law of sines using elementary linear algebra and projection matrices.
• "A Visual Proof of Girard's Theorem". Wolfram Demonstrations Project. by Okay Arik
• "The Book of Instruction on Deviant Planes and Simple Planes", a manuscript in Arabic that dates back to 1740 and talks about spherical trigonometry, with diagrams
• Some Algorithms for Polygons on a Sphere Robert G. Chamberlain, William H. Duquette, Jet Propulsion Laboratory. The paper develops and explains many useful formulae, perhaps with a focus on navigation and cartography.
• Online computation of spherical triangles
Spherical category
In category theory, a branch of mathematics, a spherical category is a pivotal category (a monoidal category with traces) in which left and right traces coincide.[1] Spherical fusion categories give rise to a family of three-dimensional topological state sum models (a particular formulation of a topological quantum field theory), the Turaev-Viro model, or rather Turaev-Viro-Barrett-Westbury model.[2]
References
1. John W. Barrett, Bruce W. Westbury (1999). "Spherical Categories". Advances in Mathematics. 143 (2): 357–375. arXiv:hep-th/9310164. doi:10.1006/aima.1998.1800.{{cite journal}}: CS1 maint: uses authors parameter (link)
2. "Turaev-Viro model". nLab. Retrieved 7 August 2017.
Spherical code
In geometry and coding theory, a spherical code with parameters (n,N,t) is a set of N points on the unit hypersphere in n dimensions for which the dot product of unit vectors from the origin to any two points is less than or equal to t. The kissing number problem may be stated as the problem of finding the maximal N for a given n for which a spherical code with parameters (n,N,1/2) exists. The Tammes problem may be stated as the problem of finding a spherical code with minimal t for given n and N.
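The defining condition is easy to verify programmatically. The sketch below checks a candidate (n, N, t) spherical code; the octahedron vertices, whose pairwise dot products are 0 or −1, form a (3, 6, 0) code:

```python
import math
from itertools import combinations

def is_spherical_code(points, t, tol=1e-12):
    """Check an (n, N, t) spherical code: unit vectors whose pairwise
    dot products are all <= t."""
    for p in points:
        if not math.isclose(sum(x * x for x in p), 1.0):
            return False
    return all(sum(x * y for x, y in zip(p, q)) <= t + tol
               for p, q in combinations(points, 2))

# The six vertices of the octahedron: pairwise dot products are 0 or -1.
octa = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
        (0, -1, 0), (0, 0, 1), (0, 0, -1)]
print(is_spherical_code(octa, 0))      # True: a (3, 6, 0) spherical code
print(is_spherical_code(octa, -0.5))   # False: orthogonal pairs exceed t
```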
External links
• Weisstein, Eric W. "Spherical code". MathWorld.
• A library of putatively optimal spherical codes
Spherical contact distribution function
In probability and statistics, a spherical contact distribution function, first contact distribution function,[1] or empty space function[2] is a mathematical function that is defined in relation to mathematical objects known as point processes, which are types of stochastic processes often used as mathematical models of physical phenomena representable as randomly positioned points in time, space or both.[1][3] More specifically, a spherical contact distribution function is defined as the probability distribution of the radius of a sphere when it first encounters or makes contact with a point in a point process. This function can be contrasted with the nearest neighbour function, which is defined in relation to some point in the point process as being the probability distribution of the distance from that point to its nearest neighbouring point in the same point process.
The spherical contact function is also referred to as the contact distribution function,[2] but some authors[1] define the contact distribution function in relation to a more general set, and not simply a sphere as in the case of the spherical contact distribution function.
Spherical contact distribution functions are used in the study of point processes[2][3][4] as well as the related fields of stochastic geometry[1] and spatial statistics,[2][5] which are applied in various scientific and engineering disciplines such as biology, geology, physics, and telecommunications.[1][3][6][7]
Point process notation
Point processes are mathematical objects that are defined on some underlying mathematical space. Since these processes are often used to represent collections of points randomly scattered in space, time or both, the underlying space is usually d-dimensional Euclidean space denoted here by $\textstyle {\textbf {R}}^{d}$, but they can be defined on more abstract mathematical spaces.[4]
Point processes have a number of interpretations, which is reflected by the various types of point process notation.[1][7] For example, if a point $\textstyle x$ belongs to or is a member of a point process, denoted by $\textstyle {N}$, then this can be written as:[1]
$\textstyle x\in {N},$
and represents the point process being interpreted as a random set. Alternatively, the number of points of $\textstyle {N}$ located in some Borel set $\textstyle B$ is often written as:[1][5][6]
$\textstyle {N}(B),$
which reflects a random measure interpretation for point processes. These two notations are often used in parallel or interchangeably.[1][5][6]
Definitions
Spherical contact distribution function
The spherical contact distribution function is defined as:
$H_{s}(r)=1-P({N}(b(o,r))=0).$
where b(o,r) is the ball of radius r centred at the origin o. In other words, the spherical contact distribution function is the probability that a ball of radius r centred at the origin contains at least one point of the point process; equivalently, it is one minus the probability that this ball is empty.
Contact distribution function
The spherical contact distribution function can be generalized for sets other than the (hyper-)sphere in $\textstyle {\textbf {R}}^{d}$. For some Borel set $\textstyle B$ with positive volume (or more specifically, Lebesgue measure), the contact distribution function (with respect to $\textstyle B$) for $\textstyle r\geq 0$ is defined by the equation:[1]
$H_{B}(r)=1-P({N}(rB)=0).$
Examples
Poisson point process
For a Poisson point process $\textstyle {N}$ on $\textstyle {\textbf {R}}^{d}$ with intensity measure $\textstyle \Lambda $ this becomes
$H_{s}(r)=1-e^{-\Lambda (b(o,r))},$
which for the homogeneous case becomes
$H_{s}(r)=1-e^{-\lambda |b(o,r)|},$
where $\textstyle |b(o,r)|$ denotes the volume (or more specifically, the Lebesgue measure) of the ball of radius $\textstyle r$. In the plane $\textstyle {\textbf {R}}^{2}$, this expression simplifies to
$H_{s}(r)=1-e^{-\lambda \pi r^{2}}.$
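The planar closed form can be checked by simulation: generate a homogeneous Poisson process on a square containing the ball b(o, r), and estimate the probability that the ball is non-empty. The parameter values below are arbitrary choices:

```python
import math
import random

def poisson_sample(mean, rng):
    """Knuth's inversion method for a Poisson random variate."""
    p = math.exp(-mean)
    cum, k, u = p, 0, rng.random()
    while u > cum:
        k += 1
        p *= mean / k
        cum += p
    return k

rng = random.Random(0)
lam, r, L, trials = 1.0, 0.5, 4.0, 20000
hit = 0
for _ in range(trials):
    # Homogeneous Poisson process of intensity lam on [-L/2, L/2]^2:
    # a Poisson number of points placed uniformly at random.
    n = poisson_sample(lam * L * L, rng)
    pts = ((rng.uniform(-L / 2, L / 2), rng.uniform(-L / 2, L / 2))
           for _ in range(n))
    if any(x * x + y * y <= r * r for x, y in pts):
        hit += 1          # the ball b(o, r) is non-empty

empirical = hit / trials
theory = 1 - math.exp(-lam * math.pi * r ** 2)   # H_s(r) in the plane
print(empirical, theory)
```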
Relationship to other functions
Nearest neighbour function
In general, the spherical contact distribution function and the corresponding nearest neighbour function are not equal. However, these two functions are identical for Poisson point processes.[1] In fact, this characteristic is due to a unique property of Poisson processes and their Palm distributions, which forms part of the result known as the Slivnyak-Mecke[6] or Slivnyak's theorem.[2]
J-function
The fact that the spherical contact distribution function Hs(r) and nearest neighbour function Do(r) are identical for the Poisson point process can be used to statistically test if point process data appears to be that of a Poisson point process. For example, in spatial statistics the J-function is defined for all r ≥ 0 as:[1]
$J(r)={\frac {1-D_{o}(r)}{1-H_{s}(r)}}$
For a Poisson point process the J-function is identically J(r) = 1, which is why it is used as a non-parametric test of whether data behave as though drawn from a Poisson process. It is, however, thought possible to construct non-Poisson point processes for which J(r) = 1,[8] but such counterexamples are viewed as somewhat 'artificial' by some and exist for other statistical tests.[9]
More generally, the J-function serves as one way (others include using factorial moment measures[2]) to measure the interaction between points in a point process.[1]
See also
• Nearest neighbour function
• Factorial moment measure
• Moment measure
References
1. D. Stoyan, W. S. Kendall, J. Mecke, and L. Ruschendorf. Stochastic geometry and its applications, edition 2. Wiley Chichester, 1995.
2. A. Baddeley, I. Bárány, and R. Schneider. Spatial point processes and their applications. Stochastic Geometry: Lectures given at the CIME Summer School held in Martina Franca, Italy, September 13--18, 2004, pages 1--75, 2007.
3. D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes. Vol. I. Probability and its Applications (New York). Springer, New York, second edition, 2003.
4. D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes. Vol. {II}. Probability and its Applications (New York). Springer, New York, second edition, 2008.
5. J. Moller and R. P. Waagepetersen. Statistical inference and simulation for spatial point processes. CRC Press, 2003.
6. F. Baccelli and B. Błaszczyszyn. Stochastic Geometry and Wireless Networks, Volume I – Theory, volume 3, No 3-4 of Foundations and Trends in Networking. NoW Publishers, 2009.
7. F. Baccelli and B. Błaszczyszyn. Stochastic Geometry and Wireless Networks, Volume II – Applications, volume 4, No 1-2 of Foundations and Trends in Networking. NoW Publishers, 2009.
8. Bedford, T, Van den Berg, J (1997). "A remark on the Van Lieshout and Baddeley J-function for point processes". Advances in Applied Probability. JSTOR. 29 (1): 19–25. doi:10.2307/1427858. JSTOR 1427858. S2CID 122029903.{{cite journal}}: CS1 maint: multiple names: authors list (link)
9. Foxall, Rob, Baddeley, Adrian (2002). "Nonparametric measures of association between a spatial point process and a random set, with geological applications". Journal of the Royal Statistical Society, Series C. Wiley Online Library. 51 (2): 165–182. doi:10.1111/1467-9876.00261. S2CID 744061.{{cite journal}}: CS1 maint: multiple names: authors list (link)
Spherical conic
In mathematics, a spherical conic or sphero-conic is a curve on the sphere, the intersection of the sphere with a concentric elliptic cone. It is the spherical analog of a conic section (ellipse, parabola, or hyperbola) in the plane, and as in the planar case, a spherical conic can be defined as the locus of points the sum or difference of whose great-circle distances to two foci is constant.[1] By taking the antipodal point to one focus, every spherical ellipse is also a spherical hyperbola, and vice versa. As a space curve, a spherical conic is a quartic, though its orthogonal projections in three principal axes are planar conics. Like planar conics, spherical conics also satisfy a "reflection property": the great-circle arcs from the two foci to any point on the conic have the tangent and normal to the conic at that point as their angle bisectors.
Many theorems about conics in the plane extend to spherical conics. For example, Graves's theorem and Ivory's theorem about confocal conics can also be proven on the sphere; see confocal conic sections about the planar versions.[2]
Just as the arc length of an ellipse is given by an incomplete elliptic integral of the second kind, the arc length of a spherical conic is given by an incomplete elliptic integral of the third kind.[3]
An orthogonal coordinate system in Euclidean space based on concentric spheres and quadratic cones is called a conical or sphero-conical coordinate system. When restricted to the surface of a sphere, the remaining coordinates are confocal spherical conics. Sometimes this is called an elliptic coordinate system on the sphere, by analogy to a planar elliptic coordinate system. Such coordinates can be used in the computation of conformal maps from the sphere to the plane.[4]
Applications
The solution of the Kepler problem in a space of uniform positive curvature is a spherical conic, with a potential proportional to the cotangent of geodesic distance.[5]
Because it preserves distances to a pair of specified points, the two-point equidistant projection maps the family of confocal conics on the sphere onto two families of confocal ellipses and hyperbolae in the plane.[6]
If a portion of the Earth is modeled as spherical, e.g. using the osculating sphere at a point on an ellipsoid of revolution, the hyperbolae used in hyperbolic navigation (which determines position based on the difference in received signal timing from fixed radio transmitters) are spherical conics.[7]
Notes
1. Fuss, Nicolas (1788). "De proprietatibus quibusdam ellipseos in superficie sphaerica descriptae" [On certain properties of ellipses described on a spherical surface]. Nova Acta academiae scientiarum imperialis Petropolitanae (in Latin). 3: 90–99.
2. Stachel, Hellmuth; Wallner, Johannes (2004). "Ivory's theorem in hyperbolic spaces" (PDF). Siberian Mathematical Journal. 45 (4): 785–794.
3. Gudermann, Christoph (1835). "Integralia elliptica tertiae speciei reducendi methodus simplicior, quae simul ad ipsorum applicationem facillimam et computum numericum expeditum perducit. Sectionum conico–sphaericarum quadratura et rectificatio" [A simpler method of reducing elliptic integrals of the third kind, providing easy application and convenient numerical computation: Quadrature and rectification of conico-spherical sections]. Crelle's Journal. 14: 169–181.
Booth, James (1844). "IV. On the rectification and quadrature of the spherical ellipse". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 25 (163): 18–38. doi:10.1080/14786444408644925.
4. Guyou, Émile (1887). "Nouveau système de projection de la sphère: Généralisation de la projection de Mercator" [New sphere projection system: Generalization of the Mercator projection]. Annales Hydrographiques. Ser. 2 (in French). 9: 16–35.
Adams, Oscar Sherman (1925). Elliptic functions applied to conformal world maps (PDF). US Government Printing Office. US Coast and Geodetic Survey Special Publication No. 112.
5. Higgs, Peter W. (1979). "Dynamical symmetries in a spherical geometry I". Journal of Physics A: Mathematical and General. 12 (3): 309–323. doi:10.1088/0305-4470/12/3/006.
Kozlov, Valery Vasilevich; Harin, Alexander O. (1992). "Kepler's problem in constant curvature spaces". Celestial Mechanics and Dynamical Astronomy. 54 (4): 393–399. doi:10.1007/BF00049149.
Cariñena, José F.; Rañada, Manuel F.; Santander, Mariano (2005). "Central potentials on spaces of constant curvature: The Kepler problem on the two-dimensional sphere S2 and the hyperbolic plane H2". Journal of Mathematical Physics. 46 (5): 052702. arXiv:math-ph/0504016. doi:10.1063/1.1893214.
Arnold, Vladimir; Kozlov, Valery Vasilevich; Neishtadt, Anatoly I. (2007). Mathematical Aspects of Classical and Celestial Mechanics. doi:10.1007/978-3-540-48926-9.
Diacu, Florin (2013). "The curved N-body problem: risks and rewards" (PDF). Mathematical Intelligencer. 35 (3): 24–33.
6. Cox, Jacques-François (1946). "The doubly equidistant projection". Bulletin Géodésique. 2 (1): 74–76. doi:10.1007/bf02521618.
7. Razin, Sheldon (1967). "Explicit (Noniterative) Loran Solution". Navigation. 14 (3): 265–269. doi:10.1002/j.2161-4296.1967.tb02208.x.
Freiesleben, Hans-Christian (1976). "Spherical hyperbolae and ellipses". The Journal of Navigation. 29 (2): 194–199. doi:10.1017/S0373463300030186.
References
• Chasles, Michel (1831). Mémoire de géométrie sur les propriétés générales des coniques sphériques [Geometrical memoir on the general properties of spherical conics] (in French). L'Académie de Bruxelles. English edition:
— (1841). Two geometrical memoirs on the general properties of cones of the second degree, and on the spherical conics. Translated by Graves, Charles. Grant and Bolton.
• Chasles, Michel (1860). "Résumé d'une théorie des coniques sphériques homofocales" [Summary of a theory of confocal spherical conics]. Comptes rendus de l'Académie des Sciences (in French). 50: 623–633. Republished in Journal de mathématiques pures et appliquées. Ser. 2. 5: 425-454. PDF from mathdoc.fr.
• Glaeser, Georg; Stachel, Hellmuth; Odehnal, Boris (2016). "10.1 Spherical conics". The Universe of Conics: From the ancient Greeks to 21st century developments. Springer. pp. 436–467. doi:10.1007/978-3-662-45450-3_10.
• Izmestiev, Ivan (2019). "Spherical and hyperbolic conics". Eighteen Essays in Non-Euclidean Geometry. European Mathematical Society. pp. 262–320. doi:10.4171/196-1/15.
• Salmon, George (1927). "X. Cones and Sphero-Conics". A Treatise on the Analytic Geometry of Three Dimensions (7th ed.). Chelsea. pp. 249–267.
• Story, William Edward (1882). "On non-Euclidean properties of conics" (PDF). American Journal of Mathematics. 5 (1): 358–381. doi:10.2307/2369551.
• Sykes, Gerrit Smith (1877). "Spherical Conics". Proceedings of the American Academy of Arts and Sciences. 13: 375–395. doi:10.2307/25138501.
• Tranacher, Harald (2006). Sphärische Kegelschnitte – didaktisch aufbereitet [Spherical conics – didactically prepared] (PDF) (Thesis) (in German). Technischen Universität Wien.